Diffpack

Diffpack is a programming environment for developing simulation software for scientific and engineering applications. Its main focus is the numerical modeling and solution of partial differential equations, in particular by the finite element method and the finite difference method (the finite volume method is also supported to some extent).

Initial release: 1991. Written in: C++, Python, Perl. Operating system: Linux, Unix, Mac OS X, Windows. Type: Scientific simulation software. License: proprietary (public domain software until 1997). Website: www.diffpack.de

Features
The Diffpack software consists of a family of C++ libraries for general tasks related to the numerical solution of partial differential equations, plus a set of Perl and Python scripts that ease the development of simulation programs and problem-solving environments for scientific or engineering research. The package was one of the first to explore object-oriented programming and the C++ language for advanced, high-performance computing.

History
Diffpack has been actively developed since 1991, with main contributions from the University of Oslo and the research institutes SINTEF and Simula Research Laboratory. The initiators and main contributors to Diffpack in the 1990s were Hans Petter Langtangen and Are Magnus Bruaset. Version 1.0 of the software was released in the public domain in 1995, with a new version in 1997. The Norwegian company Numerical Objects AS took over the rights to Diffpack in 1997 and commercialized the product. In 2003, the German company inuTech GmbH purchased Diffpack and is now the principal maintainer and developer of the software.

Adoption
Past and present Diffpack customers include AREVA NP, Air Force Research Laboratory, Robert Bosch GmbH, Cambridge University, Canon, CEA, CalCom, DaimlerChrysler, Furukawa, Harvard University, Intel, Mitsubishi, NASA, Nestle, Nippon Steel, Shell, Siemens, Stanford University, Statoil, Veritas, VAI GmbH, and Xerox. Diffpack applications have been built in diverse areas such as oil and gas, mechanical engineering, telecommunication, medicine and finance. The customer activities range from simple prototype applications to projects involving several man-years of simulator development.

See also
• List of finite element software packages
• List of numerical analysis software

References
• Diffpack website
• Computational Partial Differential Equations – Numerical Methods and Diffpack Programming (book)
• inuTech GmbH
Wikipedia
Johannes Frischauf

Johannes Frischauf (17 September 1837 in Vienna – 7 January 1924 in Graz) was an Austrian mathematician, physicist, astronomer, geodesist and alpinist.

Nationality: Austrian. Fields: Mathematics.

Life and work
Frischauf passed the Matura at the Academic Gymnasium in Vienna and from 1857 studied mathematics, physics and astronomy at the University of Vienna, as well as geodesy, chemistry and mechanics at the Technische Hochschule Vienna. He obtained his doctorate in 1864 and became Privatdozent for mathematics at the University of Vienna and assistant at the university's observatory. In 1863 he was habilitated in mathematics. Starting in 1863, he was professor of pure and applied mathematics at the University of Graz. He worked together with Ludwig Boltzmann. Frischauf developed a new method of map design and wrote textbooks on arithmetic and geometry – for instance, in 1872 and 1876 he wrote summaries of the then current knowledge about non-Euclidean geometry (which he called "absolute geometry"). In 1885 he was elected a member of the Leopoldina.[1][2][3][4][5]

Starting in 1868, Frischauf pioneered the touristic development of the Sannthaler and Steiner Alps by opening up trails and huts. The roads over the Paulitschsattel and the Sulzbach–Leutsch connection were built on his initiative. Together with Franz von Juraschek and Mathias Spreiz, he was the first to climb the Admonter Reichenstein. At a time of violent national conflicts, Frischauf defended the view that alpinism should not be subordinated to nationalism, religion or political views. He participated in the foundation of the Croatian Mountaineering Association. Frischauf's funerary urn was placed on the Scheichenspitze in the Dachstein Mountains. His papers are held at the University of Graz.

Works (selection)
• Über die Bahn der Asia. In: Sitz. Berichte Kais. Akad. Wiss. Wien, Mat.-nat. Cl. Band 45, 1862, pp. 435–442.
• Bahnbestimmung des Planeten 67 Asia. In: Sitz. Berichte Kais. Akad. Wiss. Wien, Mat.-nat. Cl. Band 53, 1866, pp. 96–141.
• Einleitung in die analytische Geometrie. Leuschner & Lubensky, Graz 1871.
• Zum Rechnen mit unvollständigen Zahlen. In: Zeitschrift math. naturw. Unterr. Band 26, 1895, pp. 161–172.
• Beiträge zur Landesaufnahme und Kartographie des Erdsphäroids. B. G. Teubner, Leipzig 1919.
• Hochthor bei Johnsbach. In: Jahrbuch Steir. Gebirgsverein. 1873, p. 41.
• Reichenstein bei Admont. In: Jahrbuch Steir. Gebirgsverein. 1873, p. 54.
• Elemente der absoluten Geometrie. Leipzig 1876.
• Die Sannthaler Alpen. Brockhausen und Bräuser, Wien 1877.
• Ein Ausflug auf den Monte Baldo. Wien 1883 (Wiener Touristen-Führer 11).
• Das Uskoken-Gebirge. In: Zeitschrift DÖAV (1890), pp. 474–484.
• Krakau bei Murau. Steirische Sommerfrischen, Band 1, Leuschner & Lubensky, Graz 1896, hrsg. vom Steirischen Gebirgsvereine.

References
1. Godfried Oliwa (1961), "Frischauf, Johannes", Neue Deutsche Biographie (in German), vol. 5, Berlin: Duncker & Humblot, pp. 618–619.
2. "Frischauf, Johannes". In: Österreichisches Biographisches Lexikon 1815–1950 (ÖBL). Vol. 1, Austrian Academy of Sciences, Vienna 1957, p. 370.
3. J. P. Snyder: A Comparison of Pseudocylindrical Map Projections. In: The American Cartographer 1977, Vol. 4, No. 1, pp. 59–81.
4. Robert Tichy, Johannes Wallner: Johannes Frischauf – eine schillernde Persönlichkeit in Mathematik und Alpinismus. In: Internat. Math. Nachrichten. Nr. 210 (2009), pp. 21–32 (with bibliography).
5. Berthold Sutter: Die Badenischen Sprachenverordnungen von 1897. Ihre Genesis und ihre Auswirkungen vornehmlich auf die innerösterreichischen Alpenländer. Band 2. Böhlau, Graz 1965, pp. 176f, 249.

External links
• Literature by and about Johannes Frischauf in the German National Library catalogue
• Martin Fürnkranz: Bibliographie
• Publications of J. Frischauf in Astrophysics Data System
Wikipedia
Continuous-time random walk

In mathematics, a continuous-time random walk (CTRW) is a generalization of a random walk in which the wandering particle waits for a random time between jumps. It is a stochastic jump process with arbitrary distributions of jump lengths and waiting times.[1][2][3] More generally, it can be seen as a special case of a Markov renewal process.

Motivation
The CTRW was introduced by Montroll and Weiss[4] as a generalization of a physical diffusion process to effectively describe anomalous diffusion, i.e., the super- and sub-diffusive cases. An equivalent formulation of the CTRW is given by generalized master equations.[5] A connection between CTRWs and diffusion equations with fractional time derivatives has been established.[6] Similarly, time-space fractional diffusion equations can be considered as CTRWs with continuously distributed jumps or continuum approximations of CTRWs on lattices.[7]

Formulation
A simple formulation of a CTRW is to consider the stochastic process $X(t)$ defined by
$X(t)=X_{0}+\sum _{i=1}^{N(t)}\Delta X_{i},$
whose increments $\Delta X_{i}$ are iid random variables taking values in a domain $\Omega $, and $N(t)$ is the number of jumps in the interval $(0,t)$. The probability for the process taking the value $X$ at time $t$ is then given by
$P(X,t)=\sum _{n=0}^{\infty }P(n,t)P_{n}(X).$
Here $P_{n}(X)$ is the probability for the process taking the value $X$ after $n$ jumps, and $P(n,t)$ is the probability of having $n$ jumps after time $t$.

Montroll–Weiss formula
We denote by $\tau $ the waiting time between two jumps of $N(t)$ and by $\psi (\tau )$ its distribution. The Laplace transform of $\psi (\tau )$ is defined by
${\tilde {\psi }}(s)=\int _{0}^{\infty }d\tau \,e^{-\tau s}\psi (\tau ).$
Similarly, the characteristic function of the jump distribution $f(\Delta X)$ is given by its Fourier transform:
${\hat {f}}(k)=\int _{\Omega }d(\Delta X)\,e^{ik\Delta X}f(\Delta X).$
One can show that the Laplace–Fourier transform of the probability $P(X,t)$ is given by
${\hat {\tilde {P}}}(k,s)={\frac {1-{\tilde {\psi }}(s)}{s}}{\frac {1}{1-{\tilde {\psi }}(s){\hat {f}}(k)}}.$
The above is called the Montroll–Weiss formula.

References
1. Klages, Rainer; Radons, Guenther; Sokolov, Igor M. (2008-09-08). Anomalous Transport: Foundations and Applications. ISBN 9783527622986.
2. Paul, Wolfgang; Baschnagel, Jörg (2013-07-11). Stochastic Processes: From Physics to Finance. Springer Science & Business Media. pp. 72–. ISBN 9783319003276. Retrieved 25 July 2014.
3. Slanina, Frantisek (2013-12-05). Essentials of Econophysics Modelling. OUP Oxford. pp. 89–. ISBN 9780191009075. Retrieved 25 July 2014.
4. Elliott W. Montroll; George H. Weiss (1965). "Random Walks on Lattices. II". J. Math. Phys. 6 (2): 167. Bibcode:1965JMP.....6..167M. doi:10.1063/1.1704269.
5. V. M. Kenkre; E. W. Montroll; M. F. Shlesinger (1973). "Generalized master equations for continuous-time random walks". Journal of Statistical Physics. 9 (1): 45–50. Bibcode:1973JSP.....9...45K. doi:10.1007/BF01016796.
6. Hilfer, R.; Anton, L. (1995). "Fractional master equations and fractal time random walks". Phys. Rev. E. 51 (2): R848–R851. Bibcode:1995PhRvE..51..848H. doi:10.1103/PhysRevE.51.R848.
7. Gorenflo, Rudolf; Mainardi, Francesco; Vivoli, Alessandro (2005). "Continuous-time random walk and parametric subordination in fractional diffusion". Chaos, Solitons & Fractals. 34 (1): 87–103. arXiv:cond-mat/0701126. Bibcode:2007CSF....34...87G. doi:10.1016/j.chaos.2007.01.052.
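Editorial note: the formulation above lends itself directly to Monte Carlo simulation. The following Python sketch (not part of the original article) draws i.i.d. waiting times and jump lengths and records X(t) at a fixed observation time; with exponential waits and Gaussian jumps it reduces to a compound Poisson process, while the heavy-tailed waiting times in the usage example produce the sub-diffusive spreading mentioned in the Motivation section. The sampler choices and parameter values are illustrative assumptions only.

```python
import numpy as np

def simulate_ctrw(n_paths, t_max, rng=None,
                  wait_sampler=lambda rng, n: rng.exponential(1.0, n),
                  jump_sampler=lambda rng, n: rng.normal(0.0, 1.0, n)):
    """Monte Carlo sketch of a CTRW: X(t) = X0 + sum_{i<=N(t)} dX_i.

    Waiting times and jump lengths are drawn i.i.d. from user-supplied
    samplers (exponential waits and Gaussian jumps by default, i.e. a
    compound Poisson process). Returns X(t_max) for n_paths walkers
    started at X0 = 0.
    """
    rng = np.random.default_rng() if rng is None else rng
    positions = np.zeros(n_paths)
    for p in range(n_paths):
        t, x = 0.0, 0.0
        while True:
            t += wait_sampler(rng, 1)[0]   # wait a random time tau ~ psi
            if t > t_max:                  # no further jump before t_max
                break
            x += jump_sampler(rng, 1)[0]   # jump with length ~ f
        positions[p] = x
    return positions

# Usage example: Pareto-tailed waiting times (infinite mean) give
# sub-diffusion, visible as sub-linear growth of the sample variance.
rng = np.random.default_rng(0)
pareto_waits = lambda rng, n: rng.pareto(0.7, n) + 1.0   # tail ~ tau^{-1.7}
for t_max in (10.0, 100.0, 1000.0):
    x = simulate_ctrw(2000, t_max, rng, wait_sampler=pareto_waits)
    print(t_max, np.var(x))
```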
Wikipedia
Global regularity for a class of Monge-Ampère type equations with nonzero boundary conditions

Mengni Li
Department of Mathematics and Yau Mathematical Sciences Center, Tsinghua University, Beijing 100084, China
Fund Project: The author is supported by Yau Mathematical Sciences Center, Tsinghua University

In this paper we study the boundary regularity of solutions to the Dirichlet problem for a class of Monge-Ampère type equations with nonzero boundary conditions. We construct global Hölder estimates for convex solutions to the problem and emphasize that the boundary regularity depends essentially on the convexity of the domain. The proof is based on a careful study of the concept of $(a,\eta)$ type convex domain and a family of auxiliary functions.

Keywords: Monge-Ampère type equation, Dirichlet problem, convex domain, Hölder estimate, boundary regularity.
Mathematics Subject Classification: Primary: 35J96, 35B65; Secondary: 52A20.
Citation: Mengni Li. Global regularity for a class of Monge-Ampère type equations with nonzero boundary conditions. Communications on Pure & Applied Analysis, 2021, 20 (1): 301-317. doi: 10.3934/cpaa.2020267

Figure 1. The parameter $a$
CommonCrawl
Ex vivo tissue slice culture system to measure drug-response rates of hepatic metastatic colorectal cancer

Steve Z. Martin (ORCID: orcid.org/0000-0003-0653-6458), Daniel C. Wagner, Nina Hörner, David Horst, Hauke Lang, Katrin E. Tagscherer & Wilfried Roth

The lack of predictive biomarkers or test systems contributes to the high failure rates of systemic therapy in metastasized colorectal carcinoma and accounts for its still unfavorable prognosis. Here, we present an ex vivo functional assay to measure drug response based on a tissue slice culture approach. Tumor tissue slices of hepatic metastases of nine patients suffering from colorectal carcinoma were cultivated for 72 h and treated with different concentrations of the clinically relevant drugs Oxaliplatin, Cetuximab and Pembrolizumab. Easy-to-use, objective and automated analysis routines based on the Halo platform were developed to measure changes in proliferative activity and the morphometric make-up of the tumor. Apoptotic indices were assessed semiquantitatively. Untreated tumor tissue slices showed high morphological comparability with the original in vivo tumor, preserving proliferation and stromal-tumor interactions. All but one patient showed a dose-dependent susceptibility to treatment with Oxaliplatin, whereas only two patients each showed responses to Cetuximab and Pembrolizumab. Furthermore, we identified possible non-responders to Cetuximab therapy in the absence of RAS mutations. This is the first demonstration of the feasibility of the tissue slice culture approach for metastatic tissue of colorectal carcinoma. An automated readout of proliferation and tumor morphometry allows for quantification of drug susceptibility. This strongly indicates a potential value of this technique as a patient-specific test system for targeted therapy in metastatic colorectal cancer. Co-clinical trials are needed to customize it for clinical application and to define adequate readout cut-off values.

Patients with colorectal carcinoma often develop metastases, foremost in the liver [1, 2]. Modern systemic therapeutic strategies include not only platinum-based chemotherapeutics (e.g. FOLFOX), but also novel targeted agents that are directed against a specific characteristic unique to the tumor cells (e.g. antibodies against the Epidermal Growth Factor Receptor or Programmed cell death ligand 1). Despite numerous promising new drugs, response rates are relatively low, rendering the prognosis of metastasized colorectal carcinoma still unfavorable [2,3,4,5,6]. Adequate stratification is of the utmost importance to select those patients whose clinical benefit outweighs the side effects of treatment and justifies the high costs. Nowadays, this is performed using extensive molecular profiling to identify predictive biomarkers, but clinical practice shows that response to therapy cannot always be reliably predicted using this approach. So far, very few predictive molecular biomarkers have been identified in the context of colorectal carcinoma, the most prominent of which are mutations of KRAS and NRAS that cause unresponsiveness to anti-EGFR antibodies (e.g. Cetuximab) [7,8,9,10]. Other factors, such as tumor-stromal interactions, the specific immune landscape and epigenetic factors, seem to play a major role in defining the tumor's biological behavior and cannot be predicted by molecular profiling alone [11, 12].
A promising technique to overcome this predicament is to measure therapeutic response using an ex vivo functional assay that cultivates a viable sample of the tumor itself. Various 2D monolayer and 3D models have been proposed, and their advantages and disadvantages have been compared in a recent review [13]. The tissue slice culture approach shows the best comparability with the original tumor - preserving tumor morphology and microenvironment - while showing a high experimental success rate as well as a short generation time. Here, the non-fixed viable tumor is cut into thin slices and cultured directly for several days. Recently, a few research groups have shown that the functional assessment of primary colorectal carcinoma tissue is feasible using this innovative technique [14,15,16]. However, stratifying patients with metastatic disease into optimal therapy regimens requires sampling and cultivation of the metastatic tumor tissue. In this study, we describe a protocol for optimal tissue slice culture of hepatic metastases of colorectal carcinoma and propose an automated, easy-to-use and objective readout strategy for measuring susceptibility to Oxaliplatin, Cetuximab and Pembrolizumab.

Nine hepatic metastasectomy specimens of colorectal carcinoma were included in this study. The patients were treated at the Department of General, Visceral and Transplantation Surgery of the University Medical Center Mainz between 2017 and 2018. The study was approved by our institution's ethics committee. Table 1 depicts the patients' clinical characteristics.

Table 1 Patient characteristics

Tissue slice culture system
Immediately after surgery, the metastasectomy specimens were transported to the Institute of Pathology. Viable tissue (length: 10 mm; diameter: 6 mm) from the invasive margin of the metastasis was sampled using a punch tool (KAI Medical Biopsy Punch, Solingen, Germany) and stored in 4 °C chilled Krebs-Henseleit buffer (Sigma-Aldrich/Merck, Darmstadt, Germany). In order to confirm the extraction of adequate tumor tissue, a 1 mm disc was removed with a scalpel from one end of the punch and evaluated in frozen section by a pathologist. Samples without viable tumor were discarded. Punches were then aligned, mounted and immobilized using an agar ring and cut into thin homogeneous slices of 300 μm thickness using a Vibratome VT1200 (Leica Microsystems). They were collected in 4 °C chilled Krebs-Henseleit buffer and randomized before distribution to control and therapy groups. The vibration amplitude was adjusted according to the tissue consistency and set between 1 and 2.5 mm. The cutting velocity was set to 0.4 mm/s. Tissue slices were cultured on special cell-culture inserts (PET membrane with 0.4 μm pore size, Falcon, Corning, USA) to allow preservation of the 3-dimensional structure and to ensure the supply of oxygen and cell medium. DMEM (ATCC, Manassas, USA) cell culture medium supplemented with 1% Penicillin/Streptomycin (Sigma-Aldrich/Merck, Darmstadt, Germany; 10000 U Penicillin + 10 mg/ml Streptomycin in 0.9% NaCl) and 10% Fetal Calf Serum (Sigma-Aldrich/Merck, Darmstadt, Germany) was used. For additional oxygen supply, plates were placed on an orbital shaker (Thermo Scientific, MaxQ2000 CO2 Plus, 55 rpm) during incubation. Incubation was performed at 37 °C under atmospheric oxygen and CO2 levels. Medium (with or without systemic agents) was changed after 1 hour and every additional 24 h.
After 72 h of incubation, tissue slices were harvested and fixed in 4% buffered formalin for a maximum of 24 h. The time between the end of surgery and the start of cultivation of the tumor tissue slices should be as short as possible; in our case it was at least 2 h and at most 4 h (median 3 h).

Treatment regimen
Tissue slices were treated with two concentrations each of Oxaliplatin (5 and 20 μM), Cetuximab (20 and 200 nM) and Pembrolizumab (140 and 1400 nM). Concentrations were chosen based on previously published cell-culture experiments and recent clinical trials [16,17,18,19,20]. In order to account for tumor heterogeneity, cultivation was performed in quadruplicates (n = 4) for each drug and concentration. Twelve tissue slices (n = 12) were used for the untreated control group. Due to the small size of the liver metastases, only triplicates were used for patients 4 and 9.

Conventional and immunohistochemical staining
Tissue slices were paraffin embedded and processed into 2 μm sections with a microtome for morphological and immunohistochemical evaluation. For morphological analyses, sections were stained with Hematoxylin and eosin (H&E) and with Elastica van Gieson (EvG) according to manufacturer specifications (Roth, Karlsruhe, Germany). Proliferation activity was evaluated using the immunohistochemical surrogate marker Ki-67. Apoptotic indices were assessed using cleaved Caspase 3 (Casp 3) immunostaining. In addition, key proteins of the checkpoint inhibition system, PD1 and PD-L1, were stained on whole slides of the routine-diagnostic sections. Furthermore, microsatellite stability was evaluated using immunohistochemical evaluation of MLH1 and MSH2. Prior to immunostaining, sections were dewaxed (30 min at 60 °C; 3 × 5 min xylene) and rehydrated (decreasing alcohol concentrations, 100 to 50% ethanol, 3 min each). Staining was performed automatically using the Dako EnVision™ FLEX HRP/DAB; K 8010 Kit (Dako, Agilent, Santa Clara, USA) and the BenchMark ULTRA platform (Ventana Medical Systems, Oro Valley, USA) according to the manufacturer's specifications. All buffers and chemical agents were included in the kit. While the primary antibodies Ki-67 (Dako Ref.: IR626, mouse), MLH1 (Dako Ref.: IR079, mouse) and MSH2 (Dako Ref.: IR085, mouse) were ready to use, PD1 (Abcam, ab52587, mouse) was diluted 1:100, PD-L1 (Abcam, ab213524, rabbit) was diluted 1:250 and Casp 3 (Cell Signaling, Ref: 05/2017, rabbit) was diluted 3:250. All sections were heated for 35 min in a steam cooker at pH 6 (citrate buffer; Ki-67, PD1, PD-L1, Casp 3) or pH 9 (EDTA; MSH2, MLH1) for antigen retrieval.

Analysis of RAS and BRAF mutations
For DNA extraction, an adequate paraffin block was selected by an experienced pathologist (DW). Up to 10 unstained sections (thickness: 5 μm) of each block were manually macrodissected to enrich tumor cells. Tumor cell content ranged from 50 to 80%, with a median cellularity of 60%. DNA was isolated using the RSC DNA FFPE PLUS Custom Kit AX 4920 (Promega, Wisconsin, USA) and quantified using NanoDrop (Avantor, Pennsylvania, USA). RAS mutations were analyzed using PCR-based Sanger sequencing.
The following primers were used:
NRAS Gene Exon 2
NRAS-F 5′-GATGTGGCTCGCCAATTAAC-3′
NRAS-R 5′-CCGACAAGTGAGAGACAGGA-3′
NRAS-RN 5′-GATCAGGTCAGCGGGCTA-3′
NRAS-F 5′-CCCCTTACCCTCCACACC-3′
NRAS-R 5′-GAACACAAAGATCATCCTTTCAGA-3′
NRAS-RN 5′-CCTTTCAGAGAAAATAATGCTCCT-3′
NRAS-F 5′-TGTTCTGATAATATATTCCCGT-3′
NRAS-R 5′-GCACTCCAGCTTAGAAGATA-3′
NRAS-RN 5′-GGATCACATCTCTACCAGAG-3′
KRAS Gene Exon 2
KRAS-F 5′-GGTGAGTTTGTATTAAAAGGTACTGG-3′
KRAS-FN 5′-TTAACCTTATGTGTGACATGTTCTAA-3′
KRAS-R 5′-GGTCCTGCACCAGTAATATGC-3′
KRAS-RN 5′-AAAACAAGATTTACCTCTATTGTTGGA-3′
KRAS-F 5′-TCCAGACTGTGTTTCTCCCT-3′
KRAS-R 5′-AACCCACCTATAATGGTGAATATC-3′
KRAS-RN 5′-TTTATGGCAAATACACAAAGAAAG-3′
KRAS-F 5′-TTTTTCTTTCCCAGAGAACAAAT-3′
KRAS-R 5′-AGCATAATTGAGAGAAAAACTGA-3′
KRAS-RN 5′-ACATAACAGTTATGATTTTGCAG-3′
BRAF Gene Exon 15
BRAF-F 5′-ATCTCTTACCTAAACTCTTCATAATGC-3′
BRAF-R 5′-GGCCAAAAATTTAATCAGTGGA-3′
The sequencing results were interpreted using the GenomeLab GeXP Genetic Analysis System (Beckman Coulter, California, USA).

Analysis of tissue slice culture
All sections were digitized using the NanoZoomer-Series Digital Slide Scanner (40×, Hamamatsu Photonics, Hamamatsu, Japan). Firstly, H&E stained untreated tissue slices (controls) were visually compared by a pathologist with the representative paraffin-embedded sections used in routine diagnostics. Overall morphological appearance, architecture, growth patterns, grading of differentiation and nuclear characteristics of the tumor were assessed. Secondly, untreated (control) and treated (Oxaliplatin, Cetuximab, Pembrolizumab) tissue slices were compared using an automated analysis readout based on the Halo platform from Indica Labs (Corrales, NM, USA). For the immunohistochemical analysis of Ki-67, the module CytoNuclear v1.4 was applied. In a training phase, five representative sections were used to define staining parameters (e.g. minimum nuclear optical density, minimum staining optical density, nuclear and cellular size and roundness) for an optimal distinction between Ki-67 positive and negative tumor cells. A tissue classifier was then trained separately for each section to select epithelial tumor cells. Stroma, blood vessels and areas of necrosis were excluded from analysis. The percentage of Ki-67-positive tumor cells in relation to the total number of tumor cells was calculated and used as a surrogate marker for the proliferation activity. All automated results were visually validated for accuracy. For the morphometrical analysis, the EvG stain was used. For each tumor tissue slice, a Halo tissue classifier was trained to recognize the stromal, tumor and necrosis compartments. The area of each compartment was calculated and normalized to the total analyzed area. The automated results were visually validated. For the immunohistochemical analysis of Casp 3, digitized slides were visually assessed semiquantitatively by two experienced pathologists (SZM, WR). The apoptotic state is expressed as the tumor-apoptotic fraction, defined as the number of Casp 3 positive tumor cells divided by the total number of tumor cells. Importantly, depending on the cell's stage of apoptosis, Casp 3 staining can be nuclear or cytoplasmic [21]. Staining of non-epithelial cells, necrosis or cell debris was excluded (see Additional file 1). Tissue slices that showed no tumor were excluded from analysis (7 slices of 312). Figure 1 depicts the experimental setup of the tissue slice culture system.

Fig. 1: Experimental setup of the tissue slice culture system. Susceptibility to systemic drugs is assessed within 6 days
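Editorial note: the Ki-67 and morphometry readouts described above reduce to simple ratios once per-slice cell counts and compartment areas have been exported from the image-analysis software. The Python sketch below illustrates only that downstream arithmetic; it is not the Halo pipeline itself, and all function names and example numbers are illustrative assumptions.

```python
# Minimal sketch (not the Halo CytoNuclear/tissue-classifier pipeline):
# compute the read-outs described in the methods from exported counts/areas.

def proliferation_index(ki67_positive_tumor_cells: int, total_tumor_cells: int) -> float:
    """Ki-67 proliferation index: percentage of Ki-67 positive tumor cells."""
    return 100.0 * ki67_positive_tumor_cells / total_tumor_cells

def compartment_fractions(tumor_area: float, stroma_area: float, necrosis_area: float) -> dict:
    """Normalize tumor / stroma / necrosis areas to the total analyzed area."""
    total = tumor_area + stroma_area + necrosis_area
    return {"tumor": tumor_area / total,
            "stroma": stroma_area / total,
            "necrosis": necrosis_area / total}

# Example slice (made-up numbers): 1200 of 2000 tumor cells Ki-67 positive
# yields a proliferation index of 60%, similar to the median reported later.
print(proliferation_index(1200, 2000))
print(compartment_fractions(tumor_area=3.1, stroma_area=1.4, necrosis_area=0.5))
```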
Proteins of the checkpoint inhibition system
Immunohistochemical PD-L1 and PD1 positivity was analyzed based on the urothelial carcinoma PD-L1 interpretation manual of Agilent [22]. In short, staining of the cell membrane was classified as positive. Whole slides of the routine-diagnostic sections were assessed visually, and positive immune cells and tumor cells were determined in areas comprising approximately 100,000 tumor cells. Necrosis and cell debris were excluded. The combined positivity score (CPS), tumor cell score (TC%) and immune cell score (IC%) were defined as follows and are depicted in Table 1:

$$ CPS = \frac{\text{PD-L1 positive tumor cells} + \text{PD-L1 positive immune cells}}{\text{total count of tumor cells}} \times 100 $$
$$ TC\% = \frac{\text{PD-L1 positive tumor cells}}{\text{total count of tumor cells}} \times 100 $$
$$ IC\% = \frac{\text{PD-L1 positive immune cells}}{\text{total count of tumor cells}} \times 100 $$

Analysis of morphometry, Ki-67 proliferation and Casp 3 apoptotic state was performed across all patients and for each patient individually. For the pooled analysis of the Ki-67 proliferation fraction, the Casp 3 apoptotic fraction and the morphometry, the mean values of each patient were used and depicted in Box-Jitter plots. Statistically significant differences between the control and treatment groups were calculated using the nonparametric Mann–Whitney U test; p-values ≤0.05 were defined as significant. Additionally, the analysis was performed for each patient individually. To calculate differences between the control and treatment groups in each patient, the nonparametric Mann–Whitney U test was performed. The Wilcoxon signed-rank test for paired samples can only be used if the numbers of tissue slices in each group are equal. Since this requirement is not met in our study (treatment groups n = 4, control group n = 12), the p-value of the Mann–Whitney U test gives the best approximation of statistically relevant group differences. For the Ki-67 analysis, a representative 1 mm² area of the routine-diagnostic section was included in the analysis. For the morphometrical analysis, the medians of the areas of necrosis, tumor and stroma for each patient were depicted in stacked plots and normalized to the total area. For statistical analysis, the software PAST version 3.16 [23] was used.
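Editorial note: the PD-L1 scores defined above and the unpaired group comparison used throughout the study can be written compactly in code. The following Python sketch is illustrative only (the study used the PAST software for statistics); all counts and Ki-67 fractions are invented example numbers, and scipy's mannwhitneyu stands in for the Mann-Whitney U test described in the text.

```python
from scipy.stats import mannwhitneyu

def pd_l1_scores(pos_tumor_cells, pos_immune_cells, total_tumor_cells):
    """Return CPS, TC% and IC% as defined in the methods."""
    cps = (pos_tumor_cells + pos_immune_cells) / total_tumor_cells * 100
    tc = pos_tumor_cells / total_tumor_cells * 100
    ic = pos_immune_cells / total_tumor_cells * 100
    return cps, tc, ic

print(pd_l1_scores(pos_tumor_cells=500, pos_immune_cells=25000, total_tumor_cells=100000))

# Per-patient group comparison: Ki-67 positive fractions of untreated control
# slices (n = 12) versus one treatment arm (n = 4); the unequal group sizes
# are why the unpaired Mann-Whitney U test is used instead of a paired test.
control = [0.62, 0.58, 0.66, 0.61, 0.55, 0.64, 0.59, 0.63, 0.60, 0.57, 0.65, 0.61]
oxaliplatin_20uM = [0.35, 0.41, 0.38, 0.33]
stat, p = mannwhitneyu(control, oxaliplatin_20uM, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```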
Tissue slice culture
The tumor tissue slice culture technique was adjusted for liver metastases of colorectal cancer patients. Tumor tissue from nine metastases was cultured for 72 h and morphologically compared to representative routine-diagnostic H&E sections from the original tissue (see Fig. 2). There was a high morphological similarity between the ex vivo and in vivo tumor, as evidenced by comparable tumor growth patterns, architecture, grading of differentiation and tumor cell cytology. The tumor tissue in the slices exhibited only minimal heterogeneous nuclear changes such as karyorrhexis, karyolysis or pyknosis in some tumor glands. The immunohistologically assessed proliferation activity (Ki-67) showed a moderate reduction in proliferation for the tumors of patients 1, 2, 6, 7 and 9 and similar proliferation for the tumors of patients 3 to 5, when comparing the untreated tissue slices with a representative 1 mm² area of the original tumor (see Fig. 3).

Fig. 2: Depicted are H&E stained sections of the original tumor tissue and representative untreated tissue slices (control) that were cultured for 72 h. The upper part shows the original tumor (routine diagnostics) at high magnification. Tissue slices are depicted in the lower part at high magnification.

Fig. 3: Tumor proliferative activity (Ki-67) of treated (Cetuximab, Pembrolizumab and Oxaliplatin) and untreated (control) tissue slices. Additionally, one representative 1 mm² section of the original tumor tissue was included in the analysis (routine diagnostics). The percentage of Ki-67 positive tumor cells is depicted in Box-Jitter plots. Statistical differences were calculated using the Mann-Whitney U test and are marked (* p-value ≤0.05; ** p-value ≤0.01). a- original tumor; b- control; c- Oxaliplatin 20 μM; d- Oxaliplatin 5 μM; e- Cetuximab 200 nM; f- Cetuximab 20 nM; g- Pembrolizumab 1400 nM; h- Pembrolizumab 140 nM

Readout of proliferation index and apoptotic index
The tumor tissue slice culture technique was used to measure drug responses of metastatic colorectal cancer tissue. Tumor tissue was treated with Oxaliplatin (5 and 20 μM), Pembrolizumab (140 and 1400 nM) and Cetuximab (20 and 200 nM) for 72 h and compared to untreated controls. To measure susceptibility to these drugs, an automated analysis of the proliferation index using Ki-67 immunostaining was performed for each patient individually (Fig. 3, Additional file 2: Table S1 and Additional file 3). Additionally, a semiquantitative analysis of the apoptotic index was carried out using Casp 3 immunostaining (Fig. 4, Additional file 2: Table S2 and Additional file 3).

Fig. 4: Tumor apoptotic fraction (Casp 3) of treated (Cetuximab, Pembrolizumab and Oxaliplatin) and untreated (control) tissue slices. The percentage of Casp 3 positive tumor cells is depicted in Box-Jitter plots. Statistical differences were calculated using the Mann-Whitney U test and are marked (* p-value ≤0.05). a- control; b- Oxaliplatin 20 μM; c- Oxaliplatin 5 μM; d- Cetuximab 200 nM; e- Cetuximab 20 nM; f- Pembrolizumab 1400 nM; g- Pembrolizumab 140 nM

The proliferation activity of the untreated tissue slices was heterogeneous and varied between 95% in case 5 and 34% in case 6 (median value of 60 ± 19%). In the original tumors, proliferative activity ranged from 94% in case 7 to 31% in case 8 (median value of 65 ± 19%). Tumors of patients 1 to 6 showed a reduction of the Ki-67-positive tumor fraction when treated with 5 μM and 20 μM Oxaliplatin. Tumors of patients 7 and 9 showed a reduction only when treated with 20 μM Oxaliplatin. A dose-dependent decrease of proliferation was visible for the tumor of patient 5 (95% control, 53% at 5 μM and 33% at 20 μM Oxaliplatin). The absolute difference of the medians between the untreated (control) and treated (20 μM Oxaliplatin) groups ranged from 62% (patient 5) to 16% (patients 2 and 9) and 0% (patient 8). Only tumors of patients 3, 4 and 9 showed a reduction in proliferation when treated with Pembrolizumab or Cetuximab. Tissue of patient 3 showed a median drop of 23 and 30% when treated with Pembrolizumab (140 and 1400 nM, respectively), which was smaller than under Oxaliplatin treatment (46% for both concentrations). The tumor of patient 4 showed a decrease in Ki-67 positivity when treated with 200 nM Cetuximab (14%) or 1400 nM Pembrolizumab (22%). Again, this reduction was lower than in the Oxaliplatin-treated group (drop of 35% for both concentrations). The tumor of patient 9 showed a median reduction of the proliferation index of 15% when treated with 200 nM Cetuximab, which was as high as in the Oxaliplatin-treated group. The tumor of patient 8 showed no differences in proliferation between control and treatment groups.
The tumor-apoptotic fraction of the untreated tissue slices was also heterogeneous and varied between 1% (case 2) and 9.5% (case 7). Tumors of patients 4 and 5 showed an increase of the Casp 3-positive tumor fraction when treated with 20 μM Oxaliplatin. All other treatment groups showed no statistically relevant differences compared to the control group. Pooled analysis of the Ki-67 proliferation fraction across all nine cases confirmed a statistically significant and dose-dependent reduction under Oxaliplatin treatment. There were no significant differences in proliferation after treatment with Pembrolizumab or Cetuximab. Pooled analysis of the Casp 3 tumor-apoptotic fraction across all nine cases revealed no statistically significant differences between control and treatment groups (see Fig. 5).

Fig. 5: Depicted are the tumor-proliferative fractions (I), tumor-apoptotic fractions (II), and tumor (III), necrosis (IV) and stroma (V) fractions of the Ki-67, Casp 3 and morphometric analyses across all nine patients in Box-Jitter plots. The mean values of each patient are depicted as a black dot. Statistical differences were calculated using the Mann-Whitney U test and are marked (* p-value ≤0.05; ** p-value ≤0.01). a- control; b- Oxaliplatin 20 μM; c- Oxaliplatin 5 μM; d- Cetuximab 200 nM; e- Cetuximab 20 nM; f- Pembrolizumab 1400 nM; g- Pembrolizumab 140 nM

Automated readout of morphometrical analysis
In addition to the evaluation of proliferative changes after drug treatment, morphometric changes were also assessed. To measure variations in the areas of necrosis, stroma and tumor, treated and untreated tissue slices were stained with EvG and quantified using the Halo platform. In direct comparison with H&E, EvG showed a superior contrast between necrosis and stroma and led to a more accurate distinction using the Halo classifier (data not shown). The findings of the analysis are depicted in Fig. 6 and Additional file 2: Table S3. The morphometric analysis of the untreated tissue slices showed substantial differences in the distribution of necrosis, tumor and stroma among the nine cases. While the tumor of patient 3 showed the highest amount of necrosis (median 37%), the tumor of patient 9 showed no necrosis at all. An increase in necrosis accompanied by a reduction of the tumor area was visible for cases 5 and 9 when treated with 20 μM Oxaliplatin and for case 7 when treated with 5 and 20 μM Oxaliplatin. Tumors of patients 1 and 6 showed an increase in necrosis after treatment with 200 nM Cetuximab, in the case of patient 6 accompanied by a reduction of the stromal compartment. The tumor of patient 4 showed an increase in necrosis when treated with 1400 nM Pembrolizumab. The tumor of patient 3 showed no differences among the groups. Tumors of patients 2 (Pembrolizumab), 5 (Cetuximab) and 8 (Oxaliplatin, Cetuximab and Pembrolizumab) showed a reduction of the areas of necrosis, in the case of patient 8 accompanied by an increase of the stromal compartment (Pembrolizumab). Pooled morphometric analysis across all nine cases showed no statistically significant differences in necrosis, stroma or tumor area between control and treatment groups (see Fig. 5).

Fig. 6: Morphometrical analysis of the treated (Cetuximab, Pembrolizumab and Oxaliplatin) and untreated (control) tissue slices. Stacked plots show the medians of the areas of necrosis (blue), stroma (orange) and tumor (grey) normalized to the total area.
Statistical differences between the groups were calculated using the Mann-Whitney U test and are marked with a bracket and a label (p ≤ 0.05). 1- control; 2- Oxaliplatin 20 μM; 3- Oxaliplatin 5 μM; 4- Cetuximab 200 nM; 5- Cetuximab 20 nM; 6- Pembrolizumab 1400 nM; 7- Pembrolizumab 140 nM

Associations between drug response and molecular tumor characteristics
In order to determine associations of therapy response with molecular tumor characteristics, the RAS mutation status was determined and microsatellite stability and checkpoint protein expression were evaluated immunohistochemically. Visual semiquantitative analysis of whole slides of the original tumor sections showed moderate to high infiltrates of PD1 positive tumor-associated immune cells for all cases, particularly at the invasive margin. The PD1 immune cell score (IC%) ranged from 20 to 36 (see Table 1 and Additional file 2: Table S4). PD-L1 analysis showed only few positive tumor cells and moderate to high infiltrates of PD-L1 positive tumor-associated immune cells, particularly at the invasive margin. Only the tumors of patients 4 and 9 showed a tumor cell score (TC%) above 1. CPS scores were above 10 for cases 1–6 and 9 and below 10 for cases 7 and 8 (see Table 1 and Additional file 2: Table S5). Of all cases, only the tumor tissue of patients 3 and 4 showed a reduction of proliferation when treated with Pembrolizumab. All cases showed immunohistochemical expression of MLH1 and MSH2 and therefore no sign of microsatellite instability. PCR-based Sanger sequencing showed KRAS mutations in the metastatic tumor tissue of patients 2 (G12D), 3 (G12A) and 4 (G13D) and an NRAS mutation for patient 8 (G13R, c.37G > C). Of the five cases harboring no RAS mutations, only the tumor tissue of patient 9 showed a reduction of proliferative activity after treatment with Cetuximab. Additionally, the tumor tissue of patient 4, harboring a G13D KRAS mutation, showed a response after cultivation with Cetuximab.

In this study, we present an experimental ex vivo test system based on the tissue slice culture approach to estimate the susceptibility of colorectal liver metastases to different drugs. The tissue slice culture approach allows for cultivating tumor tissue while preserving the tumor morphology and microenvironment. Keeping stromal-tumor interactions intact is fundamental, because they are known to affect progression, proliferation and sensitivity to drugs [24, 25]. Since every tumor entity carries its own stromal-tumor microenvironment, optimal tissue slice culture protocols must be identified for each type separately. So far, only tissue from primary breast, prostate, lung, colorectal, gastroesophageal and head and neck carcinomas has been successfully cultured for several days [16, 26,27,28,29,30,31,32]. Hepatic colorectal metastasectomy specimens are a major challenge for the tissue slice culture technique, since they show extensive regressive changes. This study is the first to successfully cultivate colorectal liver metastatic tissue for up to 72 h, keeping stromal-tumor interactions intact and preserving the in vivo tumor morphology (for details of the cultivation protocol see Methods and Additional file 4). Additionally, analysis of the proliferation activity showed only moderate, if any, differences between the in vivo (original tumor) and ex vivo (tissue slice) tumor.
Although the technical, organizational and personnel requirements of the method will probably surpass the capabilities of some pathology institutes, implementation at larger, specialized university centers is readily possible. The construction of a predictive ex vivo test system requires an objective and easy-to-use readout strategy. In this study, we used an automated analysis tool based on digital image analysis. Changes in the proliferative activity of the tumor cells were measured using Ki-67 immunostaining as a surrogate marker. Median values were 60 ± 19% for untreated tissue slices and 65 ± 19% for original tumors, in line with previously reported values [33, 34]. Changes in the tumor-stroma-necrosis make-up of the tissue slices were measured using EvG stains. Halo classifiers had to be trained for each section, and the analysis was visually validated by a pathologist (SZM) and repeated if necessary. This automated procedure took about 5 min per section and is approximately as time-consuming as a purely visual semi-quantitative method. The essential advantages, however, are a reliable absolute quantification of the entire section in a short period of time and a high objectivity and reproducibility of the procedure. Tumor tissue slices were treated with Oxaliplatin, Pembrolizumab and Cetuximab, all of which play an important role in the treatment of metastatic colorectal cancer. Oxaliplatin is a cytostatic drug that cross-links DNA strands, thereby inhibiting replication [35], and, in combination with fluorouracil and leucovorin, represents the standard first-line therapy for metastatic colorectal carcinoma [36]. As a single agent, it demonstrates modest activity with response rates of 10 to 25% [37,38,39,40], which was confirmed in our study. All but one patient showed a dose-dependent reduction of the proliferative activity, which was additionally confirmed in a pooled analysis across all nine patients. The morphometric analysis showed increases in the areas of necrosis for cases 5, 7 and 9, partially accompanied by a decrease in the tumor area. Cetuximab is a monoclonal antibody against the epidermal growth factor receptor EGFR and is often added to first-line therapy to improve outcome [4, 10, 41]. It exerts its biological anti-tumor effects in two ways. On the one hand, the EGFR signaling pathway in tumor cells is specifically blocked, leading to cell cycle arrest, reduction of tumor-cell proliferation and an increase of apoptosis [42]. Therefore, mutations in the RAS genes are predictive of treatment failure. Of the five cases harboring no RAS mutations in our study, only the tumor of patient 9 seemed to be sensitive to Cetuximab. Possible reasons for treatment resistance in the other patients are mutations of genes downstream of the EGFR/RAS signaling cascade that are, although recommended, not regularly evaluated before systemic therapy in clinical practice [43]. This supports the notion that molecular profiling alone cannot always accurately predict response to therapy in a clinical setting and underlines the need for additional predictive test systems. The tumor of patient 4 is the only one that harbored a RAS mutation while showing an intermediate reduction of the proliferative activity after treatment with Cetuximab. However, this specific G13D mutation in the KRAS gene was shown to be sensitive to Cetuximab treatment in a retrospective trial [44, 45] and in in vitro cell culture [46].
On the other hand, being an IgG1 antibody, Cetuximab can crosslink with and activate immune cells via its constant (Fc) region to induce antibody-dependent cellular cytotoxicity (ADCC) [47, 48]. While tumor tissue slices are thought to preserve the local compartment of native immune cells during the cultivation process, the suitability of this technique for studying ADCC effects has not yet been investigated and was beyond the scope of this study. Pembrolizumab is a monoclonal antibody against programmed cell death protein 1 and is FDA-approved as second-line therapy for unresectable metastatic colorectal cancer with high microsatellite instability or deficient mismatch repair [49,50,51,52,53]. Recent data support that tumors harboring other mutations of DNA proofreading enzymes (e.g. POLE) also upregulate the expression of immune checkpoints and are eligible for checkpoint inhibition while showing an MSS immunophenotype [52, 53]. Such mutations might explain the reduction of proliferation of the tumors of patients 3 and 4 when treated with Pembrolizumab. However, only patient 4 showed a PD-L1 TC% score above 1. Whether PD-L1 immunostaining is indeed predictive of response to Pembrolizumab therapy in colorectal carcinoma is still unknown and needs to be evaluated in adequate prospective trials [51]. The apoptotic state of the tumor tissue was assessed using Casp 3 immunostaining, a well-known and highly sensitive method to visualize different steps of the apoptotic process [21]. A significant increase in tumor cell apoptosis was only detected for the tumor tissue of patients 4 and 5 when treated with the cytotoxic drug Oxaliplatin, and this was not confirmed in the pooled analysis. This finding is supported by recent data from Buzzelli et al., who established colorectal cancer liver metastasis organoids and observed that, while the organoids showed growth delay in response to Oxaliplatin treatment, they did not undergo significant cell death [54]. Both this finding and our data suggest that Oxaliplatin limits tumor growth through a reduction of proliferation and not by inducing apoptosis, which is consistent with its known function of inhibiting DNA synthesis. Treatment of tumor tissue with Cetuximab and Pembrolizumab did not reveal a significant increase in the tumor-apoptotic fraction in this study. While there are no data available for colorectal cancer tissue, Gerlach et al. also did not detect significant changes in the number of Casp 3 positive cells after treatment of head and neck carcinoma tissue slices with Cetuximab [32]. In summary, the findings of this study suggest a direct correlation between the reduction of the proliferative tumor fraction and the level of drug sensitivity. Therefore, the tumor tissue slice culture approach seems feasible for measuring drug responses and should be evaluated in further co-clinical trials. Here, a tumor tissue sample is processed in tissue slice culture before systemic therapy, allowing for a subsequent direct comparison of ex vivo and in vivo determined response rates. There are several limitations to this study. Firstly, Oxaliplatin was used as a single agent rather than in combination with 5-FU. Also, only nine patients were enrolled and investigated, and they were heterogeneous in their clinical characteristics regarding presurgical therapy, stage and localization of the primary tumor. In addition, there were no data available about the in vivo response rates to systemic therapy, as would be necessary for a co-clinical trial design.
Furthermore, we only performed an automated analysis of changes in morphometry and proliferative activity via Ki-67. Since Casp 3 immunostain localization can be either cytoplasmic or nuclear, depending on the apoptotic stage of the individual tumor cell, automated analysis was not possible, because all available digital modules rely on an exclusively nuclear or cytoplasmic/membranous localization of the immunostain. Therefore, evaluation was performed semiquantitatively by two experienced pathologists. We showed that the tissue slice culture technology is feasible for conserving the tumor-stroma morphology of hepatic metastases of colorectal cancer. Easy-to-use automated analysis tools objectively measure absolute changes in proliferation and in the distribution of the necrosis, tumor and stroma compartments after treatment with systemic drugs. Therefore, this study indicates a potential value of this technique as a patient-specific test system for targeted therapy in the context of metastatic colorectal carcinoma. Future co-clinical trials will test this hypothesis and define adequate cut-off values for the readout data. All data generated or analyzed during this study are included in this published article and its supplementary information files, with the exception of data that would compromise the individual privacy of the patients.

Abbreviations
ADCC: antibody-dependent cellular cytotoxicity; Casp 3: cleaved caspase-3; CUP: cancer of unknown primary; DW: Daniel C. Wagner; EvG: Elastica van Gieson; FOL: folinic acid; H&E: hematoxylin and eosin; MSI: microsatellite instable; MSS: microsatellite stable; OX: Oxaliplatin; PET-CT: positron emission tomography–computed tomography; RECIST: Response Evaluation Criteria In Solid Tumors; SUV: standardized uptake value; SZM: Steve Z. Martin; WR: Wilfried Roth; WT: wild type

References
Ferlay J, Soerjomataram I, Dikshit R, et al. Cancer incidence and mortality worldwide: sources, methods and major patterns in GLOBOCAN 2012. Int J Cancer. 2015;136:E359–86. Siegel RL, Miller KD, Jemal A. Cancer statistics, 2019. CA Cancer J Clin. 2019;69:7–34. Van Cutsem E, Cervantes A, Adam R, et al. ESMO consensus guidelines for the management of patients with metastatic colorectal cancer. Ann Oncol. 2016;27:1386–422. Van Cutsem E, Kohne CH, Hitre E, et al. Cetuximab and chemotherapy as initial treatment for metastatic colorectal cancer. N Engl J Med. 2009;360:1408–17. Cunningham D, Humblet Y, Siena S, et al. Cetuximab monotherapy and cetuximab plus irinotecan in irinotecan-refractory metastatic colorectal cancer. N Engl J Med. 2004;351:337–45. Siena S, Sartore-Bianchi A, Di Nicolantonio F, Balfour J, Bardelli A. Biomarkers predicting clinical outcome of epidermal growth factor receptor-targeted therapy in metastatic colorectal cancer. J Natl Cancer Inst. 2009;101:1308–24. Douillard JY, Oliner KS, Siena S, et al. Panitumumab-FOLFOX4 treatment and RAS mutations in colorectal cancer. N Engl J Med. 2013;369:1023–34. Van Cutsem E, Lenz HJ, Kohne CH, et al. Fluorouracil, leucovorin, and irinotecan plus cetuximab treatment and RAS mutations in colorectal cancer. J Clin Oncol. 2015;33:692–700. Bokemeyer C, Kohne CH, Ciardiello F, et al. FOLFOX4 plus cetuximab treatment and RAS mutations in colorectal cancer. Eur J Cancer. 2015;51:1243–52. Guren TK, Thomsen M, Kure EH, et al. Cetuximab in treatment of metastatic colorectal cancer: final survival analyses and extended RAS data from the NORDIC-VII study. Br J Cancer. 2017;116:1271–8. Marks DL, Olson RL, Fernandez-Zapico ME. Epigenetic control of the tumor microenvironment.
We want to thank Pimrapat Gebert of the Institute of Biometry and Clinical Epidemiology of the Charité Universitätsmedizin Berlin for the competent statistical consultation of this study. Furthermore, tissue samples were provided by the tissue bank of the University Medical Center Mainz in accordance with the regulations of the tissue biobank and the approval of the ethics committee of the University Medical Center Mainz. Special thanks to Erik Springer, Stefanie Zimmer, Antonietta Valentino, Silke Mitschke and Bonny Adami.
Institute of Pathology, University Medical Center Mainz, Langenbeckstraße 1, 55131, Mainz, Germany
Steve Z. Martin, Daniel C. Wagner, Nina Hörner, Katrin E. Tagscherer & Wilfried Roth
Institute of Pathology, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health, Campus Charité Mitte, 10117, Berlin, Germany
Steve Z. Martin & David Horst
Department of General Visceral and Transplantation Surgery, University Medical Center Mainz, Langenbeckstraße 1, 55131, Mainz, Germany
Hauke Lang
Nina Hörner
David Horst
Katrin E. Tagscherer
Conception and design: WR, SZM, HL. Administrative support: SZM, HL, DH, WR. Study materials and patient data: WR, HL, KT. Experiments: SZM, NH, DW. Data analysis: SZM. Manuscript writing: SZM. All authors read and approved the final manuscript.
Correspondence to Steve Z. Martin.
The study was approved by our institution's ethics committee (Joint Ethics Committee of the Faculty of Economics and Business Administration of Goethe University Frankfurt and the Gutenberg School of Management & Economics of the Faculty of Law, Management and Economics of Johannes Gutenberg University Mainz; [email protected]). Informed written consent was obtained from all participants of this study.
Additional file 1: Figure S1. Depicted are examples of selective nuclear and cytoplasmic localization of Casp 3 immunostain in tumor cells (upper row), depending on the individual stage of apoptosis. The tumor apoptotic fraction is defined as the number of Casp 3 positive tumor cells divided by the total number of tumor cells. Staining of non-epithelial cells or nonspecific staining of cell debris and necrosis (middle row) was ignored. The lower left picture shows a section detail with three tumor cells positively stained for Casp 3 (black arrows) and nonspecific staining (red arrows).
Additional file 2: Supplementary Tables.
Description of data: Tables with raw data of the immunohistochemical and morphometrical analyses as well as the evaluation of PD-1 and PD-L1 immunostains and details of the systemic therapy. Depicted are H&E, EvG, Ki-67 and Casp 3 stained sections of representative treated (Cetuximab, Pembrolizumab and Oxaliplatin) and untreated (control) tissue slices of patient 5. The upper row depicts H&E stained sections; insets show a higher magnification to reveal nuclear detail. The middle row shows EvG-stained sections and Ki-67 immunostain. The lower row shows Casp 3 immunostain; insets show a higher magnification to reveal nuclear detail. Adaptations to Tumor Tissue Slice Culture for Hepatic Colorectal Metastases. More detailed information on the protocol of tumor tissue slice culture is provided. Martin, S.Z., Wagner, D.C., Hörner, N. et al. Ex vivo tissue slice culture system to measure drug-response rates of hepatic metastatic colorectal cancer. BMC Cancer 19, 1030 (2019). https://doi.org/10.1186/s12885-019-6270-4
Ex vivo culture
Colorectal liver metastases
CRLM
Predictive biomarker
Predictive test system
CommonCrawl
Omega constant The omega constant is a mathematical constant defined as the unique real number that satisfies the equation $\Omega e^{\Omega }=1.$ It is the value of W(1), where W is Lambert's W function. The name is derived from the alternate name for Lambert's W function, the omega function. The numerical value of Ω is given by Ω = 0.567143290409783872999968662210... (sequence A030178 in the OEIS). 1/Ω = 1.763222834351896710225201776951... (sequence A030797 in the OEIS). Properties Fixed point representation The defining identity can be expressed, for example, as $\ln({\tfrac {1}{\Omega }})=\Omega .$ or $-\ln(\Omega )=\Omega $ as well as $e^{-\Omega }=\Omega .$ Computation One can calculate Ω iteratively, by starting with an initial guess Ω0, and considering the sequence $\Omega _{n+1}=e^{-\Omega _{n}}.$ This sequence will converge to Ω as n approaches infinity. This is because Ω is an attractive fixed point of the function e−x. It is much more efficient to use the iteration $\Omega _{n+1}={\frac {1+\Omega _{n}}{1+e^{\Omega _{n}}}},$ because the function $f(x)={\frac {1+x}{1+e^{x}}},$ in addition to having the same fixed point, also has a derivative that vanishes there. This guarantees quadratic convergence; that is, the number of correct digits is roughly doubled with each iteration. Using Halley's method, Ω can be approximated with cubic convergence (the number of correct digits is roughly tripled with each iteration): (see also Lambert W function § Numerical evaluation). $\Omega _{j+1}=\Omega _{j}-{\frac {\Omega _{j}e^{\Omega _{j}}-1}{e^{\Omega _{j}}(\Omega _{j}+1)-{\frac {(\Omega _{j}+2)(\Omega _{j}e^{\Omega _{j}}-1)}{2\Omega _{j}+2}}}}.$ Integral representations An identity due to Victor Adamchik is given by the relationship $\int _{-\infty }^{\infty }{\frac {dt}{(e^{t}-t)^{2}+\pi ^{2}}}={\frac {1}{1+\Omega }}.$ Other relations due to Mező[1][2] and Kalugin-Jeffrey-Corless[3] are: $\Omega ={\frac {1}{\pi }}\operatorname {Re} \int _{0}^{\pi }\log \left({\frac {e^{e^{it}}-e^{-it}}{e^{e^{it}}-e^{it}}}\right)dt,$ $\Omega ={\frac {1}{\pi }}\int _{0}^{\pi }\log \left(1+{\frac {\sin t}{t}}e^{t\cot t}\right)dt.$ The latter two identities can be extended to other values of the W function (see also Lambert W function § Representations). Transcendence The constant Ω is transcendental. This can be seen as a direct consequence of the Lindemann–Weierstrass theorem. For a contradiction, suppose that Ω is algebraic. By the theorem, e−Ω is transcendental, but Ω = e−Ω, which is a contradiction. Therefore, it must be transcendental.[4] References 1. Mező, István. "An integral representation for the principal branch of the Lambert W function". Retrieved 24 April 2022. 2. Mező, István (2020). "An integral representation for the Lambert W function". arXiv:2012.02480 [math.CA].. 3. Kalugin, German A.; Jeffrey, David J.; Corless, Robert M. (2011). "Stieltjes, Poisson and other integral representations for functions of Lambert W". arXiv:1103.5640 [math.CV].. 4. Mező, István; Baricz, Árpád (November 2017). "On the Generalization of the Lambert W Function" (PDF). Transactions of the American Mathematical Society. 369 (11): 7928. Retrieved 28 April 2023. External links • Weisstein, Eric W. "Omega Constant". MathWorld. 
• "Omega constant (1,000,000 digits)", Darkside communication group (in Japan), retrieved 2017-12-25 Irrational numbers • Chaitin's (Ω) • Liouville • Prime (ρ) • Omega • Cahen • Logarithm of 2 • Gauss's (G) • Twelfth root of 2 • Apéry's (ζ(3)) • Plastic (ρ) • Square root of 2 • Supergolden ratio (ψ) • Erdős–Borwein (E) • Golden ratio (φ) • Square root of 3 • Square root of pi (√π) • Square root of 5 • Silver ratio (δS) • Square root of 6 • Square root of 7 • Euler's (e) • Pi (π) • Schizophrenic • Transcendental • Trigonometric
Wikipedia
Quotient In arithmetic, a quotient (from Latin: quotiens 'how many times', pronounced /ˈkwoʊʃənt/) is a quantity produced by the division of two numbers.[1] The quotient has widespread use throughout mathematics. It has two definitions: either the integer part of a division (in the case of Euclidean division),[2] or as a fraction or a ratio (in the case of a general division). For example, when dividing 20 (the dividend) by 3 (the divisor), the quotient is 6 (with a remainder of 2) in the first sense, and $6{\tfrac {2}{3}}=6.66...$ (a repeating decimal) in the second sense. Ratios can be defined as dimensionless quotients;[3] non-dimensionless quotients are also known as rates.[4] Notation Main article: Division (mathematics) § Notation The quotient is most frequently encountered as two numbers, or two variables, divided by a horizontal line. The words "dividend" and "divisor" refer to each individual part, while the word "quotient" refers to the whole. ${\dfrac {1}{2}}\quad {\begin{aligned}&\leftarrow {\text{dividend or numerator}}\\&\leftarrow {\text{divisor or denominator}}\end{aligned}}{\Biggr \}}\leftarrow {\text{quotient}}$ Integer part definition The quotient is also less commonly defined as the greatest whole number of times a divisor may be subtracted from a dividend—before making the remainder negative. For example, the divisor 3 may be subtracted up to 6 times from the dividend 20, before the remainder becomes negative: 20 − 3 − 3 − 3 − 3 − 3 − 3 ≥ 0, while 20 − 3 − 3 − 3 − 3 − 3 − 3 − 3 < 0. In this sense, a quotient is the integer part of the ratio of two numbers.[5] Quotient of two integers Main article: Rational number A rational number can be defined as the quotient of two integers (as long as the denominator is non-zero). A more detailed definition goes as follows:[6] A real number r is rational, if and only if it can be expressed as a quotient of two integers with a nonzero denominator. A real number that is not rational is irrational.
Or more formally: Given a real number r, r is rational if and only if there exist integers a and b such that $r={\tfrac {a}{b}}$ and $b\neq 0$. The existence of irrational numbers—numbers that are not a quotient of two integers—was first discovered in geometry, in such things as the ratio of the diagonal to the side in a square.[7] More general quotients Outside of arithmetic, many branches of mathematics have borrowed the word "quotient" to describe structures built by breaking larger structures into pieces. Given a set with an equivalence relation defined on it, a "quotient set" may be created which contains those equivalence classes as elements. A quotient group may be formed by breaking a group into a number of similar cosets, while a quotient space may be formed in a similar process by breaking a vector space into a number of similar linear subspaces. See also • Product (mathematics) • Quotient category • Quotient graph • Integer division • Quotient module • Quotient object • Quotient of a formal language, also left and right quotient • Quotient ring • Quotient set • Quotient space (topology) • Quotient type • Quotition and partition References 1. "Quotient". Dictionary.com. 2. Weisstein, Eric W. "Integer Division". mathworld.wolfram.com. Retrieved 2020-08-27. 3. "ISO 80000-1:2022(en) Quantities and units — Part 1: General". iso.org. Retrieved 2023-07-23. 4. "The quotient of two numbers (or quantities); the relative sizes of two numbers (or quantities)", "The Mathematics Dictionary" 5. Weisstein, Eric W. "Quotient". MathWorld. 6. Epp, Susanna S. (2011-01-01). Discrete mathematics with applications. Brooks/Cole. p. 163. ISBN 9780495391326. OCLC 970542319. 7. "Irrationality of the square root of 2". www.math.utah.edu. Retrieved 2020-08-27.
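As a small computational illustration of the integer-part definition given earlier in this article (a sketch added here, not part of the original text; the function name is chosen only for this example), repeated subtraction of the divisor reproduces the Euclidean quotient and remainder and matches Python's built-in divmod.

```python
def quotient_remainder(dividend, divisor):
    """Compute the Euclidean quotient and remainder by repeated subtraction."""
    if divisor <= 0 or dividend < 0:
        raise ValueError("illustration assumes a positive divisor and a non-negative dividend")
    q, r = 0, dividend
    while r >= divisor:   # subtract the divisor as long as the remainder stays non-negative
        r -= divisor
        q += 1
    return q, r

print(quotient_remainder(20, 3))  # (6, 2), matching the 20 / 3 example above
print(divmod(20, 3))              # (6, 2), the same result via built-in floor division
```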
Wikipedia
July 2019, 18(4): 1847-1867. doi: 10.3934/cpaa.2019086
Nonlinear Dirichlet problem for the nonlocal anisotropic operator $ L_K $
Silvia Frassu, Department of Mathematics and Computer Science, University of Cagliari, Viale L. Merello 92, 09123 Cagliari, Italy
Received July 2018 Revised September 2018 Published January 2019
Fund Project: The author is a member of GNAMPA (Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni) of INdAM (Istituto Nazionale di Alta Matematica 'Francesco Severi')
In this paper we study an equation driven by a nonlocal anisotropic operator with homogeneous Dirichlet boundary conditions. We find at least three nontrivial solutions: one positive, one negative and one of unknown sign, using variational methods and Morse theory. We present some results about the regularity of solutions, such as an $ L^{\infty} $ bound and Hopf's lemma; for the latter we first consider a nonnegative nonlinearity and then a strictly negative one. Moreover, we prove that, for the corresponding functional, local minimizers with respect to a $ C^0 $-topology weighted with a suitable power of the distance from the boundary are actually local minimizers in the $ X(\Omega) $-topology.
Keywords: Integrodifferential operators, Variational methods, Fractional Laplacian, Local minimizers, Mountain Pass Theorem.
Mathematics Subject Classification: 35R09, 35R11, 47G20.
Citation: Silvia Frassu. Nonlinear Dirichlet problem for the nonlocal anisotropic operator $ L_K $. Communications on Pure & Applied Analysis, 2019, 18 (4) : 1847-1867. doi: 10.3934/cpaa.2019086
CommonCrawl
Let $n$ be a positive integer. How many different values can $\gcd(n + 5, n + 11)$ attain? Let $d = \gcd(n + 5, n + 11)$, so $d$ divides both $n + 5$ and $n + 11$. Then $d$ divides $(n + 11) - (n + 5) = 6$. Therefore, $d$ can only be 1, 2, 3, or 6. If $n = 2$, then $\gcd(n + 5, n + 11) = \gcd(7,13) = 1$. If $n = 3$, then $\gcd(n + 5, n + 11) = \gcd(8,14) = 2$. If $n = 4$, then $\gcd(n + 5, n + 11) = \gcd(9,15) = 3$. If $n = 1$, then $\gcd(n + 5, n + 11) = \gcd(6,12) = 6$. Hence, all the values 1, 2, 3, and 6 are attainable, for a total of $\boxed{4}$ possible values.
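A brief computational check of the argument above (an illustrative sketch, not part of the original solution) enumerates $\gcd(n + 5, n + 11)$ for small $n$ and confirms that exactly the four positive divisors of 6 occur.

```python
from math import gcd

# Collect the distinct values of gcd(n + 5, n + 11) over a range of positive integers.
values = {gcd(n + 5, n + 11) for n in range(1, 1000)}
print(sorted(values))   # [1, 2, 3, 6] -- exactly the positive divisors of 6
print(len(values))      # 4
```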
Math Dataset
Spontaneous parametric down-conversion process can split photons into type II photon pairs with mutually perpendicular polarization.
Quantum entanglement is the phenomenon that occurs when a group of particles are generated, interact, or share spatial proximity in a way such that the quantum state of each particle of the group cannot be described independently of the state of the others, including when the particles are separated by a large distance. The topic of quantum entanglement is at the heart of the disparity between classical and quantum physics: entanglement is a primary feature of quantum mechanics not present in classical mechanics.[1] Measurements of physical properties such as position, momentum, spin, and polarization performed on entangled particles can, in some cases, be found to be perfectly correlated. For example, if a pair of entangled particles is generated such that their total spin is known to be zero, and one particle is found to have clockwise spin on a first axis, then the spin of the other particle, measured on the same axis, is found to be anticlockwise. However, this behavior gives rise to seemingly paradoxical effects: any measurement of a particle's properties results in an irreversible wave function collapse of that particle and changes the original quantum state. With entangled particles, such measurements affect the entangled system as a whole. Such phenomena were the subject of a 1935 paper by Albert Einstein, Boris Podolsky, and Nathan Rosen,[2] and several papers by Erwin Schrödinger shortly thereafter,[3][4] describing what came to be known as the EPR paradox. Einstein and others considered such behavior impossible, as it violated the local realism view of causality (Einstein referring to it as "spooky action at a distance")[5] and argued that the accepted formulation of quantum mechanics must therefore be incomplete. Later, however, the counterintuitive predictions of quantum mechanics were verified[6][7][8] in tests where polarization or spin of entangled particles was measured at separate locations, statistically violating Bell's inequality.
In earlier tests, it could not be ruled out that the result at one point could have been subtly transmitted to the remote point, affecting the outcome at the second location.[8] However, so-called "loophole-free" Bell tests have been performed where the locations were sufficiently separated that communications at the speed of light would have taken longer—in one case, 10,000 times longer—than the interval between the measurements.[7][6] According to some interpretations of quantum mechanics, the effect of one measurement occurs instantly. Other interpretations which do not recognize wavefunction collapse dispute that there is any "effect" at all. However, all interpretations agree that entanglement produces correlation between the measurements, and that the mutual information between the entangled particles can be exploited, but that any transmission of information at faster-than-light speeds is impossible.[9][10] Quantum entanglement has been demonstrated experimentally with photons,[11][12] neutrinos,[13] electrons,[14][15] molecules as large as buckyballs,[16][17] and even small diamonds.[18] The utilization of entanglement in communication, computation and quantum radar is a very active area of research and development. Despite much popular thought to the contrary, quantum entanglement cannot be used for faster-than-light communication.[19] Further information: Hidden-variable theory Article headline regarding the Einstein–Podolsky–Rosen (EPR) paradox paper, in the May 4, 1935 issue of The New York Times. In 1935, Albert Einstein, Boris Podolsky and Nathan Rosen published a paper on the counterintuitive predictions that quantum mechanics makes for pairs of objects prepared together in a particular way.[2] In this study, the three formulated the Einstein–Podolsky–Rosen paradox (EPR paradox), a thought experiment that attempted to show that "the quantum-mechanical description of physical reality given by wave functions is not complete."[2] However, the three scientists did not coin the word entanglement, nor did they generalize the special properties of the quantum state they considered. Following the EPR paper, Erwin Schrödinger wrote a letter to Einstein in German in which he used the word Verschränkung (translated by himself as entanglement) "to describe the correlations between two particles that interact and then separate, as in the EPR experiment."[20] Schrödinger shortly thereafter published a seminal paper defining and discussing the notion of "entanglement." In the paper, he recognized the importance of the concept, and stated:[3] "I would not call [entanglement] one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought." Like Einstein, Schrödinger was dissatisfied with the concept of entanglement, because it seemed to violate the speed limit on the transmission of information implicit in the theory of relativity.[21] Einstein later famously derided entanglement as "spukhafte Fernwirkung"[22] or "spooky action at a distance." The EPR paper generated significant interest among physicists, which inspired much discussion about the foundations of quantum mechanics and Bohm's interpretation in particular, but produced relatively little other published work. 
Despite the interest, the weak point in EPR's argument was not discovered until 1964, when John Stewart Bell proved that one of their key assumptions, the principle of locality, as applied to the kind of hidden variables interpretation hoped for by EPR, was mathematically inconsistent with the predictions of quantum theory. Specifically, Bell demonstrated an upper limit, seen in Bell's inequality, regarding the strength of correlations that can be produced in any theory obeying local realism, and showed that quantum theory predicts violations of this limit for certain entangled systems.[23] His inequality is experimentally testable, and there have been numerous relevant experiments, starting with the pioneering work of Stuart Freedman and John Clauser in 1972[24] and Alain Aspect's experiments in 1982.[25] An early experimental breakthrough was due to Carl Kocher,[11][12] who already in 1967 presented an apparatus in which two photons successively emitted from a calcium atom were shown to be entangled – the first case of entangled visible light. The two photons passed diametrically positioned parallel polarizers with higher probability than classically predicted but with correlations in quantitative agreement with quantum mechanical calculations. He also showed that the correlation varied as the squared cosine of the angle between the polarizer settings[12] and decreased exponentially with time lag between emitted photons.[26] Kocher's apparatus, equipped with better polarizers, was used by Freedman and Clauser who could confirm the cosine-squared dependence and use it to demonstrate a violation of Bell's inequality for a set of fixed angles.[24] All these experiments have shown agreement with quantum mechanics rather than the principle of local realism. For decades, each had left open at least one loophole by which it was possible to question the validity of the results. However, in 2015 an experiment was performed that simultaneously closed both the detection and locality loopholes, and was heralded as "loophole-free"; this experiment ruled out a large class of local realism theories with certainty.[27] Aspect writes that "... no experiment ... can be said to be totally loophole-free," but he says the experiments "remove the last doubts that we should renounce" local hidden variables, and refers to examples of remaining loopholes as being "far fetched" and "foreign to the usual way of reasoning in physics."[28] Bell's work raised the possibility of using these super-strong correlations as a resource for communication. It led to the 1984 discovery of quantum key distribution protocols, most famously BB84 by Charles H. Bennett and Gilles Brassard[29] and E91 by Artur Ekert.[30] Although BB84 does not use entanglement, Ekert's protocol uses the violation of a Bell's inequality as a proof of security. In 2022, the Nobel Prize in Physics was awarded to Aspect, Clauser, and Anton Zeilinger "for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science".[31] Meaning of entanglement An entangled system is defined to be one whose quantum state cannot be factored as a product of states of its local constituents; that is to say, they are not individual particles but are an inseparable whole. In entanglement, one constituent cannot be fully described without considering the other(s). 
The state of a composite system is always expressible as a sum, or superposition, of products of states of local constituents; it is entangled if this sum cannot be written as a single product term. Quantum systems can become entangled through various types of interactions. For some ways in which entanglement may be achieved for experimental purposes, see the section below on methods. Entanglement is broken when the entangled particles decohere through interaction with the environment; for example, when a measurement is made.[32] As an example of entanglement: a subatomic particle decays into an entangled pair of other particles. The decay events obey the various conservation laws, and as a result, the measurement outcomes of one daughter particle must be highly correlated with the measurement outcomes of the other daughter particle (so that the total momenta, angular momenta, energy, and so forth remains roughly the same before and after this process). For instance, a spin-zero particle could decay into a pair of spin-1/2 particles. Since the total spin before and after this decay must be zero (conservation of angular momentum), whenever the first particle is measured to be spin up on some axis, the other, when measured on the same axis, is always found to be spin down. (This is called the spin anti-correlated case; and if the prior probabilities for measuring each spin are equal, the pair is said to be in the singlet state.) The above result may or may not be perceived as surprising. A classical system would display the same property, and a hidden variable theory would certainly be required to do so, based on conservation of angular momentum in classical and quantum mechanics alike. The difference is that a classical system has definite values for all the observables all along, while the quantum system does not. In a sense to be discussed below, the quantum system considered here seems to acquire a probability distribution for the outcome of a measurement of the spin along any axis of the other particle upon measurement of the first particle. This probability distribution is in general different from what it would be without measurement of the first particle. This may certainly be perceived as surprising in the case of spatially separated entangled particles. The paradox is that a measurement made on either of the particles apparently collapses the state of the entire entangled system—and does so instantaneously, before any information about the measurement result could have been communicated to the other particle (assuming that information cannot travel faster than light) and hence assured the "proper" outcome of the measurement of the other part of the entangled pair. In the Copenhagen interpretation, the result of a spin measurement on one of the particles is a collapse into a state in which each particle has a definite spin (either up or down) along the axis of measurement. The outcome is taken to be random, with each possibility having a probability of 50%. However, if both spins are measured along the same axis, they are found to be anti-correlated. This means that the random outcome of the measurement made on one particle seems to have been transmitted to the other, so that it can make the "right choice" when it too is measured.[33] The distance and timing of the measurements can be chosen so as to make the interval between the two measurements spacelike, hence, any causal effect connecting the events would have to travel faster than light. 
According to the principles of special relativity, it is not possible for any information to travel between two such measuring events. It is not even possible to say which of the measurements came first. For two spacelike separated events x1 and x2 there are inertial frames in which x1 is first and others in which x2 is first. Therefore, the correlation between the two measurements cannot be explained as one measurement determining the other: different observers would disagree about the role of cause and effect. (In fact similar paradoxes can arise even without entanglement: the position of a single particle is spread out over space, and two widely separated detectors attempting to detect the particle in two different places must instantaneously attain appropriate correlation, so that they do not both detect the particle.) Hidden variables theory A possible resolution to the paradox is to assume that quantum theory is incomplete, and the result of measurements depends on predetermined "hidden variables".[34] The state of the particles being measured contains some hidden variables, whose values effectively determine, right from the moment of separation, what the outcomes of the spin measurements are going to be. This would mean that each particle carries all the required information with it, and nothing needs to be transmitted from one particle to the other at the time of measurement. Einstein and others (see the previous section) originally believed this was the only way out of the paradox, and the accepted quantum mechanical description (with a random measurement outcome) must be incomplete. Violations of Bell's inequality Local hidden variable theories fail, however, when measurements of the spin of entangled particles along different axes are considered. If a large number of pairs of such measurements are made (on a large number of pairs of entangled particles), then statistically, if the local realist or hidden variables view were correct, the results would always satisfy Bell's inequality. A number of experiments have shown in practice that Bell's inequality is not satisfied. However, prior to 2015, all of these had loophole problems that were considered the most important by the community of physicists.[35][36] When measurements of the entangled particles are made in moving relativistic reference frames, in which each measurement (in its own relativistic time frame) occurs before the other, the measurement results remain correlated.[37][38] The fundamental issue about measuring spin along different axes is that these measurements cannot have definite values at the same time―they are incompatible in the sense that these measurements' maximum simultaneous precision is constrained by the uncertainty principle. This is contrary to what is found in classical physics, where any number of properties can be measured simultaneously with arbitrary accuracy. It has been proven mathematically that compatible measurements cannot show Bell-inequality-violating correlations,[39] and thus entanglement is a fundamentally non-classical phenomenon. Notable experimental results proving quantum entanglement The first experiment that verified Einstein's spooky action at a distance (entanglement) was successfully corroborated in a lab by Chien-Shiung Wu and a colleague named I. Shaknov in 1949, and was published on new year's day in 1950. 
The result specifically proved the quantum correlations of a pair of photons.[40] In experiments in 2012 and 2013, polarization correlation was created between photons that never coexisted in time.[41][42] The authors claimed that this result was achieved by entanglement swapping between two pairs of entangled photons after measuring the polarization of one photon of the early pair, and that it proves that quantum non-locality applies not only to space but also to time. In three independent experiments in 2013, it was shown that classically communicated separable quantum states can be used to carry entangled states.[43] The first loophole-free Bell test was held by Ronald Hanson of the Delft University of Technology in 2015, confirming the violation of Bell's inequality.[44] In August 2014, Brazilian researcher Gabriela Barreto Lemos and team were able to "take pictures" of objects using photons that had not interacted with the subjects, but were entangled with photons that did interact with such objects. Lemos, from the University of Vienna, is confident that this new quantum imaging technique could find application where low light imaging is imperative, in fields like biological or medical imaging.[45] Since 2016, various companies, for example IBM and Microsoft, have successfully created quantum computers that allowed developers and tech enthusiasts to freely experiment with concepts of quantum mechanics including quantum entanglement.[46] Mystery of time There have been suggestions to look at the concept of time as an emergent phenomenon that is a side effect of quantum entanglement.[47][48] In other words, time is an entanglement phenomenon, which places all equal clock readings (of correctly prepared clocks, or of any objects usable as clocks) into the same history. This was first fully theorized by Don Page and William Wootters in 1983.[49] The Wheeler–DeWitt equation that combines general relativity and quantum mechanics – by leaving out time altogether – was introduced in the 1960s and it was taken up again in 1983, when Page and Wootters made a solution based on quantum entanglement. Page and Wootters argued that entanglement can be used to measure time.[50] Emergent gravity Based on AdS/CFT correspondence, Mark Van Raamsdonk suggested that spacetime arises as an emergent phenomenon of the quantum degrees of freedom that are entangled and live in the boundary of the space-time.[51] Induced gravity can emerge from the entanglement first law.[52][53] Non-locality and entanglement In the media and popular science, quantum non-locality is often portrayed as being equivalent to entanglement. While this is true for pure bipartite quantum states, in general entanglement is only necessary for non-local correlations, but there exist mixed entangled states that do not produce such correlations.[54] A well-known example is the Werner states that are entangled for certain values of $p_{sym}$, but can always be described using local hidden variables.[55] Moreover, it was shown that, for arbitrary numbers of particles, there exist states that are genuinely entangled but admit a local model.[56]
This is, in particular, true for all distillable states. However, it remains an open question whether all entangled states become non-local given sufficiently many copies.[57] In short, entanglement of a state shared by two particles is necessary but not sufficient for that state to be non-local. It is important to recognize that entanglement is more commonly viewed as an algebraic concept, noted for being a prerequisite to non-locality as well as to quantum teleportation and to superdense coding, whereas non-locality is defined according to experimental statistics and is much more involved with the foundations and interpretations of quantum mechanics.[58] Quantum mechanical framework The following subsections are for those with a good working knowledge of the formal, mathematical description of quantum mechanics, including familiarity with the formalism and theoretical framework developed in the articles: bra–ket notation and mathematical formulation of quantum mechanics. Pure states Consider two arbitrary quantum systems A and B, with respective Hilbert spaces $H_A$ and $H_B$. The Hilbert space of the composite system is the tensor product $H_{A}\otimes H_{B}$. If the first system is in state $|\psi \rangle _{A}$ and the second in state $|\phi \rangle _{B}$, the state of the composite system is $|\psi \rangle _{A}\otimes |\phi \rangle _{B}$. States of the composite system that can be represented in this form are called separable states, or product states. Not all states are separable states (and thus product states). Fix a basis $\{|i\rangle _{A}\}$ for $H_A$ and a basis $\{|j\rangle _{B}\}$ for $H_B$. The most general state in $H_{A}\otimes H_{B}$ is of the form $|\psi \rangle _{AB}=\sum _{i,j}c_{ij}|i\rangle _{A}\otimes |j\rangle _{B}$. This state is separable if there exist vectors $[c_{i}^{A}], [c_{j}^{B}]$ so that $c_{ij}=c_{i}^{A}c_{j}^{B}$, yielding $|\psi \rangle _{A}=\sum _{i}c_{i}^{A}|i\rangle _{A}$ and $|\phi \rangle _{B}=\sum _{j}c_{j}^{B}|j\rangle _{B}$. It is inseparable if for any vectors $[c_{i}^{A}], [c_{j}^{B}]$ at least for one pair of coordinates $c_{i}^{A}, c_{j}^{B}$ we have $c_{ij}\neq c_{i}^{A}c_{j}^{B}$. If a state is inseparable, it is called an 'entangled state'. For example, given two basis vectors $\{|0\rangle _{A},|1\rangle _{A}\}$ of $H_A$ and two basis vectors $\{|0\rangle _{B},|1\rangle _{B}\}$ of $H_B$, the following is an entangled state: ${\tfrac {1}{\sqrt {2}}}\left(|0\rangle _{A}\otimes |1\rangle _{B}-|1\rangle _{A}\otimes |0\rangle _{B}\right).$ If the composite system is in this state, it is impossible to attribute to either system A or system B a definite pure state. Another way to say this is that while the von Neumann entropy of the whole state is zero (as it is for any pure state), the entropy of the subsystems is greater than zero. In this sense, the systems are "entangled".
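As a concrete illustration of the separability criterion just described (a sketch added for this discussion, not part of the original article; it assumes NumPy is available, and the function name is chosen only for this example), the coefficients $c_{ij}$ can be arranged into a matrix and its rank inspected: a pure bipartite state is a product state exactly when that matrix has rank one.

```python
import numpy as np

def schmidt_rank(state, dim_a, dim_b, tol=1e-12):
    """Number of non-negligible singular values of the coefficient matrix c_ij."""
    c = np.asarray(state, dtype=complex).reshape(dim_a, dim_b)
    singular_values = np.linalg.svd(c, compute_uv=False)
    return int(np.sum(singular_values > tol))

# Product state |0>_A (x) |1>_B  ->  rank 1 (separable)
product = np.kron([1, 0], [0, 1])
# (|01> - |10>)/sqrt(2)  ->  rank 2 (entangled)
entangled = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)

print(schmidt_rank(product, 2, 2))    # 1
print(schmidt_rank(entangled, 2, 2))  # 2
```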
This has specific empirical ramifications for interferometry.[59] The above example is one of four Bell states, which are (maximally) entangled pure states (pure states of the $H_{A}\otimes H_{B}$ space, but which cannot be separated into pure states of each $H_A$ and $H_B$). Now suppose Alice is an observer for system A, and Bob is an observer for system B. If in the entangled state given above Alice makes a measurement in the $\{|0\rangle ,|1\rangle \}$ eigenbasis of A, there are two possible outcomes, occurring with equal probability:[60] Alice measures 0, and the state of the system collapses to $|0\rangle _{A}|1\rangle _{B}$; or Alice measures 1, and the state of the system collapses to $|1\rangle _{A}|0\rangle _{B}$. If the former occurs, then any subsequent measurement performed by Bob, in the same basis, will always return 1. If the latter occurs (Alice measures 1), then Bob's measurement will return 0 with certainty. Thus, system B has been altered by Alice performing a local measurement on system A. This remains true even if the systems A and B are spatially separated. This is the foundation of the EPR paradox. The outcome of Alice's measurement is random. Alice cannot decide which state to collapse the composite system into, and therefore cannot transmit information to Bob by acting on her system. Causality is thus preserved, in this particular scheme. For the general argument, see no-communication theorem. As mentioned above, a state of a quantum system is given by a unit vector in a Hilbert space. More generally, if one has less information about the system, then one calls it an 'ensemble' and describes it by a density matrix, which is a positive-semidefinite matrix, or a trace class when the state space is infinite-dimensional, and has trace 1. Again, by the spectral theorem, such a matrix takes the general form: $\rho =\sum _{i}w_{i}|\alpha _{i}\rangle \langle \alpha _{i}|,$ where the $w_i$ are positive-valued probabilities (they sum up to 1), the vectors $\alpha_i$ are unit vectors, and in the infinite-dimensional case, we would take the closure of such states in the trace norm. We can interpret $\rho$ as representing an ensemble where $w_i$ is the proportion of the ensemble whose states are $|\alpha _{i}\rangle$. When a mixed state has rank 1, it therefore describes a 'pure ensemble'. When there is less than total information about the state of a quantum system we need density matrices to represent the state. Experimentally, a mixed ensemble might be realized as follows. Consider a "black box" apparatus that spits electrons towards an observer. The electrons' Hilbert spaces are identical. The apparatus might produce electrons that are all in the same state; in this case, the electrons received by the observer are then a pure ensemble. However, the apparatus could produce electrons in different states. For example, it could produce two populations of electrons: one with state $|\mathbf {z} +\rangle$ with spins aligned in the positive z direction, and the other with state $|\mathbf {y} -\rangle$ with spins aligned in the negative y direction. Generally, this is a mixed ensemble, as there can be any number of populations, each corresponding to a different state. Following the definition above, for a bipartite composite system, mixed states are just density matrices on $H_{A}\otimes H_{B}$.
That is, it has the general form

$\rho = \sum_i w_i \left[ \sum_j \bar{c}_{ij} \left( |\alpha_{ij}\rangle \otimes |\beta_{ij}\rangle \right) \right] \left[ \sum_k c_{ik} \left( \langle\alpha_{ik}| \otimes \langle\beta_{ik}| \right) \right],$

where the $w_i$ are positively valued probabilities, $\sum_j |c_{ij}|^2 = 1$, and the vectors are unit vectors. This is self-adjoint and positive and has trace 1.

Extending the definition of separability from the pure case, we say that a mixed state is separable if it can be written as[61]: 131–132

$\rho = \sum_i w_i \rho_i^A \otimes \rho_i^B,$

where the $w_i$ are positively valued probabilities and the $\rho_i^A$'s and $\rho_i^B$'s are themselves mixed states (density operators) on the subsystems A and B respectively. In other words, a state is separable if it is a probability distribution over uncorrelated states, or product states. By writing the density matrices as sums of pure ensembles and expanding, we may assume without loss of generality that $\rho_i^A$ and $\rho_i^B$ are themselves pure ensembles. A state is then said to be entangled if it is not separable.

In general, finding out whether or not a mixed state is entangled is considered difficult. The general bipartite case has been shown to be NP-hard.[62] For the 2 × 2 and 2 × 3 cases, a necessary and sufficient criterion for separability is given by the famous Positive Partial Transpose (PPT) condition.[63]

Reduced density matrices

The idea of a reduced density matrix was introduced by Paul Dirac in 1930.[64] Consider as above systems A and B, each with a Hilbert space $H_A$, $H_B$. Let the state of the composite system be $|\Psi\rangle \in H_A \otimes H_B$. As indicated above, in general there is no way to associate a pure state to the component system A. However, it still is possible to associate a density matrix. Let $\rho_T = |\Psi\rangle\langle\Psi|$, which is the projection operator onto this state. The state of A is the partial trace of $\rho_T$ over a basis of system B:

$\rho_A \;\stackrel{\mathrm{def}}{=}\; \sum_j^{N_B} \left( I_A \otimes \langle j|_B \right) \left( |\Psi\rangle\langle\Psi| \right) \left( I_A \otimes |j\rangle_B \right) = \operatorname{Tr}_B \, \rho_T.$

The sum runs over a basis $\{|j\rangle_B\}$ of $H_B$, with $N_B := \dim(H_B)$, and $I_A$ is the identity operator on $H_A$. $\rho_A$ is sometimes called the reduced density matrix of $\rho$ on subsystem A. Colloquially, we "trace out" system B to obtain the reduced density matrix on A.

For example, the reduced density matrix of A for the entangled state

$\tfrac{1}{\sqrt{2}} \left( |0\rangle_A \otimes |1\rangle_B - |1\rangle_A \otimes |0\rangle_B \right),$

discussed above is

$\rho_A = \tfrac{1}{2} \left( |0\rangle_A \langle 0|_A + |1\rangle_A \langle 1|_A \right).$

This demonstrates that, as expected, the reduced density matrix for an entangled pure ensemble is a mixed ensemble.
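The partial trace above can be carried out numerically. The following sketch (Python with NumPy; the function name partial_trace_B is an illustrative choice) traces out subsystem B from the projector $|\Psi\rangle\langle\Psi|$ of the entangled state and recovers the maximally mixed reduced state $\rho_A = \operatorname{diag}(1/2, 1/2)$.

import numpy as np

def partial_trace_B(rho, dim_a, dim_b):
    """Trace out subsystem B of a density matrix acting on H_A (x) H_B."""
    r = rho.reshape(dim_a, dim_b, dim_a, dim_b)
    # rho_A[i, k] = sum_j rho[(i, j), (k, j)]
    return np.einsum('ijkj->ik', r)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
psi = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)

rho_T = np.outer(psi, psi.conj())    # projector |Psi><Psi|
rho_A = partial_trace_B(rho_T, 2, 2)

print(np.round(rho_A, 3))            # [[0.5 0. ] [0.  0.5]] -> a mixed state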
Also not surprisingly, the density matrix of A for the pure product state $|\psi\rangle_A \otimes |\phi\rangle_B$ discussed above is $\rho_A = |\psi\rangle_A \langle\psi|_A$. In general, a bipartite pure state ρ is entangled if and only if its reduced states are mixed rather than pure.

Two applications that use them

Reduced density matrices were explicitly calculated in different spin chains with a unique ground state. An example is the one-dimensional AKLT spin chain:[65] the ground state can be divided into a block and an environment. The reduced density matrix of the block is proportional to a projector onto a degenerate ground state of another Hamiltonian. The reduced density matrix was also evaluated for XY spin chains, where it has full rank. It was proved that in the thermodynamic limit, the spectrum of the reduced density matrix of a large block of spins is an exact geometric sequence[66] in this case.

Entanglement as a resource

In quantum information theory, entangled states are considered a 'resource', i.e., something costly to produce and that allows valuable transformations to be implemented.[67][68] The setting in which this perspective is most evident is that of "distant labs", i.e., two quantum systems labeled "A" and "B" on each of which arbitrary quantum operations can be performed, but which do not interact with each other quantum mechanically. The only interaction allowed is the exchange of classical information, which combined with the most general local quantum operations gives rise to the class of operations called LOCC (local operations and classical communication). These operations do not allow the production of entangled states between systems A and B. But if A and B are provided with a supply of entangled states, then these, together with LOCC operations, can enable a larger class of transformations. For example, an interaction between a qubit of A and a qubit of B can be realized by first teleporting A's qubit to B, then letting it interact with B's qubit (which is now a LOCC operation, since both qubits are in B's lab) and then teleporting the qubit back to A. Two maximally entangled states of two qubits are used up in this process. Thus entangled states are a resource that enables the realization of quantum interactions (or of quantum channels) in a setting where only LOCC are available, but they are consumed in the process. There are other applications where entanglement can be seen as a resource, e.g., private communication or distinguishing quantum states.[69]

Classification of entanglement

Not all quantum states are equally valuable as a resource. To quantify this value, different entanglement measures (see below) can be used that assign a numerical value to each quantum state. However, it is often interesting to settle for a coarser way to compare quantum states. This gives rise to different classification schemes. Most entanglement classes are defined based on whether states can be converted to other states using LOCC or a subclass of these operations. The smaller the set of allowed operations, the finer the classification. Important examples are:

If two states can be transformed into each other by a local unitary operation, they are said to be in the same LU class. This is the finest of the usually considered classes. Two states in the same LU class have the same value for entanglement measures and the same value as a resource in the distant-labs setting.
There is an infinite number of different LU classes (even in the simplest case of two qubits in a pure state).[70][71]

If two states can be transformed into each other by local operations including measurements with probability larger than 0, they are said to be in the same 'SLOCC class' ("stochastic LOCC"). Qualitatively, two states $\rho_1$ and $\rho_2$ in the same SLOCC class are equally powerful (since either can be transformed into the other and then used for whatever that state makes possible), but since the transformations $\rho_1 \to \rho_2$ and $\rho_2 \to \rho_1$ may succeed with different probability, they are no longer equally valuable. E.g., for two pure qubits there are only two SLOCC classes: the entangled states (which contains both the (maximally entangled) Bell states and weakly entangled states like $|00\rangle + 0.01\,|11\rangle$) and the separable ones (i.e., product states like $|00\rangle$).[72][73]

Instead of considering transformations of single copies of a state (like $\rho_1 \to \rho_2$) one can define classes based on the possibility of multi-copy transformations. E.g., there are examples when $\rho_1 \to \rho_2$ is impossible by LOCC, but $\rho_1 \otimes \rho_1 \to \rho_2$ is possible. A very important (and very coarse) classification is based on whether it is possible to transform an arbitrarily large number of copies of a state $\rho$ into at least one pure entangled state. States that have this property are called distillable. These states are the most useful quantum states since, given enough of them, they can be transformed (with local operations) into any entangled state and hence allow for all possible uses. It came initially as a surprise that not all entangled states are distillable; those that are not are called 'bound entangled'.[74][69]

A different entanglement classification is based on what the quantum correlations present in a state allow A and B to do: one distinguishes three subsets of entangled states: (1) the non-local states, which produce correlations that cannot be explained by a local hidden variable model and thus violate a Bell inequality, (2) the steerable states, which contain sufficient correlations for A to modify ("steer") by local measurements the conditional reduced state of B in such a way that A can prove to B that the state they possess is indeed entangled, and finally (3) those entangled states that are neither non-local nor steerable. All three sets are non-empty.[75]

Entropy

In this section, the entropy of a mixed state is discussed, as well as how it can be viewed as a measure of quantum entanglement.

(Figure: von Neumann entropy versus eigenvalue for a bipartite 2-level pure state. When the eigenvalue equals 0.5, the von Neumann entropy is at a maximum, corresponding to maximum entanglement.)

In classical information theory, the Shannon entropy H is associated with a probability distribution $p_1, \cdots, p_n$ in the following way:[76]

$H(p_1, \cdots, p_n) = -\sum_i p_i \log_2 p_i.$

Since a mixed state ρ is a probability distribution over an ensemble, this leads naturally to the definition of the von Neumann entropy:

$S(\rho) = -\operatorname{Tr}\left( \rho \log_2 \rho \right).$
In general, one uses the Borel functional calculus to calculate a non-polynomial function such as $\log_2 \rho$. If the nonnegative operator ρ acts on a finite-dimensional Hilbert space and has eigenvalues $\lambda_1, \cdots, \lambda_n$, then $\log_2 \rho$ turns out to be nothing more than the operator with the same eigenvectors, but the eigenvalues $\log_2 \lambda_1, \cdots, \log_2 \lambda_n$. The von Neumann entropy is then the Shannon entropy of the eigenvalues:

$S(\rho) = -\operatorname{Tr}\left( \rho \log_2 \rho \right) = -\sum_i \lambda_i \log_2 \lambda_i.$

Since an event of probability 0 should not contribute to the entropy, and given that $\lim_{p \to 0} p \log p = 0$, the convention $0 \log 0 = 0$ is adopted. This extends to the infinite-dimensional case as well: if ρ has spectral resolution $\rho = \int \lambda \, dP_\lambda$, assume the same convention when calculating

$\rho \log_2 \rho = \int \lambda \log_2 \lambda \, dP_\lambda.$

As in statistical mechanics, the more uncertainty (number of microstates) the system possesses, the larger the entropy. For example, the entropy of any pure state is zero, which is unsurprising since there is no uncertainty about a system in a pure state. The entropy of either of the two subsystems of the entangled state discussed above is $\log(2)$ (which can be shown to be the maximum entropy for 2 × 2 mixed states).

As a measure of entanglement

Entropy provides one tool that can be used to quantify entanglement, although other entanglement measures exist.[77][78] If the overall system is pure, the entropy of one subsystem can be used to measure its degree of entanglement with the other subsystems. For bipartite pure states, the von Neumann entropy of reduced states is the unique measure of entanglement in the sense that it is the only function on the family of states that satisfies certain axioms required of an entanglement measure.[79]

It is a classical result that the Shannon entropy achieves its maximum at, and only at, the uniform probability distribution {1/n, ..., 1/n}. Therefore, a bipartite pure state $\rho \in H_A \otimes H_B$ is said to be a maximally entangled state if the reduced state of each subsystem of ρ is the diagonal matrix

$\begin{bmatrix} \frac{1}{n} & & \\ & \ddots & \\ & & \frac{1}{n} \end{bmatrix}.$

For mixed states, the reduced von Neumann entropy is not the only reasonable entanglement measure.

As an aside, the information-theoretic definition is closely related to entropy in the sense of statistical mechanics[80] (comparing the two definitions in the present context, it is customary to set the Boltzmann constant $k = 1$). For example, by properties of the Borel functional calculus, we see that for any unitary operator U,

$S(\rho) = S\left( U \rho U^* \right).$

Indeed, without this property, the von Neumann entropy would not be well-defined. In particular, U could be the time evolution operator of the system, i.e.,

$U(t) = \exp\left( \frac{-iHt}{\hbar} \right),$

where H is the Hamiltonian of the system. Here the entropy is unchanged.
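Tying the preceding definitions back to the two-qubit example, the sketch below (Python with NumPy; the eigenvalue cutoff is simply an illustrative way of implementing the convention $0 \log 0 = 0$) computes $S(\rho) = -\sum_i \lambda_i \log_2 \lambda_i$ from the eigenvalues of a reduced density matrix. The reduced state of a product state gives $S = 0$, while the maximally mixed reduced state of the entangled state above gives $S = \log_2 2 = 1$.

import numpy as np

def von_neumann_entropy(rho, tol=1e-12):
    """S(rho) = -Tr(rho log2 rho), evaluated via the eigenvalues of rho."""
    eigenvalues = np.linalg.eigvalsh(rho)
    eigenvalues = eigenvalues[eigenvalues > tol]  # convention: 0 log 0 = 0
    return float(-np.sum(eigenvalues * np.log2(eigenvalues)))

pure_reduced = np.diag([1.0, 0.0])   # reduced state of a product state
maximally_mixed = np.eye(2) / 2      # reduced state of the entangled state

print(von_neumann_entropy(pure_reduced))     # 0.0
print(von_neumann_entropy(maximally_mixed))  # 1.0 (i.e., log2 2)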
The reversibility of a process is associated with the resulting entropy change, i.e., a process is reversible if, and only if, it leaves the entropy of the system invariant. Therefore, the march of the arrow of time towards thermodynamic equilibrium is simply the growing spread of quantum entanglement.[81] This provides a connection between quantum information theory and thermodynamics. Rényi entropy can also be used as a measure of entanglement.

Entanglement measures

Entanglement measures quantify the amount of entanglement in a quantum state (often viewed as a bipartite state). As mentioned above, entanglement entropy is the standard measure of entanglement for pure states (but is no longer a measure of entanglement for mixed states). For mixed states, there are some entanglement measures in the literature[77] and no single one is standard. These include the entanglement cost, distillable entanglement, entanglement of formation, relative entropy of entanglement, squashed entanglement, and logarithmic negativity. Most (but not all) of these entanglement measures reduce for pure states to entanglement entropy, and are difficult (NP-hard) to compute.[82]

The Reeh–Schlieder theorem of quantum field theory is sometimes seen as an analogue of quantum entanglement.

Entanglement has many applications in quantum information theory. With the aid of entanglement, otherwise impossible tasks may be achieved. Among the best-known applications of entanglement are superdense coding and quantum teleportation.[83] Most researchers believe that entanglement is necessary to realize quantum computing (although this is disputed by some).[84] Entanglement is used in some protocols of quantum cryptography,[85][86] but proving the security of quantum key distribution (QKD) under standard assumptions does not require entanglement.[87] However, the device-independent security of QKD is shown by exploiting entanglement between the communication partners.[88]

There are several canonical entangled states that appear often in theory and experiments. For two qubits, the Bell states are

$|\Phi^\pm\rangle = \frac{1}{\sqrt{2}} \left( |0\rangle_A \otimes |0\rangle_B \pm |1\rangle_A \otimes |1\rangle_B \right)$

$|\Psi^\pm\rangle = \frac{1}{\sqrt{2}} \left( |0\rangle_A \otimes |1\rangle_B \pm |1\rangle_A \otimes |0\rangle_B \right).$

These four pure states are all maximally entangled (according to the entropy of entanglement) and form an orthonormal basis of the Hilbert space of the two qubits. They play a fundamental role in Bell's theorem.

For M > 2 qubits, the GHZ state is

$|\mathrm{GHZ}\rangle = \frac{|0\rangle^{\otimes M} + |1\rangle^{\otimes M}}{\sqrt{2}},$

which reduces to the Bell state $|\Phi^+\rangle$ for $M = 2$. The traditional GHZ state was defined for $M = 3$. GHZ states are occasionally extended to qudits, i.e., systems of d rather than 2 dimensions.
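As a quick numerical illustration of the states just listed (Python with NumPy; the function name ghz_state is an illustrative choice), the GHZ state can be written directly in the computational basis, where $|0\rangle^{\otimes M}$ is the first basis vector and $|1\rangle^{\otimes M}$ is the last; for $M = 2$ the resulting vector is the Bell state $|\Phi^+\rangle$.

import numpy as np

def ghz_state(m):
    """(|0...0> + |1...1>)/sqrt(2) for m qubits, as a vector of length 2**m."""
    state = np.zeros(2**m)
    state[0] = 1.0 / np.sqrt(2)    # amplitude of |00...0>
    state[-1] = 1.0 / np.sqrt(2)   # amplitude of |11...1>
    return state

print(ghz_state(2))  # [0.707 0 0 0.707], i.e., the Bell state |Phi+>
print(ghz_state(3))  # the traditional three-qubit GHZ state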
Also for M > 2 qubits, there are spin squeezed states, a class of squeezed coherent states satisfying certain restrictions on the uncertainty of spin measurements, which are necessarily entangled.[89] Spin squeezed states are good candidates for enhancing precision measurements using quantum entanglement.[90]

For two bosonic modes, a NOON state is

$|\psi_{\text{NOON}}\rangle = \frac{|N\rangle_a |0\rangle_b + |0\rangle_a |N\rangle_b}{\sqrt{2}}.$

This is like the Bell state $|\Psi^+\rangle$ except the basis kets 0 and 1 have been replaced with "the N photons are in one mode" and "the N photons are in the other mode".

Finally, there also exist twin Fock states for bosonic modes, which can be created by feeding a Fock state into two arms leading to a beam splitter. They are a sum of multiple NOON states, and can be used to achieve the Heisenberg limit.[91]

For appropriately chosen measures of entanglement, Bell, GHZ, and NOON states are maximally entangled while spin squeezed and twin Fock states are only partially entangled. The partially entangled states are generally easier to prepare experimentally.

Methods of creating entanglement

Entanglement is usually created by direct interactions between subatomic particles. These interactions can take numerous forms. One of the most commonly used methods is spontaneous parametric down-conversion to generate a pair of photons entangled in polarisation.[69][92] Other methods include the use of a fiber coupler to confine and mix photons, photons emitted from the decay cascade of the bi-exciton in a quantum dot,[93] the use of the Hong–Ou–Mandel effect, and so on. Quantum entanglement of a particle and its antiparticle, such as an electron and a positron, can be created by partial overlap of the corresponding quantum wave functions in Hardy's interferometer.[94][95] In the earliest tests of Bell's theorem, the entangled particles were generated using atomic cascades.[24]

It is also possible to create entanglement between quantum systems that never directly interacted, through the use of entanglement swapping. Two independently prepared, identical particles may also be entangled if their wave functions merely spatially overlap, at least partially.[96]

Testing a system for entanglement

A density matrix ρ is called separable if it can be written as a convex sum of product states, namely

$\rho = \sum_j p_j \rho_j^{(A)} \otimes \rho_j^{(B)}$

with probabilities $0 \leq p_j \leq 1$. By definition, a state is entangled if it is not separable.

For two-qubit and qubit–qutrit systems (2 × 2 and 2 × 3 respectively) the simple Peres–Horodecki criterion provides both a necessary and a sufficient criterion for separability, and thus, indirectly, for detecting entanglement. However, for the general case, the criterion is merely a necessary one for separability, as the problem becomes NP-hard when generalized.[97][98] Other separability criteria include (but are not limited to) the range criterion, reduction criterion, and those based on uncertainty relations.[99][100][101][102] See Ref.[103] for a review of separability criteria in discrete-variable systems and Ref.[104] for a review of techniques and challenges in experimental entanglement certification in discrete-variable systems.
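The Peres–Horodecki (positive partial transpose, PPT) test mentioned above lends itself to a short numerical sketch (Python with NumPy; the two-qubit Werner-state family used below is a standard textbook example, entangled exactly when p > 1/3, and the helper name is an illustrative choice). The partial transpose with respect to subsystem B is formed and its eigenvalues inspected: any negative eigenvalue certifies entanglement, and for 2 × 2 and 2 × 3 systems the absence of negative eigenvalues also certifies separability.

import numpy as np

def has_positive_partial_transpose(rho, dim_a, dim_b, tol=1e-12):
    """True if the partial transpose over B has no negative eigenvalue."""
    r = rho.reshape(dim_a, dim_b, dim_a, dim_b)
    # Partial transpose on B swaps the two B indices (axes 1 and 3)
    r_pt = np.transpose(r, (0, 3, 2, 1)).reshape(dim_a * dim_b, dim_a * dim_b)
    return bool(np.min(np.linalg.eigvalsh(r_pt)) > -tol)

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi_minus = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)
singlet = np.outer(psi_minus, psi_minus)

for p in (0.2, 0.5):
    werner = p * singlet + (1 - p) * np.eye(4) / 4
    print(p, has_positive_partial_transpose(werner, 2, 2))
# 0.2 True  (PPT, hence separable for a 2 x 2 system)
# 0.5 False (negative eigenvalue, hence entangled)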
A numerical approach to the problem was suggested by Jon Magne Leinaas, Jan Myrheim and Eirik Ovrum in their paper "Geometrical aspects of entanglement".[105] Leinaas et al. offer a numerical approach, iteratively refining an estimated separable state towards the target state to be tested, and checking whether the target state can indeed be reached. An implementation of the algorithm (including a built-in Peres–Horodecki criterion test) is the "StateSeparator" web app.

In continuous-variable systems, the Peres–Horodecki criterion also applies. Specifically, Simon[106] formulated a particular version of the Peres–Horodecki criterion in terms of the second-order moments of canonical operators and showed that it is necessary and sufficient for $1 \oplus 1$-mode Gaussian states (see Ref.[107] for a seemingly different but essentially equivalent approach). It was later found[108] that Simon's condition is also necessary and sufficient for $1 \oplus n$-mode Gaussian states, but no longer sufficient for $2 \oplus 2$-mode Gaussian states. Simon's condition can be generalized by taking into account the higher-order moments of canonical operators[109][110] or by using entropic measures.[111][112]

In 2016, China launched the world's first quantum communications satellite.[113] The $100m Quantum Experiments at Space Scale (QUESS) mission was launched on Aug 16, 2016, from the Jiuquan Satellite Launch Center in northern China at 01:40 local time. Over the following two years, the craft, nicknamed "Micius" after the ancient Chinese philosopher, was to demonstrate the feasibility of quantum communication between Earth and space, and test quantum entanglement over unprecedented distances.

In the June 16, 2017, issue of Science, Yin et al. report setting a new quantum entanglement distance record of 1,203 km, demonstrating the survival of a two-photon pair and a violation of a Bell inequality, reaching a CHSH valuation of 2.37 ± 0.09, under strict Einstein locality conditions, from the Micius satellite to bases in Lijiang, Yunnan and Delingha, Qinghai, increasing the efficiency of transmission over prior fiberoptic experiments by an order of magnitude.[114][115]

Naturally entangled systems

The electron shells of multi-electron atoms always consist of entangled electrons. The correct ionization energy can be calculated only by consideration of electron entanglement.[116]

It has been suggested that in the process of photosynthesis, entanglement is involved in the transfer of energy between light-harvesting complexes and photosynthetic reaction centers, where the energy of each absorbed photon is harvested in the form of chemical energy. Without such a process, the efficient conversion of light into chemical energy cannot be explained.
Using femtosecond spectroscopy, the coherence of entanglement in the Fenna-Matthews-Olson complex was measured over hundreds of femtoseconds (a relatively long time in this regard) providing support to this theory.[117][118] However, critical follow-up studies question the interpretation of these results and assign the reported signatures of electronic quantum coherence to nuclear dynamics in the chromophores or to the experiments being performed at cryogenic rather than physiological temperatures.[119][120][121][122][123][124][125] Entanglement of macroscopic objects In 2020, researchers reported the quantum entanglement between the motion of a millimetre-sized mechanical oscillator and a disparate distant spin system of a cloud of atoms.[126][127] Later work complemented this work by quantum-entangling two mechanical oscillators.[128][129][130] Entanglement of elements of living systems In October 2018, physicists reported producing quantum entanglement using living organisms, particularly between photosynthetic molecules within living bacteria and quantized light.[131][132] Living organisms (green sulphur bacteria) have been studied as mediators to create quantum entanglement between otherwise non-interacting light modes, showing high entanglement between light and bacterial modes, and to some extent, even entanglement within the bacteria.[133] Bound entanglement Concurrence (quantum computing) CNOT gate Einstein's thought experiments Entanglement distillation Entanglement witness ER=EPR Faster-than-light communication Multipartite entanglement Normally distributed and uncorrelated does not imply independent Pauli exclusion principle Quantum coherence Quantum discord Quantum network Quantum phase transition Quantum pseudo-telepathy Quantum teleportation Retrocausality Separable state Spontaneous parametric down-conversion Stern–Gerlach experiment Ward's probability amplitude Physics portal ^ Overbye, Dennis (10 October 2022). "Black Holes May Hide a Mind-Bending Secret About Our Universe - Take gravity, add quantum mechanics, stir. What do you get? Just maybe, a holographic cosmos". The New York Times. Retrieved 10 October 2022. ^ a b c Einstein, Albert; Podolsky, Boris; Rosen, Nathan (1935). "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?". Phys. Rev. 47 (10): 777–780. Bibcode:1935PhRv...47..777E. doi:10.1103/PhysRev.47.777. ^ a b Schrödinger E (1935). "Discussion of probability relations between separated systems". Mathematical Proceedings of the Cambridge Philosophical Society. 31 (4): 555–563. Bibcode:1935PCPS...31..555S. doi:10.1017/S0305004100013554. S2CID 121278681. ^ Schrödinger E. (1936). "Probability relations between separated systems". Mathematical Proceedings of the Cambridge Philosophical Society. 32 (3): 446–452. Bibcode:1936PCPS...32..446S. doi:10.1017/S0305004100019137. S2CID 122822435. ^ Physicist John Bell depicts the Einstein camp in this debate in his article entitled "Bertlmann's socks and the nature of reality", p. 143 of Speakable and unspeakable in quantum mechanics: "For EPR that would be an unthinkable 'spooky action at a distance'. To avoid such action at a distance they have to attribute, to the space-time regions in question, real properties in advance of observation, correlated properties, which predetermine the outcomes of these particular observations. Since these real properties, fixed in advance of observation, are not contained in quantum formalism, that formalism for EPR is incomplete. 
It may be correct, as far as it goes, but the usual quantum formalism cannot be the whole story." And again on p. 144 Bell says: "Einstein had no difficulty accepting that affairs in different places could be correlated. What he could not accept was that an intervention at one place could influence, immediately, affairs at the other." Downloaded 5 July 2011 from Bell, J. S. (1987). Speakable and Unspeakable in Quantum Mechanics (PDF). CERN. ISBN 0521334950. Archived from the original (PDF) on 12 April 2015. Retrieved 14 June 2014. ^ a b Yin, Juan; Cao, Yuan; Yong, Hai-Lin; Ren, Ji-Gang; Liang, Hao; Liao, Sheng-Kai; Zhou, Fei; Liu, Chang; Wu, Yu-Ping; Pan, Ge-Sheng; Li, Li; Liu, Nai-Le; Zhang, Qiang; Peng, Cheng-Zhi; Pan, Jian-Wei (2013). "Bounding the speed of 'spooky action at a distance". Physical Review Letters. 110 (26): 260407. arXiv:1303.0614. Bibcode:2013PhRvL.110z0407Y. doi:10.1103/PhysRevLett.110.260407. PMID 23848853. S2CID 119293698. ^ a b Matson, John (13 August 2012). "Quantum teleportation achieved over record distances". Nature News. doi:10.1038/nature.2012.11163. S2CID 124852641. ^ a b Francis, Matthew. Quantum entanglement shows that reality can't be local, Ars Technica, 30 October 2012 ^ Roger Penrose, The Road to Reality: A Complete Guide to the Laws of the Universe, London, 2004, p. 603. ^ Griffiths, David J. (2004), Introduction to Quantum Mechanics (2nd ed.), Prentice Hall, ISBN 978-0-13-111892-8 ^ a b Kocher, CA; Commins, ED (1967). "Polarization Correlation of Photons Emitted in an Atomic Cascade". Physical Review Letters. 18 (15): 575–577. Bibcode:1967PhRvL..18..575K. doi:10.1103/PhysRevLett.18.575. ^ a b c Carl A. Kocher, Ph.D. Thesis (University of California at Berkeley, 1967). Polarization Correlation of Photons Emitted in an Atomic Cascade ^ Formaggio, J. A.; Kaiser, D. I.; Murskyj, M. M.; Weiss, T. E. (2016). "Violation of the Leggett-Garg inequality in neutrino oscillations". Physical Review Letters. 117 (5): 050402. arXiv:1602.00041. Bibcode:2016PhRvL.117e0402F. doi:10.1103/PhysRevLett.117.050402. PMID 27517759. S2CID 6127630. ^ Hensen, B.; et al. (21 October 2015). "Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres". Nature. 526 (7575): 682–686. arXiv:1508.05949. Bibcode:2015Natur.526..682H. doi:10.1038/nature15759. hdl:2117/79298. PMID 26503041. S2CID 205246446. See also free online access version. ^ Markoff, Jack (21 October 2015). "Sorry, Einstein. Quantum Study Suggests 'Spooky Action' Is Real". The New York Times. Retrieved 21 October 2015. ^ Arndt, M; Nairz, O; Vos-Andreae, J; Keller, C; van der Zouw, G; Zeilinger, A (14 October 1999). "Wave–particle duality of C60 molecules". Nature. 401 (6754): 680–682. Bibcode:1999Natur.401..680A. doi:10.1038/44348. PMID 18494170. S2CID 4424892. (subscription required) ^ Nairz, Olaf; Arndt, Markus; Zeilinger, Anton (2003). "Quantum interference experiments with large molecules". American Journal of Physics. 71 (4): 319–325. Bibcode:2003AmJPh..71..319N. doi:10.1119/1.1531580. ^ Lee, K. C.; Sprague, M. R.; Sussman, B. J.; Nunn, J.; Langford, N. K.; Jin, X.- M.; Champion, T.; Michelberger, P.; Reim, K. F.; England, D.; Jaksch, D.; Walmsley, I. A. (2 December 2011). "Entangling macroscopic diamonds at room temperature". Science. 334 (6060): 1253–1256. Bibcode:2011Sci...334.1253L. doi:10.1126/science.1211914. PMID 22144620. S2CID 206536690. ^ Siegel, Ethan. "No, We Still Can't Use Quantum Entanglement To Communicate Faster Than Light". Forbes. Retrieved 6 January 2023. 
^ Kumar, M., Quantum, Icon Books, 2009, p. 313. ^ Alisa Bokulich, Gregg Jaeger, Philosophy of Quantum Information and Entanglement, Cambridge University Press, 2010, xv. ^ Letter from Einstein to Max Born, 3 March 1947; The Born-Einstein Letters; Correspondence between Albert Einstein and Max and Hedwig Born from 1916 to 1955, Walker, New York, 1971. (cited in M. P. Hobson; et al. (1998), "Quantum Entanglement and Communication Complexity (1998)", SIAM J. Comput., 30 (6): 1829–1841, CiteSeerX 10.1.1.20.8324 ) ^ J. S. Bell (1964). "On the Einstein-Poldolsky-Rosen paradox". Physics Physique Физика. 1 (3): 195–200. doi:10.1103/PhysicsPhysiqueFizika.1.195. ^ a b c Freedman, Stuart J.; Clauser, John F. (1972). "Experimental Test of Local Hidden-Variable Theories". Physical Review Letters. 28 (14): 938–941. Bibcode:1972PhRvL..28..938F. doi:10.1103/PhysRevLett.28.938. ^ Aspect, Alain; Grangier, Philippe; Roger, Gérard (1982). "Experimental Realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A New Violation of Bell's Inequalities". Physical Review Letters. 49 (2): 91–94. Bibcode:1982PhRvL..49...91A. doi:10.1103/PhysRevLett.49.91. ^ Kocher, CA (1971). "Time correlations in the detection of successively emitted photons". Annals of Physics. 65 (1): 1–18. Bibcode:1971AnPhy..65....1K. doi:10.1016/0003-4916(71)90159-X. ^ Hanson, Ronald (2015). "Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres". Nature. 526 (7575): 682–686. arXiv:1508.05949. Bibcode:2015Natur.526..682H. doi:10.1038/nature15759. PMID 26503041. S2CID 205246446. ^ Aspect, Alain (16 December 2015). "Closing the Door on Einstein and Bohr's Quantum Debate". Physics. 8: 123. Bibcode:2015PhyOJ...8..123A. doi:10.1103/Physics.8.123. ^ C. H. Bennett and G. Brassard. "Quantum cryptography: Public key distribution and coin tossing". In Proceedings of IEEE International Conference on Computers, Systems and Signal Processing, volume 175, p. 8. New York, 1984. http://researcher.watson.ibm.com/researcher/files/us-bennetc/BB84highest.pdf Archived 30 January 2020 at the Wayback Machine ^ Ekert, A.K. (1991). "Quantum cryptography based on Bell's theorem". Phys. Rev. Lett. 67 (6): 661–663. Bibcode:1991PhRvL..67..661E. doi:10.1103/PhysRevLett.67.661. ISSN 0031-9007. PMID 10044956. ^ "The Nobel Prize in Physics 2022". Nobel Prize (Press release). The Royal Swedish Academy of Sciences . 4 October 2022. Retrieved 5 October 2022. ^ Asher Peres, Quantum Theory: Concepts and Methods, Kluwer, 1993; ISBN 0-7923-2549-4 p. 115. ^ Rupert W., Anderson (28 March 2015). The Cosmic Compendium: Interstellar Travel (First ed.). The Cosmic Compendium. p. 100. ISBN 9781329022027. ^ Gibney, Elizabeth (2017). "Cosmic Test Bolsters Einstein's "Spooky Action at a Distance"". Scientific American. ^ I. Gerhardt; Q. Liu; A. Lamas-Linares; J. Skaar; V. Scarani; V. Makarov; C. Kurtsiefer (2011), "Experimentally faking the violation of Bell's inequalities", Phys. Rev. Lett., 107 (17): 170404, arXiv:1106.3224, Bibcode:2011PhRvL.107q0404G, doi:10.1103/PhysRevLett.107.170404, PMID 22107491, S2CID 16306493 ^ Santos, E (2004). "The failure to perform a loophole-free test of Bell's Inequality supports local realism". Foundations of Physics. 34 (11): 1643–1673. Bibcode:2004FoPh...34.1643S. doi:10.1007/s10701-004-1308-z. S2CID 123642560. ^ H. Zbinden; et al. (2001). "Experimental test of nonlocal quantum correlations in relativistic configurations". Phys. Rev. A. 63 (2): 22111. arXiv:quant-ph/0007009. Bibcode:2001PhRvA..63b2111Z. 
doi:10.1103/PhysRevA.63.022111. S2CID 44611890. ^ Some of the history of both referenced Zbinden, et al. experiments is provided in Gilder, L., The Age of Entanglement, Vintage Books, 2008, pp. 321–324. ^ Cirel'son, B. S. (1980). "Quantum generalizations of Bell's inequality". Letters in Mathematical Physics. 4 (2): 93–100. Bibcode:1980LMaPh...4...93C. doi:10.1007/BF00417500. S2CID 120680226. ^ Wu, C. 's.; Shaknov, I. (1950). "The Angular Correlation of Scattered Annihilation Radiation". Physical Review. 77 (1): 136. Bibcode:1950PhRv...77..136W. doi:10.1103/PhysRev.77.136. ^ Xiao-song Ma, Stefan Zotter, Johannes Kofler, Rupert Ursin, Thomas Jennewein, Časlav Brukner & Anton Zeilinger; Zotter; Kofler; Ursin; Jennewein; Brukner; Zeilinger (26 April 2012). "Experimental delayed-choice entanglement swapping". Nature Physics. 8 (6): 480–485. arXiv:1203.4834. Bibcode:2012NatPh...8..480M. doi:10.1038/nphys2294. S2CID 119208488. ((cite journal)): CS1 maint: multiple names: authors list (link) ^ Megidish, E.; Halevy, A.; Shacham, T.; Dvir, T.; Dovrat, L.; Eisenberg, H. S. (2013). "Entanglement Swapping between Photons that have Never Coexisted". Physical Review Letters. 110 (21): 210403. arXiv:1209.4191. Bibcode:2013PhRvL.110u0403M. doi:10.1103/physrevlett.110.210403. PMID 23745845. S2CID 30063749. ^ "Classical carrier could create entanglement". physicsworld.com. 11 December 2013. Retrieved 14 June 2014. ^ "Loophole-free Bell test | Ronald Hanson". Archived from the original on 4 July 2018. Retrieved 24 October 2015. ^ Gibney, Elizabeth (2014). "Entangled photons make a picture from a paradox". Nature. doi:10.1038/nature.2014.15781. S2CID 124976589. Retrieved 13 October 2014. ^ Rozatkar, Gaurav (16 August 2018). "Demonstration of quantum entanglement". OSF. ^ Moreva, Ekaterina (2014). "Time from quantum entanglement: an experimental illustration". Physical Review A. 89 (5): 052122. arXiv:1310.4691. Bibcode:2014PhRvA..89e2122M. doi:10.1103/PhysRevA.89.052122. S2CID 118638346. ^ Aron, Jacob (25 October 2013). "Entangled toy universe shows time may be an illusion". Retrieved 8 January 2022. ^ David Deutsch, The Beginning of infinity. Page 299 ^ "Quantum Experiment Shows How Time 'Emerges' from Entanglement". Medium. 23 October 2013. Retrieved 13 October 2014. ^ Van Raamsdonk, Mark (19 June 2010). "Building up spacetime with quantum entanglement". General Relativity and Gravitation. 42 (10): 2323–2329. arXiv:1005.3035. Bibcode:2010GReGr..42.2323V. doi:10.1007/s10714-010-1034-0. ISSN 0001-7701. S2CID 189843725. ^ Lee, Jae-Weon; Kim, Hyeong-Chan; Lee, Jungjai (2013). "Gravity from quantum information". Journal of the Korean Physical Society. 63 (5): 1094–1098. arXiv:1001.5445. Bibcode:2013JKPS...63.1094L. doi:10.3938/jkps.63.1094. ISSN 0374-4884. S2CID 118494859. ^ Swingle, Brian; Van Raamsdonk, Mark (12 May 2014). "Universality of Gravity from Entanglement". arXiv:1405.2933 [hep-th]. ^ Brunner, Nicolas; Cavalcanti, Daniel; Pironio, Stefano; Scarani, Valerio; Wehner, Stephanie (2014). "Bell nonlocality". Reviews of Modern Physics. 86 (2): 419–478. arXiv:1303.2849. Bibcode:2014RvMP...86..419B. doi:10.1103/RevModPhys.86.419. S2CID 119194006. ^ Werner, R.F. (1989). "Quantum States with Einstein-Podolsky-Rosen correlations admitting a hidden-variable model". Physical Review A. 40 (8): 4277–4281. Bibcode:1989PhRvA..40.4277W. doi:10.1103/PhysRevA.40.4277. PMID 9902666. ^ Augusiak, R.; Demianowicz, M.; Tura, J.; Acín, A. (2015). "Entanglement and nonlocality are inequivalent for any number of parties". 
Physical Review Letters. 115 (3): 030404. arXiv:1407.3114. Bibcode:2015PhRvL.115c0404A. doi:10.1103/PhysRevLett.115.030404. hdl:2117/78836. PMID 26230773. S2CID 29758483. ^ Vértesi, Tamás; Brunner, Nicolas (2014). "Disproving the Peres conjecture by showing Bell nonlocality from bound entanglement". Nature Communications. 5 (1): 5297. arXiv:1405.4502. Bibcode:2014NatCo...5.5297V. doi:10.1038/ncomms6297. PMID 25370352. S2CID 5135148. ^ In the literature "non-locality" is sometimes used to characterize concepts that differ from the non-existence of a local hidden variable model, e.g., whether states can be distinguished by local measurements and which can occur also for non-entangled states (see, e.g., Charles H. Bennett, David P. DiVincenzo, Christopher A. Fuchs, Tal Mor, Eric Rains, Peter W. Shor, John A. Smolin, and William K. Wootters (1999). "Quantum nonlocality without entanglement". Phys. Rev. A. 59 (2): 1070–1091. arXiv:quant-ph/9804053. Bibcode:1999PhRvA..59.1070B. doi:10.1103/PhysRevA.59.1070. S2CID 15282650. ((cite journal)): CS1 maint: uses authors parameter (link)). This non-standard use of the term is not discussed here. ^ Jaeger G, Shimony A, Vaidman L; Shimony; Vaidman (1995). "Two Interferometric Complementarities". Phys. Rev. 51 (1): 54–67. Bibcode:1995PhRvA..51...54J. doi:10.1103/PhysRevA.51.54. PMID 9911555. ((cite journal)): CS1 maint: multiple names: authors list (link) ^ Nielsen, Michael A.; Chuang, Isaac L. (2000). Quantum Computation and Quantum Information. Cambridge University Press. pp. 112–113. ISBN 978-0-521-63503-5. ^ Laloe, Franck (2001), "Do We Really Understand Quantum Mechanics", American Journal of Physics, 69 (6): 655–701, arXiv:quant-ph/0209123, Bibcode:2001AmJPh..69..655L, doi:10.1119/1.1356698 ^ Gurvits L (2003). "Classical deterministic complexity of Edmonds' Problem and quantum entanglement". Proceedings of the Thirty-Fifth ACM symposium on Theory of computing - STOC '03. Proceedings of the Thirty-Fifth Annual ACM Symposium on Theory of Computing. p. 10. arXiv:quant-ph/0303055. doi:10.1145/780542.780545. ISBN 978-1-58113-674-6. S2CID 5745067. ^ Horodecki M, Horodecki P, Horodecki R; Horodecki; Horodecki (1996). "Separability of mixed states: necessary and sufficient conditions". Physics Letters A. 223 (1): 210. arXiv:quant-ph/9605038. Bibcode:1996PhLA..223....1H. CiteSeerX 10.1.1.252.496. doi:10.1016/S0375-9601(96)00706-2. S2CID 10580997. ((cite journal)): CS1 maint: multiple names: authors list (link) ^ Dirac, Paul Adrien Maurice (1930). "Note on exchange phenomena in the Thomas atom" (PDF). Mathematical Proceedings of the Cambridge Philosophical Society. 26 (3): 376–385. Bibcode:1930PCPS...26..376D. doi:10.1017/S0305004100016108. ^ Fan, H; Korepin V; Roychowdhury V (2004). "Entanglement in a Valence-Bond Solid State". Physical Review Letters. 93 (22): 227203. arXiv:quant-ph/0406067. Bibcode:2004PhRvL..93v7203F. doi:10.1103/PhysRevLett.93.227203. PMID 15601113. S2CID 28587190. ^ Franchini, F.; Its, A. R.; Korepin, V. E.; Takhtajan, L. A. (2010). "Spectrum of the density matrix of a large block of spins of the XY model in one dimension". Quantum Information Processing. 10 (3): 325–341. arXiv:1002.2931. doi:10.1007/s11128-010-0197-7. S2CID 6683370. ^ Chitambar, Eric; Gour, Gilad (2019). "Quantum resource theories". Reviews of Modern Physics. 91 (2): 025001. arXiv:1806.06107. Bibcode:2019RvMP...91b5001C. doi:10.1103/RevModPhys.91.025001. S2CID 119194947. ^ Georgiev, Danko D.; Gudder, Stanley P. (2022). 
"Sensitivity of entanglement measures in bipartite pure quantum states". Modern Physics Letters B. 36 (22): 2250101–2250255. arXiv:2206.13180. Bibcode:2022MPLB...3650101G. doi:10.1142/S0217984922501019. S2CID 250072286. ^ a b c Horodecki, Ryszard; Horodecki, Pawel; Horodecki, Michal; Horodecki, Karol (2009). "Quantum entanglement". Reviews of Modern Physics. 81 (2): 865–942. arXiv:quant-ph/0702225. Bibcode:2009RvMP...81..865H. doi:10.1103/RevModPhys.81.865. S2CID 59577352. ^ Grassl, M.; Rötteler, M.; Beth, T. (1998). "Computing local invariants of quantum-bit systems". Phys. Rev. A. 58 (3): 1833–1839. arXiv:quant-ph/9712040. Bibcode:1998PhRvA..58.1833G. doi:10.1103/PhysRevA.58.1833. S2CID 15892529. ^ B. Kraus (2010). "Local unitary equivalence of multipartite pure states". Phys. Rev. Lett. 104 (2): 020504. arXiv:0909.5152. Bibcode:2010PhRvL.104b0504K. doi:10.1103/PhysRevLett.104.020504. PMID 20366579. S2CID 29984499. ^ M. A. Nielsen (1999). "Conditions for a Class of Entanglement Transformations". Phys. Rev. Lett. 83 (2): 436. arXiv:quant-ph/9811053. Bibcode:1999PhRvL..83..436N. doi:10.1103/PhysRevLett.83.436. S2CID 17928003. ^ Gour, G. & Wallach, N. R. (2013). "Classification of Multipartite Entanglement of All Finite Dimensionality". Phys. Rev. Lett. 111 (6): 060502. arXiv:1304.7259. Bibcode:2013PhRvL.111f0502G. doi:10.1103/PhysRevLett.111.060502. PMID 23971544. S2CID 1570745. ((cite journal)): CS1 maint: uses authors parameter (link) ^ Horodecki, M.; Horodecki, P.; Horodecki, R. (1998). "Mixed-state entanglement and distillation: Is there a bound entanglement in nature?". Phys. Rev. Lett. 80 (1998): 5239–5242. arXiv:quant-ph/9801069. Bibcode:1998PhRvL..80.5239H. doi:10.1103/PhysRevLett.80.5239. S2CID 111379972. ^ H. M. Wiseman, S. J. Jones, and A. C. Doherty (2007). "Steering, Entanglement, Nonlocality, and the Einstein-Podolsky-Rosen Paradox". Phys. Rev. Lett. 98 (14): 140402. arXiv:quant-ph/0612147. Bibcode:2007PhRvL..98n0402W. doi:10.1103/PhysRevLett.98.140402. PMID 17501251. S2CID 30078867. ((cite journal)): CS1 maint: uses authors parameter (link) ^ Cerf, Nicolas J.; Cleve, Richard. "Information-theoretic interpretation of quantum error-correcting codes" (PDF). ^ a b Plenio, Martin B.; Virmani, Shashank (2007). "An introduction to entanglement measures". Quant. Inf. Comp. 1: 1–51. arXiv:quant-ph/0504163. Bibcode:2005quant.ph..4163P. ^ Vedral, Vlatko (2002). "The role of relative entropy in quantum information theory". Reviews of Modern Physics. 74 (1): 197–234. arXiv:quant-ph/0102094. Bibcode:2002RvMP...74..197V. doi:10.1103/RevModPhys.74.197. S2CID 6370982. ^ Hill, S; Wootters, W. K. (1997). "Entanglement of a Pair of Quantum Bits". Phys. Rev. Lett. 78 (26): 5022–5025. arXiv:quant-ph/9703041. Bibcode:1997PhRvL..78.5022H. doi:10.1103/PhysRevLett.78.5022. S2CID 9173232. ^ Peres, Asher (1993). Quantum Theory: Concepts and Methods. Kluwer. pp. 260–270. ISBN 0-7923-2549-4. OCLC 28854083. ^ Wolchover, Natalie (25 April 2014). "New Quantum Theory Could Explain the Flow of Time". www.wired.com. Quanta Magazine. Retrieved 27 April 2014. ^ Huang, Yichen (21 March 2014). "Computing quantum discord is NP-complete". New Journal of Physics. 16 (3): 033027. arXiv:1305.5941. Bibcode:2014NJPh...16c3027H. doi:10.1088/1367-2630/16/3/033027. S2CID 118556793. ^ Bouwmeester, Dik; Pan, Jian-Wei; Mattle, Klaus; Eibl, Manfred; Weinfurter, Harald & Zeilinger, Anton (1997). "Experimental Quantum Teleportation" (PDF). Nature. 390 (6660): 575–579. arXiv:1901.11004. Bibcode:1997Natur.390..575B. 
doi:10.1038/37539. S2CID 4422887. ^ Richard Jozsa; Noah Linden (2002). "On the role of entanglement in quantum computational speed-up". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 459 (2036): 2011–2032. arXiv:quant-ph/0201143. Bibcode:2003RSPSA.459.2011J. CiteSeerX 10.1.1.251.7637. doi:10.1098/rspa.2002.1097. S2CID 15470259. ^ Ekert, Artur K. (1991). "Quantum cryptography based on Bell's theorem". Physical Review Letters. 67 (6): 661–663. Bibcode:1991PhRvL..67..661E. doi:10.1103/PhysRevLett.67.661. PMID 10044956. S2CID 27683254. ^ Juan Yin, Yu-Huai Li, Sheng-Kai Liao, Meng Yang, Yuan Cao, Liang Zhang, Ji-Gang Ren, Wen-Qi Cai, Wei-Yue Liu, Shuang-Lin Li, Rong Shu, Yong-Mei Huang, Lei Deng, Li Li, Qiang Zhang, Nai-Le Liu, Yu-Ao Chen, Chao-Yang Lu, Xiang-Bin Wang, Feihu Xu, Jian-Yu Wang, Cheng-Zhi Peng, Artur K. Ekert, Jian-Wei Pan (2020). "Entanglement-based secure quantum cryptography over 1,120 kilometres". Nature. 582 (7813): 501–505. Bibcode:2020Natur.582..501Y. doi:10.1038/s41586-020-2401-y. PMID 32541968. S2CID 219692094. ((cite journal)): CS1 maint: multiple names: authors list (link) ^ R. Renner, N. Gisin, B. Kraus (2005). "An information-theoretic security proof for QKD protocols". Phys. Rev. A. 72: 012332. arXiv:quant-ph/0502064. doi:10.1103/PhysRevA.72.012332. S2CID 119052621. ((cite journal)): CS1 maint: multiple names: authors list (link) ^ S. Pirandola, U. L. Andersen, L. Banchi, M. Berta, D. Bunandar, R. Colbeck, D. Englund, T. Gehring, C. Lupo, C. Ottaviani, J. L. Pereira, M. Razavi, J. Shamsul Shaari, M. Tomamichel, V. C. Usenko, G. Vallone, P. Villoresi, P. Wallden (2020). "Advances in quantum cryptography". Adv. Opt. Photon. 12 (4): 1012–1236. arXiv:1906.01645. Bibcode:2020AdOP...12.1012P. doi:10.1364/AOP.361502. S2CID 174799187. Archived from the original on 14 December 2021. Retrieved 14 December 2021. ((cite journal)): CS1 maint: bot: original URL status unknown (link) CS1 maint: multiple names: authors list (link) ^ Kitagawa, Masahiro; Ueda, Masahito (1993). "Squeezed Spin States". Phys. Rev. A. 47 (6): 5138–5143. Bibcode:1993PhRvA..47.5138K. doi:10.1103/physreva.47.5138. hdl:11094/77656. PMID 9909547. ^ Wineland, D. J.; Bollinger, J. J.; Itano, W. M.; Moore, F. L.; Heinzen, D. J. (1992). "Spin squeezing and reduced quantum noise in spectroscopy". Phys. Rev. A. 46 (11): R6797–R6800. Bibcode:1992PhRvA..46.6797W. doi:10.1103/PhysRevA.46.R6797. PMID 9908086. ^ Holland, M. J; Burnett, K (1993). "Interferometric detection of optical phase shifts at the Heisenberg limit". Physical Review Letters. 71 (9): 1355–1358. Bibcode:1993PhRvL..71.1355H. doi:10.1103/PhysRevLett.71.1355. PMID 10055519. ^ Shadbolt, P. J.; Verde, M. R.; Peruzzo, A.; Politi, A.; Laing, A.; Lobino, M.; Matthews, J. C. F.; Thompson, M. G.; O'Brien, J. L. (2012). "Generating, manipulating and measuring entanglement and mixture with a reconfigurable photonic circuit". Nature Photonics. 6 (1): 45–59. arXiv:1108.3309. Bibcode:2012NaPho...6...45S. doi:10.1038/nphoton.2011.283. S2CID 56206588. ^ Akopian, N. (2006). "Entangled Photon Pairs from Semiconductor Quantum Dots". Phys. Rev. Lett. 96 (2): 130501. arXiv:quant-ph/0509060. Bibcode:2006PhRvL..96b0501D. doi:10.1103/PhysRevLett.96.020501. PMID 16486553. S2CID 22040546. ^ Hardy, Lucien (1992). "Quantum mechanics, local realistic theories, and Lorentz-invariant realistic theories". Physical Review Letters. 68 (20): 2981–2984. Bibcode:1992PhRvL..68.2981H. doi:10.1103/PhysRevLett.68.2981. PMID 10045577. 
^ Georgiev, Danko; Cohen, Eliahu (2022). "Entanglement measures for two-particle quantum histories". Physical Review A. 106 (6): 062437. arXiv:2212.07502. Bibcode:2022PhRvA.106f2437G. doi:10.1103/PhysRevA.106.062437. S2CID 254685902. ^ Lo Franco, Rosario; Compagno, Giuseppe (14 June 2018). "Indistinguishability of Elementary Systems as a Resource for Quantum Information Processing". Phys. Rev. Lett. 120 (24): 240403. arXiv:1712.00706. Bibcode:2018PhRvL.120x0403L. doi:10.1103/PhysRevLett.120.240403. PMID 29957003. S2CID 49562954. ^ Gurvits, L., Classical deterministic complexity of Edmonds' problem and quantum entanglement, in Proceedings of the 35th ACM Symposium on Theory of Computing, ACM Press, New York, 2003. ^ Sevag Gharibian (2010). "Strong NP-Hardness of the Quantum Separability Problem". Quantum Information and Computation. 10 (3&4): 343–360. arXiv:0810.4507. doi:10.26421/QIC10.3-4-11. S2CID 621887. . ^ Hofmann, Holger F.; Takeuchi, Shigeki (22 September 2003). "Violation of local uncertainty relations as a signature of entanglement". Physical Review A. 68 (3): 032103. arXiv:quant-ph/0212090. Bibcode:2003PhRvA..68c2103H. doi:10.1103/PhysRevA.68.032103. S2CID 54893300. ^ Gühne, Otfried (18 March 2004). "Characterizing Entanglement via Uncertainty Relations". Physical Review Letters. 92 (11): 117903. arXiv:quant-ph/0306194. Bibcode:2004PhRvL..92k7903G. doi:10.1103/PhysRevLett.92.117903. PMID 15089173. S2CID 5696147. ^ Gühne, Otfried; Lewenstein, Maciej (24 August 2004). "Entropic uncertainty relations and entanglement". Physical Review A. 70 (2): 022316. arXiv:quant-ph/0403219. Bibcode:2004PhRvA..70b2316G. doi:10.1103/PhysRevA.70.022316. S2CID 118952931. ^ Huang, Yichen (29 July 2010). "Entanglement criteria via concave-function uncertainty relations". Physical Review A. 82 (1): 012335. Bibcode:2010PhRvA..82a2335H. doi:10.1103/PhysRevA.82.012335. ^ Gühne, Otfried; Tóth, Géza (2009). "Entanglement detection". Physics Reports. 474 (1–6): 1–75. arXiv:0811.2803. Bibcode:2009PhR...474....1G. doi:10.1016/j.physrep.2009.02.004. S2CID 119288569. ^ Friis, Nicolai; Vitagliano, Giuseppe; Malik, Mehul; Huber, Marcus (2019). "Entanglement certification from theory to experiment". Nature Reviews Physics. 1: 72–87. arXiv:1906.10929. doi:10.1038/s42254-018-0003-5. ISSN 2522-5820. S2CID 125658647. ^ Leinaas, Jon Magne; Myrheim, Jan; Ovrum, Eirik (2006). "Geometrical aspects of entanglement". Physical Review A. 74 (1): 012313. arXiv:quant-ph/0605079. Bibcode:2006PhRvA..74a2313L. doi:10.1103/PhysRevA.74.012313. S2CID 119443360. ^ Simon, R. (2000). "Peres-Horodecki Separability Criterion for Continuous Variable Systems". Physical Review Letters. 84 (12): 2726–2729. arXiv:quant-ph/9909044. Bibcode:2000PhRvL..84.2726S. doi:10.1103/PhysRevLett.84.2726. PMID 11017310. S2CID 11664720. ^ Duan, Lu-Ming; Giedke, G.; Cirac, J. I.; Zoller, P. (2000). "Inseparability Criterion for Continuous Variable Systems". Physical Review Letters. 84 (12): 2722–2725. arXiv:quant-ph/9908056. Bibcode:2000PhRvL..84.2722D. doi:10.1103/PhysRevLett.84.2722. PMID 11017309. S2CID 9948874. ^ Werner, R. F.; Wolf, M. M. (2001). "Bound Entangled Gaussian States". Physical Review Letters. 86 (16): 3658–3661. arXiv:quant-ph/0009118. Bibcode:2001PhRvL..86.3658W. doi:10.1103/PhysRevLett.86.3658. PMID 11328047. S2CID 20897950. ^ Shchukin, E.; Vogel, W. (2005). "Inseparability Criteria for Continuous Bipartite Quantum States". Physical Review Letters. 95 (23): 230502. arXiv:quant-ph/0508132. Bibcode:2005PhRvL..95w0502S. 
CommonCrawl
Rotation operator (quantum mechanics)
This article concerns the rotation operator, as it appears in quantum mechanics.
Quantum mechanical rotations With every physical rotation $R$, we postulate a quantum mechanical rotation operator $D(R)$ which rotates quantum mechanical states. $|\alpha \rangle _{R}=D(R)|\alpha \rangle $ In terms of the generators of rotation, $D(\mathbf {\hat {n}} ,\phi )=\exp \left(-i\phi {\frac {\mathbf {\hat {n}} \cdot \mathbf {J} }{\hbar }}\right),$ where $\mathbf {\hat {n}} $ is the rotation axis, $\mathbf {J} $ is the angular momentum, and $\hbar $ is the reduced Planck constant.
The translation operator The rotation operator $\operatorname {R} (z,\theta )$, with the first argument $z$ indicating the rotation axis and the second $\theta $ the rotation angle, can be built from the translation operator $\operatorname {T} (a)$ for infinitesimal rotations, as explained below. For this reason, it is first shown how the translation operator acts on a particle at position $x$ (the particle is then in the state $|x\rangle $ according to quantum mechanics).
Translation of the particle at position $x$ to position $x+a$: $\operatorname {T} (a)|x\rangle =|x+a\rangle $ Because a translation of 0 does not change the position of the particle, we have (with 1 meaning the identity operator, which does nothing): $\operatorname {T} (0)=1$ $\operatorname {T} (a)\operatorname {T} (da)|x\rangle =\operatorname {T} (a)|x+da\rangle =|x+a+da\rangle =\operatorname {T} (a+da)|x\rangle \Rightarrow \operatorname {T} (a)\operatorname {T} (da)=\operatorname {T} (a+da)$ A Taylor expansion gives: $\operatorname {T} (da)=\operatorname {T} (0)+{\frac {d\operatorname {T} (0)}{da}}da+\cdots =1-{\frac {i}{\hbar }}p_{x}da$ with $p_{x}=i\hbar {\frac {d\operatorname {T} (0)}{da}}$ It follows that: $\operatorname {T} (a+da)=\operatorname {T} (a)\operatorname {T} (da)=\operatorname {T} (a)\left(1-{\frac {i}{\hbar }}p_{x}da\right)\Rightarrow {\frac {\operatorname {T} (a+da)-\operatorname {T} (a)}{da}}={\frac {d\operatorname {T} }{da}}=-{\frac {i}{\hbar }}p_{x}\operatorname {T} (a)$ This is a differential equation with the solution $\operatorname {T} (a)=\exp \left(-{\frac {i}{\hbar }}p_{x}a\right).$ Additionally, suppose a Hamiltonian $H$ is independent of the $x$ position. Because the translation operator can be written in terms of $p_{x}$, and $[p_{x},H]=0$, we know that $[H,\operatorname {T} (a)]=0.$ This result means that linear momentum for the system is conserved.
In relation to the orbital angular momentum Classically, we have for the angular momentum $\mathbf {L} =\mathbf {r} \times \mathbf {p} .$ This is the same in quantum mechanics, considering $\mathbf {r} $ and $\mathbf {p} $ as operators. Classically, an infinitesimal rotation $dt$ of the vector $\mathbf {r} =(x,y,z)$ about the $z$-axis to $\mathbf {r} '=(x',y',z)$ leaving $z$ unchanged can be expressed by the following infinitesimal translations (using a Taylor approximation): ${\begin{aligned}x'&=r\cos(t+dt)=x-y\,dt+\cdots \\y'&=r\sin(t+dt)=y+x\,dt+\cdots \end{aligned}}$ For states it follows that: $\operatorname {R} (z,dt)|r\rangle =\operatorname {R} (z,dt)|x,y,z\rangle =|x-y\,dt,y+x\,dt,z\rangle =\operatorname {T} _{x}(-y\,dt)\operatorname {T} _{y}(x\,dt)|x,y,z\rangle =\operatorname {T} _{x}(-y\,dt)\operatorname {T} _{y}(x\,dt)|r\rangle $ And consequently: $\operatorname {R} (z,dt)=\operatorname {T} _{x}(-y\,dt)\operatorname {T} _{y}(x\,dt)$ Using $T_{k}(a)=\exp \left(-{\frac {i}{\hbar }}p_{k}a\right)$ from above with $k=x,y$ and a Taylor expansion we get: $\operatorname {R} (z,dt)=\exp \left[-{\frac {i}{\hbar }}\left(xp_{y}-yp_{x}\right)dt\right]=\exp \left(-{\frac {i}{\hbar }}L_{z}dt\right)=1-{\frac {i}{\hbar }}L_{z}dt+\cdots $ with $L_{z}=xp_{y}-yp_{x}$ the $z$-component of the angular momentum according to the classical cross product. To get a rotation for the angle $t$, we construct the following differential equation using the condition $\operatorname {R} (z,0)=1$: ${\begin{aligned}&\operatorname {R} (z,t+dt)=\operatorname {R} (z,t)\operatorname {R} (z,dt)\\[1.1ex]\Rightarrow {}&{\frac {d\operatorname {R} }{dt}}={\frac {\operatorname {R} (z,t+dt)-\operatorname {R} (z,t)}{dt}}=\operatorname {R} (z,t){\frac {\operatorname {R} (z,dt)-1}{dt}}=-{\frac {i}{\hbar }}L_{z}\operatorname {R} (z,t)\\[1.1ex]\Rightarrow {}&\operatorname {R} (z,t)=\exp \left(-{\frac {i}{\hbar }}\,t\,L_{z}\right)\end{aligned}}$ Similar to the translation operator, if we are given a Hamiltonian $H$ which is rotationally symmetric about the $z$-axis, $[L_{z},H]=0$ implies $[\operatorname {R} (z,t),H]=0.$
This result means that angular momentum is conserved. For the spin angular momentum about, for example, the $y$-axis, we just replace $L_{z}$ with $ S_{y}={\frac {\hbar }{2}}\sigma _{y}$ (where $\sigma _{y}$ is the Pauli Y matrix) and we get the spin rotation operator $\operatorname {D} (y,t)=\exp \left(-i{\frac {t}{2}}\sigma _{y}\right).$
Effect on the spin operator and quantum states See also: Rotation group SO(3) § A note on Lie algebra, and Change of basis § Endomorphisms Operators can be represented by matrices. From linear algebra one knows that a certain matrix $A$ can be represented in another basis through the transformation $A'=PAP^{-1}$ where $P$ is the basis transformation matrix. Suppose the vectors $b$ and $c$ are the $z$-axis in one basis and in another basis, respectively; both are perpendicular to the $y$-axis, with a certain angle $t$ between them. The spin operator $S_{b}$ in the first basis can then be transformed into the spin operator $S_{c}$ of the other basis through the following transformation: $S_{c}=\operatorname {D} (y,t)S_{b}\operatorname {D} ^{-1}(y,t)$ From standard quantum mechanics we have the known results $ S_{b}|b+\rangle ={\frac {\hbar }{2}}|b+\rangle $ and $ S_{c}|c+\rangle ={\frac {\hbar }{2}}|c+\rangle $ where $|b+\rangle $ and $|c+\rangle $ are the spin-up states in their corresponding bases. So we have: ${\frac {\hbar }{2}}|c+\rangle =S_{c}|c+\rangle =\operatorname {D} (y,t)S_{b}\operatorname {D} ^{-1}(y,t)|c+\rangle \Rightarrow $ $S_{b}\operatorname {D} ^{-1}(y,t)|c+\rangle ={\frac {\hbar }{2}}\operatorname {D} ^{-1}(y,t)|c+\rangle $ Comparison with $ S_{b}|b+\rangle ={\frac {\hbar }{2}}|b+\rangle $ yields $|b+\rangle =D^{-1}(y,t)|c+\rangle $. This means that if the state $|c+\rangle $ is rotated about the $y$-axis by an angle $t$, it becomes the state $|b+\rangle $, a result that can be generalized to arbitrary axes.
See also • Symmetry in quantum mechanics • Spherical basis • Optical phase space
References • L.D. Landau and E.M. Lifshitz: Quantum Mechanics: Non-Relativistic Theory, Pergamon Press, 1985 • P.A.M. Dirac: The Principles of Quantum Mechanics, Oxford University Press, 1958 • R.P. Feynman, R.B. Leighton and M. Sands: The Feynman Lectures on Physics, Addison-Wesley, 1965
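The spin-rotation formula derived above lends itself to a quick numerical check. The following is an illustrative sketch, not part of the article: it assumes NumPy and SciPy are available, sets $\hbar = 1$, and uses a helper name of our own (D_y). It verifies that $D(y,t)=\exp(-i t \sigma_y/2)$ maps the spin-up state along $z$ to the spin-up eigenstate along the rotated axis $n=(\sin t, 0, \cos t)$, and that rotations about a fixed axis compose additively.

```python
# Minimal numerical check of the spin rotation operator D(y, t) = exp(-i t sigma_y / 2).
# Illustrative sketch only (hbar set to 1); the helper name D_y is ours.
import numpy as np
from scipy.linalg import expm

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def D_y(t):
    """Spin rotation operator about the y-axis by angle t."""
    return expm(-1j * t / 2 * sigma_y)

t = 0.7
up_z = np.array([1, 0], dtype=complex)     # spin-up state along z
rotated = D_y(t) @ up_z                    # state rotated about y by angle t

# The rotated state should be the spin-up eigenstate along n = (sin t, 0, cos t):
S_n = 0.5 * (np.sin(t) * sigma_x + np.cos(t) * sigma_z)
print(np.allclose(S_n @ rotated, 0.5 * rotated))          # True: eigenvalue +1/2

# D is unitary, and rotations about the same axis compose additively:
print(np.allclose(D_y(t).conj().T @ D_y(t), np.eye(2)))   # True
print(np.allclose(D_y(0.3) @ D_y(0.4), D_y(0.7)))         # True
```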
Wikipedia
Typeface Reveals Spatial Economical Patterns Ruixian Ma1,2, Wei Wang1, Fan Zhang1, Kyuha Shim1,3 & Carlo Ratti1 An Author Correction to this article was published on 24 December 2019. This article has been updated. Understanding the socioeconomic and demographic characteristics of an urban region is vital for policy-making, urban management, and urban planning. Auditing socioeconomic and demographic patterns traditionally entails collecting large amounts of data through human-participant surveys, which are usually costly and time consuming. Even with newly developed computational methods, amenity characteristics such as typeface, color, and graphic element choices are still missing at the city scale, although such characteristics have a strong influence on personal preferences. Currently, researchers tend to use large-scale street view imagery to uncover physical and socioeconomic patterns. In this research, we first propose a framework that uses a deep convolutional neural network to recognize typefaces from street view imagery in London. Second, we analyze the relationship between 11 typefaces and the average household income in 77 wards of London. The results show that the typefaces used in a neighborhood are highly correlated with economic and demographic factors. Typeface could be an alternative metric to evaluate economic and demographic status in large-scale urban regions. More generally, typeface can also act as a key visual characteristic of a city.
Researchers and policy-makers have been studying socioeconomic patterns for decades. Uncovering underlying socioeconomic patterns such as household income distribution could facilitate the effective allocation of city resources, further fulfilling the needs of urban dwellers. The income distribution is shown to be related to where residents with different income levels live1. Three factors are generally used to measure living preferences, namely, accessibility, space, and environmental amenities2. Accessibility is the distance to shops or companies' locations. Pioneers, such as William Alonso3, measured accessibility based on t/q. Later, Brueckner et al. observed that people tend to move to neighborhoods where amenities fit their expenditure requirements4. Environmental amenities include natural features, neighborhood characteristics, and so on. To date, computational methods that use publicly available data to extract natural features and predict socioeconomic data have had a remarkable impact. Jean et al. accurately predicted spatial poverty between 2013 and 2015 across five African countries using satellite imagery (Google Static Maps)5. Apart from natural features, Gebru et al. used vehicles extracted from Google Street View images to predict income, race, education, and voting patterns. The study successfully validated that socioeconomic patterns can be predicted from objective characteristics of neighborhoods6. Similarly, by extracting scene information only from street view images, a deep learning model can predict daily human mobility patterns on urban streets7. However, few studies have looked into individual characteristics within neighborhoods, such as amenity characteristics. We assume that amenity characteristics matter for household income as well as for other aspects. For example, London Covent Garden and Old Street have different neighborhood styles and amenity characteristics.
People with similar income may prefer living in different areas because of different area characteristics. In other words, the quantity or accessibility of amenities cannot adequately represent incomes in different areas. Under certain circumstances, amenity characteristics could better describe household incomes. To the best of the authors' knowledge, a study is still to be conducted to discover the patterns between amenity characteristics and household incomes at the city scale. It has long been accepted that typeface is one of the key elements of amenity characteristics8, and the typefaces that appear on signage and posters can also indicate people's aesthetic preferences. Here, we propose the use of typeface to predict household income by only using publicly available data. Collecting city-scale typeface usage data still presents many challenges because identifying the thousands of typeface styles requires professional training. Therefore, the traditional methods of collecting demographic data, such as crowdsourcing or door-to-door studies, will not apply to typeface data retrieval. Computational text and typeface recognition methods have been proposed in recent years. Jaderberg et al. spotted text in natural images9. Wang and Chen recognized typefaces from wild scene backgrounds10,11. However, identifying typeface style from Google Street View images is more challenging because they have lower resolution and higher distortion than other sources of images, such as Flickr and Panoramio12,13. To address the issues discussed above, we propose a framework in this study. First, we generate training data to train a deep learning model, a Convolutional Neural Network (CNN)14, to recognize typefaces. Second, we map the city-scale typeface data to the corresponding geospatial properties for a correlation analysis between typefaces and socioeconomics. In this work, we collect 59,515 typeface images from 748,471 Google Street View images in 77 wards in London. We take the distribution of typefaces and amenity types as the predictors and the household income (https://data.london.gov.uk/) as the response in a multivariate regression, obtaining an R2 of 0.552, which considerably outperformed the result of only using the distribution of amenity types as the predictors (R2 = 0.297). Through a correlation analysis, we find that the typefaces are highly correlated with amenity categories. The result verifies some knowledge that could previously only be explained by a designer's instinct. For example, designers tend to use Sans-Serif and Serif typefaces in finance-related industries; our result shows that these two typefaces have the two highest correlation coefficients with finance. Furthermore, we use Spearman correlation to relate the 11 typefaces to household incomes. We find that the relationships between them vary considerably. For example, Serif is positively correlated with income, whereas Sans-Serif is negatively correlated with income. Thus, different typeface selections could attract residents with different household incomes. In summary, this research contributes to the following directions: Typeface types and the most common amenity categories are correlated at the neighborhood scale. Typeface can act as a metric to measure socio-economic characteristics.
Typeface impression We collected 900 typefaces that frequently appear in urban streets, including Helvetica, Gill Sans, Times New Roman, etc.
To obtain a better recognition accuracy and demonstrate a clearer relationship between typefaces and economic factors, we grouped the 900 typefaces into the 11 most commonly used typeface classes15,16, such as Serif, Sans Serif, and Script. Considering that the weight of a font could change its visual influence17, we built two categories for each of Serif, Sans Serif, and Script, in terms of Regular and Bold. Owing to the smaller number of such typefaces and their inconspicuous weight, we set Decorative, Casual, and Blackletter to only have a regular weight. Despite being a type of Serif typeface, Slab also has a squared stroke at the end; therefore, we set Slab as an individual class in our classification system. We recognized 59,515 text images and their corresponding typefaces from 748,471 Google Street View images in central London. Sans-Serif is the most-used typeface in the city, occupying over 25% of the typeface database. This pattern makes sense to designers because Sans Serif is generally friendlier and more preferred than the other typefaces18,19. In particular, both Helvetica and Gill Sans (Sans Serif category) frequently appear in London. Decorative and Serif are the second- and third-most popular typefaces, respectively. Furthermore, as a typeface affects our perceptions, we followed Henderson's20 typeface emotion experiment, which identifies potential trade-off impressions caused by typefaces. We then clustered these typefaces into 9 of our 11 commonly used typeface classes. Some of the typefaces were already in our 900-typeface dataset; for example, Gill Sans, Arial, and Garamond are in Sans Serif. We asked five designers to map the rest of the typefaces scored by Henderson's experiment. From Fig. 1, we can see that human perceptions of different typefaces vary drastically. For example, Serif had the highest score regarding honesty. This finding explains why Serif is extensively used for finance amenities. Sans-Serif has the lowest score regarding innovation, whereas Decorative has the highest innovation score because it always delivers an innovative and engaging feeling. In this connection, we believe that the presence and frequency of different typefaces is associated with local amenities and socio-economics. Nine-class typeface impressions.
Correlation between typefaces and amenity types How to choose an appropriate typeface to enhance consumer purchasing behavior has been examined in previous works, such as Doyle's font appropriateness study21 and Ulrich's package design guidelines22. However, few studies have examined the relationships between amenity types and typefaces in a spatial context. Consequently, we calculate Spearman's rank correlation coefficient to quantify the relationship between typefaces and amenity data. In this case, we only use the sample pairs of a typeface and an amenity type that matched with each other. In total, 5,238 amenities with their corresponding typefaces were obtained. Details about the matching approach are elaborated in the Methods section. As demonstrated in Fig. 2, Sans Serif, Serif, and Decorative present high correlations with most of the amenity types. Indeed, our data also show that these three typefaces are the top three most commonly used typefaces in the city. By comparing the correlation coefficients of different typefaces with the same amenity type, we can determine which typefaces are preferred for this amenity type.
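Such rank correlations are straightforward to compute once the matched (typeface, amenity, ward) records are available. The sketch below is ours, not the authors': it assumes pandas and SciPy, uses hypothetical column names, and aggregates counts per ward; the paper does not spell out its exact aggregation scheme, which may differ.

```python
# Illustrative sketch: Spearman rank correlation between the per-ward frequency of
# one typeface class and the per-ward frequency of one amenity category, computed
# from matched (typeface, amenity, ward) records.  Column names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

def typeface_amenity_correlation(matches: pd.DataFrame, typeface: str, amenity: str):
    """matches: one row per matched text image, with 'ward', 'typeface', 'amenity' columns."""
    x = matches["typeface"].eq(typeface).groupby(matches["ward"]).sum()  # typeface counts per ward
    y = matches["amenity"].eq(amenity).groupby(matches["ward"]).sum()    # amenity counts per ward
    rho, p_value = spearmanr(x, y)
    return rho, p_value

# Toy demonstration with a few fabricated records:
matches = pd.DataFrame({
    "ward":     ["W1", "W1", "W2", "W2", "W3", "W3"],
    "typeface": ["Serif", "Sans Serif", "Serif", "Serif", "Sans Serif", "Script"],
    "amenity":  ["finance", "restaurant", "finance", "finance", "nightclub", "nightclub"],
})
rho, p = typeface_amenity_correlation(matches, "Serif", "finance")
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```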
For example, finance has a higher correlation with Sans Serif and Serif than with the other typefaces, with correlation coefficients of ρ = 0.81 and ρ = 0.78, respectively. This result indicates that finance-related industries, such as banks, prefer to use Serif and Sans Serif typefaces. For instance, the signage of Santander and HSBC uses Serif, and Bank of America uses Sans Serif. Through the overall observation of Figs. 1 and 2, we can better explain why an amenity chooses such typefaces. It can also be interpreted that banks consider honesty and credibility as important impressions. Notably, both Sans Serif and Serif are considerably correlated with finance because they have a high degree of honesty impression. Correlation coefficients between typefaces and amenity categories. The x- and y-axes refer to the typeface types and amenity categories, respectively. The value represents the corresponding correlation coefficient between a pair of typeface and amenity category. Another interesting example that can be discovered in Fig. 2 is that Serif has a high correlation coefficient with most amenity types. However, nightclubs are less associated with the Serif typeface than most other amenity types. This presumably happens because the Serif typeface gives readable and honest impressions, and these features are not what nightclubs typically attempt to deliver to their customers. By contrast, the top two correlated typeface types used in nightclubs are Script and Decorative, which both have higher innovation and warmth impressions than the other typeface types. The preliminary analysis and results demonstrated the potential usage of typefaces for the evaluation of local urban functions and socio-economics. In order to further examine the additional contribution of typefaces when the amenity information is controlled for, we conducted multivariate regressions in the following part.
How the presence of typefaces is associated with socio-economic characteristics We adopted multivariate regression to explore how well typefaces can explain the variation of household incomes. As a control group, we also take amenity information into the regression analysis. In detail, we built three models with different variables to model the variation of household income, as shown in Eqs. 1, 2 and 3. By only taking amenity information as the explanatory variables, the household income is modeled as follows: $$\text{Amenity model (Model 1):}\quad \log(Income)={\beta }_{0}+{\beta }_{1}{A}_{1}+{\beta }_{2}{A}_{2}+\cdots +{\beta }_{i}{A}_{i}+\varepsilon $$ where Income refers to the household income value of a ward in London, and Ai represents the percentage of amenity type i in a particular ward. To deal with the skewed distribution of household income, we apply a logarithmic transformation to the values of household income in order to learn a more robust regression model. Similarly, by only taking typefaces as the explanatory variables, the household income is modeled as: $$\text{Typeface model (Model 2):}\quad \log(Income)={\beta }_{0}+{\beta }_{1}{T}_{1}+{\beta }_{2}{T}_{2}+\cdots +{\beta }_{i}{T}_{i}+\varepsilon $$ where Ti refers to the percentage of typeface i in a particular ward.
Finally, by involving both typefaces and amenities in the household income model, we have: $$\text{Combined model (Model 3):}\quad \log(Income)={\beta }_{0}+\sum _{i}{\beta }_{i}{T}_{i}+\sum _{j}{\beta }_{j}{A}_{j}+\varepsilon $$ where Ti are the explanatory variables and Ai serve as the controlled variables. In order to measure the maximum degree of the relationship between typeface/amenity information and household incomes, we use the whole sample of typeface data (59,515) and amenity data (21,905). Fig. 3 depicts the spatial distribution of median household income and three typefaces. For each typeface, the value is calculated as the ratio of the number of that typeface to the total number of typefaces in a ward. We can see that the distributions of the three typefaces present clear spatial patterns and vary from each other. Spatial distribution of (A) median household income and (B–D) the ratio of three typefaces, namely Serif, Sans Serif, and Decorative, in each ward. Table 1 presents the detailed coefficients and R2 of the three models. Generally, when combining amenity types and typeface data, they explain household income better (R2 = 0.552) than using the typefaces (R2 = 0.410) or amenity types (R2 = 0.297) individually. The determination coefficients R2 generally indicate that the variation of household income can be better explained by typeface information than by amenity information (0.410 vs. 0.297), and that typeface can explain extra variation of household income in addition to amenity information (0.552 vs. 0.297). Table 1 Multivariate regressions between household income and amenities/typefaces. Considering the potential multicollinearity among the amenity and typeface variables, as well as the skewed value distribution of amenity and typeface numbers, we did not look into the coefficients of the regression models, since they are potentially biased when compared with each other. In order to identify the actual contributions of different typefaces to household income, we further employed Spearman's ranking method to obtain the correlation coefficients between household income and each typeface category. A total of 59,515 typefaces are involved in the correlation analysis. As shown in Fig. 4, Serif evidently has the highest correlation coefficient with household income (ρ = 0.44), followed by Script (ρ = 0.23). By contrast, Sans Serif has a negative correlation with household income (ρ = −0.26). This correlation indicates that high-income people live in wards with a large number of Serif and Script typefaces. Thus, we presume that reassuring and readable impressions are popular in high-income areas. Notably, not all types of typefaces have a significant coefficient with income. For example, Decorative, Casual, and Blackletter have very low correlation coefficients with income. Spearman correlation coefficients between household income and typefaces. From these experiments, we can take typeface as an important aesthetic element to understand cities. Apart from its artistic influence on the urban environment, it also correlates significantly with amenity types, household incomes and other socio-economic characteristics.
Discussion and Conclusion This research contributes to the following directions. First, we proposed a state-of-the-art framework to recognize typefaces from Google Street View images. The framework also successfully maps the typeface in the Street View images to its corresponding amenity attributes.
Second, we used the collected data to examine the relationship between typefaces and amenity types in a quantitative manner, which could previously only be discussed qualitatively. For example, Decorative and Script are highly correlated with nightclubs, while Sans Serif and Serif correlate with the finance industry. Our results provide some empirical evidence of how the usage of typeface is linked to the function of an amenity. Finally, we find that typefaces can contribute to the explanation of local socio-economics, and different typefaces have different correlation coefficients with the economy. We mapped the correlation coefficients of the 11 typeface classes with household income. Nevertheless, these results should be taken as preliminary, and we acknowledge two limitations: sample data uncertainty and the lack of multiple-area analysis, which reduce the extent to which the research results can be generalized. Regarding the data uncertainty issue, owing to the fast development of machine learning methods, many studies employ the data yielded by a machine learning model and attempt to explore the associations between predicted data and other data. However, how to incorporate the uncertainty of the samples into the statistical analysis is worth discussing. This work also faces this issue when employing the typeface data generated from machine learning models. The issue can potentially be approached in two ways: first, modeling and including errors as a kind of effect in a special statistical model; second, obtaining the results from different machine learning models and comparing the results. Besides, in future work, we will apply our method to other cities worldwide to test its generalizability and transferability. We believe that different cities will have different typeface patterns, and the relationship between the typefaces and socio-economic indicators may vary. Exploring how different cities' incomes are affected by typefaces is promising. Another interesting topic is to look further into whether typefaces have an impact on other social indicators in our cities, such as sense of safety, human activities, demographics, or even political preferences. For example, election posters can influence the election results23. We believe that political preferences could be revealed via typefaces. Therefore, a strong potential arises in using typeface as a key to unlock many of the still unknown demographic indicators by combining solid spatial statistical and geographic modeling methods24,25. Moreover, typeface usage patterns could also be beneficial to the linguistic landscape field, such as Cook's language of the street study26. Last but not least, we hope our method can help other urban planners, designers, and engineers to reveal the city through the lens of their own interests.
Methods for typeface dataset collection and mapping of amenity attributes Fig. 5 presents the working pipeline in this research. To generate ground truth typeface data, we use a learning-based pipeline, which requires three steps to complete our experiment: Text detection from the Street View image. Extract the localization of the text in the Google Street View image, and retain the geographic information of these Street View images, such as latitude and longitude. Typeface recognition. Identify typefaces from the text images that were just extracted from Google Street View images, and retain the geographic information of the text images. Match typeface with amenity name.
We recognize the semantic text using the text images from the last step and continue to preserve the typeface, geographic information, and so on. Then, we match the name of the amenity with the text identified from the text image to obtain the amenity information corresponding to the text image. Dataset collection workflow (due to copyright, our example Street View photo was taken by ourselves instead of using an actual Google Street View image). These three steps use three different models to form the pipeline shown in Fig. 5, namely, object detection, typeface recognition, and text recognition models. The following three sections describe how these models are used in each step, respectively. All of our visual analytic datasets are based on Google Street View images. The Street View images can be downloaded through the Google Street View Application Programming Interface (API). The API is an HTTP URL that allows users to modify the attributes by latitude and longitude, heading (compass direction of the camera, ranging from 0 to 360), fov (determines the horizontal field of view of the image), and pitch (specifies the up or down angle of the camera). Following Li's study on collecting Google Street View images27, we set the position parameters of our API according to the path of the street, because most signage and posters are placed on buildings and because the Street View image resolution is limited (640 × 640). Hence, in our test, when setting the pitch to 11 and the fov to 45, the signage is likely captured in the image. Meanwhile, while most studies set six headings to obtain a 360° panorama Street View image6,27 at one site, we divide 360° into eight headings so that each captured Street View image will have sufficient pixels for further analysis. In addition, we set Global Positioning System (GPS) coordinates every 15 m to capture valid signage and posters on the street. A longer distance than this setup would miss shops in high-density store areas, and a shorter distance would increase the chances of repeating signage. In total, we set 97,154 sites and collected 748,471 street view images (some places have no valid street images) in central London as of November 2016.
Text detection from street view images To collect the text localization data from more than three quarters of a million street view images, we use the Efficient and Accurate Scene Text detector (EAST)28; it can extract the localization of the text area in the street view images. Running all 748,471 images on an NVIDIA GeForce GTX 1080 took about 42 hours. While collecting street view images from London, besides amenity storefronts, we also collected many residential house images, and the balconies in these residential images are often mistaken for text by the recognition network. This situation also happens when we test other networks such as the Faster R-CNN network29. Therefore, we trained a neural network to filter out residential houses, which include balcony images. We fine-tune a ResNet-101 model and obtain an accuracy of 93% with 84% recall. Under such conditions, we finally retrieved 59,515 text images from 748,471 Google Street View images of London. Although some biases remain in this recognition, the accuracy is acceptable for further analysis.
Typeface recognition Supervised learning methods such as deep learning require a large amount of ground truth data. However, collecting typeface data is time consuming and expensive.
Even accurately labeling typeface data would require professional knowledge. Therefore, synthesizing text images would be our best option to obtain adequately detailed annotations for the typeface dataset. We followed Gupta's30 synthetic approach to generate text on outdoor images of buildings. The essential workflow is to select images that contain no text and then identify an appropriate space to place the text. Technically, synthesizing text on an image is based on the result of image segmentation to determine a region with sufficient continuity to place text. Moreover, the depth data of the image can change the text distortion and transform it according to the surface normal of the region. We use the data pre-generated by Gupta, which rely on Arbelaez's segmentation data31 and Liu's depth data32. Some examples of synthesized images can be found in Fig. 6. Based on these data, we could finally add typeface as one additional attribute to Gupta's implementation30; the typeface class label is selected from our 11 typeface classes. Therefore, when we synthesize an image, we can obtain the text typeface and its localization in the image. We then use this method to generate 91,398 text images as our ground truth images. Throughout the generating process, we use amenity names downloaded from the Google Shops API as the text to synthesize on the images, and each amenity name is considered as one text area. Following this method, we generate text on the possible segmentation areas in the image. Building the typeface training set: synthesizing typefaces on natural images based on segmentation and depth information. Confusion matrix of the 11-class typeface recognition result. Once we have successfully synthesized the data, we fine-tune a ResNet-18 network33 that was pretrained on the ImageNet dataset34. The model achieved higher accuracy for the typeface recognition and saved a large amount of time compared with training the network from scratch. The dataset is split into a training set (80%) and a test set (20%). In the training process, we set the learning rate to 0.001, momentum to 0.9, and weight decay to 0.0001. The optimization took 50 epochs and obtained a mean accuracy of 76% for the 11 classes on the test dataset. Fig. 7 presents the confusion matrix of the classification model.
Match typeface with amenity attributes Amenity type data, also known as "Point of Interest" data, have been widely used to help evaluate the development and improvement of the urban built environment35. However, given that collecting amenity typefaces is extremely difficult, few studies have been conducted to understand a city through the perspective of amenity typefaces. With the text image data we obtained from the previous steps, we build a mapping system to match each text image to its corresponding amenity properties. As two preparatory steps, the details are shown below: Text recognition We use the Google Street View images as input images to train a Convolutional Recurrent Neural Network (CRNN)36 to recognize text characters; then, we can create a text image dataset with corresponding geo-locations. Amenity name collection We use the Google Places API to collect the ten most common amenity categories in the city. In total, 21,905 shops were obtained, with their detailed information including locations and categories. According to35, the average number of amenities in the city is around 26,800.
In addition, considering that the area we focus on is only the city center of London, we believe that this number of amenities covers more than 80% of the amenities in the city. Finally, we build a mapping system to link the amenity names we collected from the Google Places API with the text images. The system works as follows: we choose a text image with a geographic location and recognized letters, and use the location of this text image as the center of a 50-m circle. Then, we attempt to use the recognized word to match every amenity's name. For example, we have a text image in location A, and we recognized it by using the CRNN network36 to obtain the letters, such as "Burberr". From the Google Places API, we know that an amenity named Burberry exists within 50 m of this text image. The letters "Burberr" have a high possibility of being Burberry. Thus, we match the text image "Burberr" with the amenity "Burberry". Therefore, we can also obtain all related amenity information of this text image "Burberr", such as the category of the amenity and the GPS coordinates. Through this method, we successfully retrieved 5,238 text images with corresponding amenity attributes. An amendment to this paper has been published and can be accessed via a link at the top of the paper.
Wheaton, W. C. Income and urban residence: An analysis of consumer demand for location. The Am. Econ. Rev. 67, 620–631 (1977). Fujita, M. Urban economic theory: land use and city size (Cambridge university press, 1989). Alonso, W. et al. Location and land use. toward a general theory of land rent. Locat. land use. Toward a general theory land rent (1964). Brueckner, J. K., Thisse, J.-F. & Zenou, Y. Why is central paris rich and downtown detroit poor?: An amenity-based theory. Eur. economic review 43, 91–107 (1999). Jean, N. et al. Combining satellite imagery and machine learning to predict poverty. Sci. 353, 790–794 (2016). Gebru, T. et al. Using deep learning and google street view to estimate the demographic makeup of neighborhoods across the United States. Proc. Natl. Acad. Sci. 114, 13108–13113 (2017). Zhang, F., Wu, L., Zhu, D. & Liu, Y. Social sensing from street-level imagery: a case study in learning spatio-temporal urban mobility patterns. ISPRS J. Photogramm. Remote. Sens. 153, 48–58 (2019). Martineau, P. The personality of the retail store (Taylor & Francis, 1958). Jaderberg, M., Vedaldi, A. & Zisserman, A. Deep features for text spotting. In European conference on computer vision, 512–528 (Springer, 2014). Wang, Z. et al. Deepfont: Identify your font from an image. In Proceedings of the 23rd ACM international conference on Multimedia, 451–459 (ACM, 2015). Chen, G. et al. Large-scale visual font recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3598–3605 (2014). Zhang, F., Zhou, B., Ratti, C. & Liu, Y. Discovering place-informative scenes and objects using social media photos. Royal Soc. Open Sci. 6, 181375 (2019). Kang, Y. et al. Extracting human emotions at different places based on facial expressions and spatial clustering analysis. Transactions GIS, https://doi.org/10.1111/tgis.12552 (2019). LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nat. 521, 436–444 (2015). Parker, R. C. The 14 biggest e-book design mistakes. DT G (2004). Bringhurst, R. The elements of typographic style (Hartley & Marks Point Roberts, WA, 2004). Bateman, S., Gutwin, C. & Nacenta, M. Seeing things in the clouds: the effect of visual features on tag cloud selections.
In Proceedings of the nineteenth ACM conference on Hypertext and hypermedia, 193–202 (ACM, 2008). Yager, D., Aquilante, K. & Plass, R. High and low luminance letters, acuity reserve, and font effects on reading speed. Vis. research 38, 2527–2531 (1998). Bernard, M., Liao, C. H. & Mills, M. The effects of font type and size on the legibility and reading time of online text by older adults. In CHI '01 Human Factors in Computing Systems, 175–176 (ACM, 2001). Henderson, P. W., Giese, J. L. & Cote, J. A. Impression management using typeface design. J. marketing 68, 60–72 (2004). Doyle, J. R. & Bottomley, P. A. Font appropriateness and brand choice. J. business research 57, 873–880 (2004). Orth, U. R. & Malkewitz, K. Holistic package design and consumer brand impressions. J. marketing 72, 64–81 (2008). Heller, S. To the letter born. https://campaignstops.blogs.nytimes.com/2008/04/02/to-the-letter-born/. The New York Times. Accessed: 2019-07-01 (2019). Chen, M. et al. Developing a data model for understanding geographical analysis models with consideration of their evolution and application processes. Transactions GIS 22, 1498–1521 (2018). Lü, G. et al. Reflections and speculations on the progress in geographic information systems (gis): A geographic perspective. Int. J. Geogr. Inf. Sci. 33, 346–367 (2019). Cook, V. The language of the street. Appl. Linguist. Rev. 4, 43–81 (2013). Li, X. et al. Assessing street-level urban greenery using google street view and a modified green view index. Urban For. & Urban. Green. 14, 675–685 (2015). Zhou, X. et al. East: an efficient and accurate scene text detector. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 5551–5560 (2017). Ren, S., He, K., Girshick, R. & Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, 91–99 (2015). Gupta, A., Vedaldi, A. & Zisserman, A. Synthetic data for text localisation in natural images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2315–2324 (2016). Arbelaez, P., Maire, M., Fowlkes, C. & Malik, J. Contour detection and hierarchical image segmentation. IEEE transactions on pattern analysis machine intelligence 33, 898–916 (2011). Liu, F., Shen, C. & Lin, G. Deep convolutional neural fields for depth estimation from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5162–5170 (2015). He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778 (2016). Deng, J. et al. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, 248–255 (Ieee, 2009). Hidalgo, C. A. & Castañer, E. E. The amenity space and the evolution of neighborhoods. arXiv preprint arXiv:1509.02868 (2015). Shi, B., Bai, X. & Yao, C. An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. IEEE transactions on pattern analysis machine intelligence 39, 2298–2304 (2016). We would like to thank Cesar Hidalgo for his advice about correlations between typeface and household income, we also like to thank Joan Giese for her advice about typeface impressions. Last but not least, we would like to thank Jiaxin Gao for her advice about manuscript. We acknowledge support from MIT Senseable City Lab. 
Senseable City Laboratory, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA Ruixian Ma, Wei Wang, Fan Zhang, Kyuha Shim & Carlo Ratti Alibaba Group, Hangzhou, 310000, China School of Design, Carnegie Mellon University, Pittsburgh, PA, 15213, USA Kyuha Shim R.M. conceived and conducted the experiment(s), analysed the results and selected typefaces for their classification, W.W. conceived and conducted the experiment(s), F.Z. analysed the results and conceived the experiment(s). K.S. selected typefaces and built structures for their classification. C.R. supervised the research. All authors reviewed the manuscript. Correspondence to Ruixian Ma or Fan Zhang. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Ma, R., Wang, W., Zhang, F. et al. Typeface Reveals Spatial Economical Patterns. Sci Rep 9, 15946 (2019) doi:10.1038/s41598-019-52423-y
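The ward-level analysis described in the Results section (regression Models 1 to 3 and the Spearman correlations between income and typeface shares) can be illustrated with a short sketch. The code below is ours, not the authors': it assumes pandas, statsmodels and SciPy, uses synthetic data in place of the real ward table, and all column names are hypothetical.

```python
# Illustrative sketch of the ward-level analysis (not the authors' code).
# Builds a synthetic per-ward table with typeface shares, amenity shares and a
# median household income, then fits Models 1-3 and computes Spearman correlations.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import spearmanr

TYPEFACES = ["Serif", "Sans Serif", "Script", "Decorative"]   # subset, for illustration
AMENITIES = ["finance", "restaurant", "nightclub"]            # subset, for illustration

rng = np.random.default_rng(0)
n = 77  # number of wards in the study
wards = pd.DataFrame(rng.uniform(0, 1, size=(n, len(TYPEFACES))), columns=TYPEFACES)
for a in AMENITIES:
    wards[a] = rng.uniform(0, 1, size=n)
wards["income"] = rng.lognormal(mean=10.5, sigma=0.3, size=n)  # synthetic placeholder

def fit_model(data, predictors):
    """OLS of log(income) on the given per-ward shares, as in Eqs. 1-3."""
    X = sm.add_constant(data[predictors])
    y = np.log(data["income"])           # log-transform to reduce skew, as in the paper
    return sm.OLS(y, X).fit()

m1 = fit_model(wards, AMENITIES)               # Model 1: amenities only
m2 = fit_model(wards, TYPEFACES)               # Model 2: typefaces only
m3 = fit_model(wards, AMENITIES + TYPEFACES)   # Model 3: combined
print(m1.rsquared, m2.rsquared, m3.rsquared)

# Spearman correlation between income and each typeface share:
for t in TYPEFACES:
    rho, p = spearmanr(wards[t], wards["income"])
    print(t, round(rho, 2), round(p, 3))
```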
CommonCrawl
Naming Binary Ionic Compounds
Learning to name ionic compounds is both easy and hard depending on the complexity of the compound. Before we start, though, I just wanted to review a few terms. An ionic compound is a compound held together by ionic bonds. Remember that positively charged ions are called cations, and negatively charged ions are called anions.
A simple binary compound is just what it seems: a simple compound with two elements in it. Naming these is straightforward. The metal cation is named first, followed by the nonmetal anion with the suffix -ide added to the end of it. For example, NaBr is sodium bromide and MgCl2 is magnesium chloride.
Transition metals follow the same naming rules as the simple binary compounds, but with an extra rule added in. These are the elements in the middle of the periodic table, things like zinc, iron and copper. The new rule is that transition metals can form more than one ion, so this has to be accounted for in the naming. You do this by adding Roman numerals in parentheses to the cation. For instance, Fe2+ is iron(II). If I told you the compound was iron chloride, that wouldn't give you the full story; iron(II) chloride tells you the iron carries a 2+ charge, so the formula would be FeCl2. You still name the cation first, followed by the anion with the suffix -ide added to the end of it, but you have to take into account the variations of the metal ions.
Oxyanions have their own suffixes. If an element can form more than one form of oxyanion, the ions get a suffix of either -ate or -ite: the ion with the greater number of oxygen atoms ends in -ate, and the one with the smaller number ends in -ite. If there are four versions of the oxyanion, the smallest also gets the prefix hypo- and the -ite suffix, while the fourth and largest gets the prefix per- and the suffix -ate. For example, ClO- is hypochlorite; it has the hypo- prefix and the -ite suffix because it is the smallest. ClO4- is perchlorate; it has per- for the prefix and -ate for the suffix because it is the largest. In KClO3, the anion is ClO3-, the oxyanion that you saw previously was named chlorate, so the compound is potassium chlorate. Simple monatomic anions keep the -ide ending; for example, S2- is sulfide.
Practice problems (recovered from the exercise fragments on this page):
Give the name of each of the following simple binary ionic compounds. a. NaI b. CaF2 c. Al2S3 d. CaBr2 e. SrO f. AgCl g. CsI h. Li2O
Write the name of each of the following ionic substances, using the system that includes a Roman numeral to specify the charge of the cation. a. FeI3 b. MnCl2 c. HgO d. Cu2S e. CoO f. SnBr4
Identify each case in which the name is incorrect, and give the correct name. a. CaH2, calcium hydride b. PbCl2, lead(IV) chloride c. CrI2, chromium(III) iodide d. Na2S, disodium sulfide e. CuBr2, cupric bromide. Another set from the same page: SnCl4, tin(IV) chloride; Na2O, disodium oxide; Fe2O3, iron(II) oxide; SiO2, silver dioxide.
Sample answers: Al2S3, aluminum sulfide; BaI2, barium iodide; CaBr2, calcium bromide; CsBr, cesium bromide; K2S, potassium sulfide; MgCl2, magnesium chloride; NaBr, sodium bromide; LiBr, lithium bromide; CsF, cesium fluoride.
A student asks: "Hello, name the following simple binary ionic compounds: NaBr, MgCl2. I want to understand the idea behind it. Is there a certain procedure or steps to do it, or will I have to memorize it all? I have a test tomorrow and I wish there were certain steps I can follow." The rules above are exactly that procedure: identify the cation, identify the anion, add -ide to the anion, and add a Roman numeral if the metal can form more than one ion.
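For readers who like to see the procedure written out step by step, the following is an illustrative sketch only (not part of the original page): a tiny Python namer whose lookup tables cover just the examples discussed above, showing the cation-first / anion-plus-"-ide" rule and the Roman numeral rule for variable-charge metals.

```python
# Illustrative sketch: naming a few simple binary ionic compounds.
# The lookup tables cover only the examples above; this is not a general chemistry library.
CATIONS = {"Na": "sodium", "Mg": "magnesium", "Ca": "calcium",
           "K": "potassium", "Li": "lithium", "Cs": "cesium"}
ANIONS = {"Br": "bromide", "Cl": "chloride", "I": "iodide",
          "F": "fluoride", "O": "oxide", "S": "sulfide"}
TRANSITION = {("Fe", 2): "iron(II)", ("Fe", 3): "iron(III)",
              ("Cu", 1): "copper(I)", ("Cu", 2): "copper(II)",
              ("Sn", 4): "tin(IV)", ("Co", 2): "cobalt(II)"}

def name_simple(cation: str, anion: str) -> str:
    """Fixed-charge metal + nonmetal: cation name, then anion name ending in -ide."""
    return f"{CATIONS[cation]} {ANIONS[anion]}"

def name_transition(metal: str, charge: int, anion: str) -> str:
    """Variable-charge metal: the charge is shown as a Roman numeral in parentheses."""
    return f"{TRANSITION[(metal, charge)]} {ANIONS[anion]}"

print(name_simple("Na", "Br"))          # sodium bromide    (NaBr)
print(name_simple("Mg", "Cl"))          # magnesium chloride (MgCl2)
print(name_transition("Fe", 2, "Cl"))   # iron(II) chloride  (FeCl2)
```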
CommonCrawl
\begin{document} \dedicatory{Dedicated to Rolf Schneider on the occasion of his 65th birthday} \begin{abstract} For a convex body $K\subset{\mathbb R}^n$, the $k$th projection function of $K$ assigns to any $k$-dimensional linear subspace of ${\mathbb R}^n$ the $k$-volume of the orthogonal projection of $K$ to that subspace. Let $K$ and $K_0$ be convex bodies in ${\mathbb R}^n$, and let $K_0$ be centrally symmetric and satisfy a weak regularity and curvature condition (which includes all $K_0$ with $\partial K_0$ of class $C^2$ with positive radii of curvature). Assume that $K$ and $K_0$ have proportional $1$st projection functions (i.e., width functions) and proportional $k$th projection functions. For $2\le k<(n+1)/2$ and for $k=3, n=5$ we show that $K$ and $K_0$ are homothetic. In the special case where $K_0$ is a Euclidean ball, we thus obtain characterizations of Euclidean balls as convex bodies of constant width and constant $k$-brightness. \end{abstract} \title{Nakajima's problem: convex bodies of constant width and constant brightness} \section{Introduction and statement of results} Let $K$ be a convex body (a compact, convex set with nonempty interior) in ${\mathbb R}^n$, $n\ge 3$. Assume that, for any line, the length of the projection of $K$ to the line is independent of that line and, for any hyperplane, the volume of the projection of $K$ to the hyperplane is independent of that hyperplane. Must $K$ then be a Euclidean ball? In dimension three, this problem has become known as Nakajima's problem \cite{Nak}; see \cite{Chakrel}, \cite{ChakGroe}, \cite{Croft}, \cite{Gardnerbook}, \cite{Goodey}, \cite{Heil}. It is easy to check that the answer to it is in the affirmative if $K$ is a convex body in ${\mathbb R}^3$ of class $C^2$. For general convex bodies in ${\mathbb R}^3$, the problem is much more difficult and a solution has only been found recently. Let $\mathbb{G}(n,k)$ denote the Grassmannian of $k$-dimensional linear subspaces of ${\mathbb R}^n$. A convex body $K$ in ${\mathbb R}^n$ is said to have {\em constant $k$-brightness}, $k\in\{1,\ldots,n-1\}$, if the $k$-volume $V_k(K\vert U)$ of the orthogonal projection of $K$ to the linear subspace $U\in\mathbb{G}(n,k)$ is independent of that subspace. The map $$ \pi_k\colon\mathbb{G}(n,k)\to{\mathbb R},\qquad U\mapsto V_k(K\vert U), $$ is referred to as the {\em $k$th projection function} of $K$. Hence a convex body $K$ has constant width (i.e.\ constant 1-brightness) if it has constant $1$st projection function (width function). \begin{thm}[\cite{Howard:NP}]\label{ThmHoward} Let $K$ be a convex body in ${\mathbb R}^n$ having constant width and constant $2$-brightness. Then $K$ is a Euclidean ball. \end{thm} This theorem provides a complete solution of the Nakajima problem in ${\mathbb R}^3$ for general convex bodies. In the present paper, we continue this line of research. Our main result complements Theorem \ref{ThmHoward} by covering the cases of convex bodies of constant width and constant $k$-brightness with $2\le k<(n+1)/2$ or $k=3$, $n=5$. \begin{thm}\label{ThmHHnew} Let $K$ be a convex body in ${\mathbb R}^n$ having constant width and constant $k$-brightness with $2\le k<(n+1)/2$, or $k=3, n=5$. Then $K$ is a Euclidean ball. \end{thm} The preceding two theorems can be generalized to pairs of convex bodies $K,K_0$ having proportional projection functions, provided that $K_0$ is centrally symmetric and has a minimal amount of regularity. 
\begin{thm}\label{ThmHHnewgen} Let $K,K_0$ be convex bodies in ${\mathbb R}^n$, and let $K_0$ be centrally symmetric with positive principal radii of curvature on some Borel subset of the unit sphere of positive measure. Let $2\le k<(n+1)/2$, or let $k=3,n=5$ in which case assume the surface area measure $S_4(K_0,\cdot)$ of $K_0$ is absolutely continuous with positive density. Assume that there are constants $\alpha,\beta>0$ such that $$ \pi_1(K)=\alpha\, \pi_1(K_0)\qquad\text{and}\qquad \pi_k(K)=\beta\, \pi_k(K_0). $$ Then $K$ and $K_0$ are homothetic. \end{thm} As the natural measure on the unit sphere, ${\mathbb S}^{n-1}$, we use the invariant Haar probability measure (i.e.\ spherical Lebesgue measure), or what is the same thing the $(n-1)$-dimensional Hausdorff measure, $\mathcal{H}^{n-1}$, normalized so that the total mass is one. We view the principal radii of curvature as functions of the unit normal, despite the fact that the unit normal map is in general a set valued function (cf.\ the beginning of Section 2 below). The assumption that the principal radii of curvature are positive on a set of positive measure means that there is a Borel subset of ${\mathbb S}^{n-1}$ of positive measure such that on this set the reverse Gauss map is single valued, differentiable (in a generalized sense) and the eigenvalues of the differential are positive. Explicitly, this condition can be stated in terms of second order differentiability properties of the support function (again see Section~\ref{sec:prelim}). In particular, it is certainly satisfied if $K_0$ is of class $C^2_+$, and therefore letting $K_0$ be a Euclidean ball recovers Theorem~\ref{ThmHHnew}. The required condition allows for parts of $K_0$ to be quite irregular. For example if $\partial K_0$ has a point that has a small neighborhood where $\partial K_0$ is $C^2$ with positive Gauss-Kronecker curvature, then the assumption will hold, regardless of how rough the rest of the boundary is. For example a ``spherical polyhedron'' constructed by intersecting a finite number of Euclidean balls in ${\mathbb R}^n$ will satisfy the condition. More generally if the convex body $K_0$ is an intersection of a finite collection of bodies of class $C^2_+$, it will satisfy the condition. Theorem~\ref{ThmHHnewgen} extends the main results in \cite{HH:NPsmooth} for the range of dimensions $k,n$ where it applies by reducing the regularity assumption on $K_0$ and doing away with any regularity assumptions on $K$. However, the classical Nakajima problem, which concerns the case $n=3$ and $k=2$, is not covered by the present approach. Despite recent progress on the Nakajima problem various questions remain open. For instance, can Euclidean balls be characterized as convex bodies having constant width and constant $(n-1)$-brightness if $n\ge 4$? This question is apparently unresolved even for smooth convex bodies. A positive answer is available for smooth convex bodies of revolution (cf.~\cite{HH:NPsmooth}). From the arguments of the present paper the following proposition is easy to check. \begin{prop} Let $K,K_0\subset{\mathbb R}^n$ be convex bodies that have a common axis of revolution. Let $K_0$ be centrally symmetric with positive principal radii of curvature almost everywhere. Assume that $K$ and $K_0$ have proportional width functions and proportional $k$th projection functions for some $k\in\{2,\ldots,n-2\}$. Then $K$ and $K_0$ are homothetic. \end{prop} It is a pleasure for the authors to dedicate this paper to Rolf Schneider. 
Professor Rolf Schneider has been a large source of inspiration for countless students and colleagues all over the world. His willingness to communicate and share his knowledge make contact with him a pleasurable and mathematically rewarding experience. The second named author has particularly been enjoying many years of support, personal interaction and joint research. \section{Preliminaries}\label{sec:prelim} Let $K$ be a convex body in ${\mathbb R}^n$, and let $h_K\colon {\mathbb R}^n\to {\mathbb R}$ be the support function of $K$, which is a convex function. For $x\in {\mathbb R}^n$ let $\partial h_K(x)$ be the subdifferential of $h_K$ at $x$. This is the set of vectors $v\in {\mathbb R}^n$ such that the function $h_K-\langle v,\cdot\rangle$ achieves its minimum at $x$. It is well known that, for all $x\in {\mathbb R}^n$, $\partial h_K(x)$ is a nonempty compact convex set and is a singleton precisely at those points where $h_K$ is differentiable in the classical sense (cf.~\cite[pp.~30--31]{Schneider1993}). For $u\in {\mathbb S}^{n-1}$ the set $\partial h_K(u)$ is exactly the set of $x\in \partial K$ such that $u$ is an outward pointing normal to $K$ at $x$ (cf.~\cite[Thm~1.7.4]{Schneider1993}). But this is just the definition of the reverse Gauss map (which in general is not single valued, but a set valued function) and so the function $u\mapsto \partial h_K(u)$ gives a formula for the reverse Gauss map in terms of the support function. In the following, by ``almost everywhere'' on the unit sphere or by ``for almost all unit vectors'' we mean for all unit vectors with the possible exclusion of a set of spherical Lebesgue measure zero. A theorem of Aleksandrov states that a convex function has a generalized second derivative almost everywhere, which we will view as a positive semidefinite symmetric linear map rather than a symmetric bilinear form. This generalized derivative can either be defined in terms of a second order approximating Taylor polynomial at the point, or in terms of the set valued function $x\mapsto \partial h_K(x)$ being differentiable in the sense of set valued functions (both these definitions are discussed in \cite[p.~32]{Schneider1993}). At points where the Aleksandrov second derivative exists $\partial h_K$ is single valued. Because $h_K$ is positively homogeneous of degree one, if it is Aleksandrov differentiable at a point $x$, then it is Aleksandrov differentiable at all points $\lambda x$ with $\lambda>0$. Then Fubini's theorem implies that not only is $h_K$ Aleksandrov differentiable at $\mathcal{H}^n$ almost all points of ${\mathbb R}^n$, but it is also Aleksandrov differentiable at ${\mathcal H}^{n-1}$ almost all points of ${\mathbb S}^{n-1}$. For points $u\in {\mathbb S}^{n-1}$ where it exists, let $d^2h_K(u)$ denote the Aleksandrov second derivative of $h_K$. Let $u^\perp$ denote the orthogonal complement of $u$. Then the restriction $d^2h_K(u) \vert u^\perp$ is the derivative of the reverse Gauss map at $u$. The eigenvalues of $ d^2h_K(u) \vert u^\perp$ are the principal radii of curvature at $u$. As the discussion above shows these exist at almost all points of ${\mathbb S}^{n-1}$. A useful tool for the study of projection functions of convex bodies are the surface area measures. An introduction to these Borel measures on the unit sphere is given in \cite{Schneider1993}, a more specialized reference (for the present purpose) is contained in the preceding work \cite{HH:NPsmooth}. 
The top order surface area measure $S_{n-1}(K,\cdot)$ of the convex body $K\subset{\mathbb R}^n$ can be obtained as the $(n-1)$-dimensional Hausdorff measure $\mathcal{H}^{n-1}$ of the reverse spherical image of Borel sets of the unit sphere $\mathbb{S}^{n-1}$. The Radon-Nikodym derivative of $S_{n-1}(K,\cdot)$ with respect to the spherical Lebesgue measure is the product of the principal radii of curvature of $K$. Since for almost every $u\in {\mathbb S}^{n-1}$, the radii of curvature of $K$ at $u\in\mathbb{S}^{n-1}$ are the eigenvalues of $d^2h_K(u)\vert u^\perp$, the Radon-Nikodym derivative of $S_{n-1}(K,\cdot)$ with respect to spherical Lebesgue measure is the function $u\mapsto \det\left( d^2h_K(u) \vert u^\perp\right)$, which is defined almost everywhere on ${\mathbb S}^{n-1}$. In particular, if $S_{n-1}(K,\cdot)$ is absolutely continuous with respect to spherical Lebesgue measure, the density function is just the Radon-Nikodym derivative. For explicit definitions of these and other basic notions of convex geometry needed here, we refer to \cite{Schneider1993} and \cite{HH:NPsmooth}. The following lemma contains more precise information about the Radon-Nikodym derivative of the top order surface area measure. We denote the support function of a convex body $K$ by $h$, if $K$ is clear from the context. For a fixed unit vector $u\in\mathbb{S}^{n-1}$ and $i\in\mathbb{N}$, we also put $ \omega_i:=\left\{v\in \mathbb{S}^{n-1}:\langle v,u\rangle\ge 1-(2i^2)^{-1}\right\}$, whenever $u$ is clear from the context. Hence $\omega_i\downarrow \{u\}$, as $i\to\infty$, in the sense of Hausdorff convergence of closed sets. \begin{lemma}\label{RND} Let $K\subset{\mathbb R}^n$ be a convex body. If $u\in\mathbb{S}^{n-1}$ is a point of second order differentiability of the support function $h$ of $K$, then $$ \lim_{i\to\infty}\frac{S_{n-1}(K,\omega_i)}{\mathcal{H}^{n-1}(\omega_i)}= \det\left(d^2h(u)\vert u^\perp\right). $$ \end{lemma} \begin{proof} This is implicitly contained in the proof of Hilfssatz 2 in \cite{Leichtweiss88}. A similar argument, in a slightly more involved situation, can be found in \cite{Hug.aff}. \end{proof} An analogue of Lemma \ref{RND} for curvature measures is provided in \cite[(3.6) Hilfssatz]{Schneider1979}. As another ingredient in our approach to Nakajima's problem, we need two simple algebraic lemmas. Here we write $|M|$ for the cardinality of a set $M$. If $x_1,\dots,x_n$ are real numbers and $I=\{i_1,\dots,i_k\}\subseteq \{1,\dots,n\}$ we set $x_I:=x_{i_1}\dots x_{i_k}$. We also put $x_{\varnothing}:=1$. \begin{lemma}\label{alg2} Let $b>0$ be fixed. Let $x_1,\ldots,x_{n-1},y_1,\ldots,y_{n-1}$ be nonnegative real numbers satisfying $$ x_i+y_i=2\qquad \text{and}\qquad x_I+y_I=2b $$ for all $i=1,\ldots,n-1$ and all $I\subset\{1,\ldots,n-1\}$ with $|I|=k$, where $k\in\{2,\ldots,n-2\}$. Then $|\{x_1,\ldots,x_{n-1}\}|\le 2$ and $|\{y_1,\ldots,y_{n-1}\}|\le 2$. \end{lemma} \begin{proof} We can assume that $x_1\le\dots\le x_{n-1}$. Then we have $y_1\ge \dots\ge y_{n-1}$. If $x_1=0$, then $y_1=2$. Further, for $I'\subset\{2,\ldots,n-1\}$ with $|I'|=k-1$, we have $y_1y_{I'}=2b$, hence $y_{I'}=b$. Since $k\ge 2$, we get $y_2,\ldots,y_{n-1}>0$. Moreover, since $k-1\le n-3$, we conclude that $y_2=\dots=y_{n-1}$. This shows that also $x_2=\dots=x_{n-1}$, and thus $|\{x_1,\ldots,x_{n-1}\}|\le 2$ and $|\{y_1,\ldots,y_{n-1}\}|\le 2$. If $y_{n-1}=0$, the same conclusion is obtained by symmetry. If $x_1>0$ and $y_{n-1}>0$, then $x_1,\ldots,x_{n-1},y_1,\ldots,y_{n-1}>0$. 
Now we fix any set $J\subseteq\{1,\ldots,n-1\}$ with $|J|=k+1$. The argument at the beginning of the proof of Lemma 4.2 in \cite{HH:NPsmooth} shows that $|\{x_i:i\in J\}|\le 2$. Since $k+1\ge 3$, we first obtain that $|\{x_1,\ldots,x_{n-1}\}|\le 2$, and then also $|\{y_1,\ldots,y_{n-1}\}|\le 2$. \end{proof} \begin{lemma}\label{alg3} Let $n\ge 4$, and let $b>0$ be fixed. Let $x_1,\ldots,x_{n-1},y_1,\ldots,y_{n-1}$ be nonnegative real numbers satisfying $$ x_i+y_i=2\qquad \text{and}\qquad x_I+y_I=2b $$ for all $i=1,\ldots,n-1$ and all $I\subset\{1,\ldots,n-1\}$ with $|I|=n-2$. Then \begin{equation}\label{newstar} \prod_{l\neq i,j}x_l=\prod_{l\neq i,j}y_l=b \end{equation} whenever $i,j\in\{1,\ldots,n-1\}$ are such that $x_i\neq x_j$. \end{lemma} \begin{proof} For the proof, we may assume that $i=1$ and $j=n-1$, to simplify the notation. Then we have \begin{align*} x_1\cdots x_{n-2}+y_1\cdots y_{n-2}&=2b,\\ x_2\cdots x_{n-1}+y_2\cdots y_{n-1}&=2b, \end{align*} which implies that $$ x_2\cdots x_{n-2}(x_{n-1}-x_1)+y_2\cdots y_{n-2}(y_{n-1}-y_1)=0. $$ Moreover, $x_1+y_1=2=x_{n-1}+y_{n-1}$ yields $$ x_{n-1}-x_1=y_1-y_{n-1}\neq 0, $$ and thus $$ x_2\cdots x_{n-2}=y_2\cdots y_{n-2}. $$ Hence \begin{align*} 2x_2\cdots x_{n-2}&=(x_1+y_1)x_2\cdots x_{n-2}=x_1x_2\cdots x_{n-2}+y_1x_2\cdots x_{n-2}\\ &=x_1x_2\cdots x_{n-2}+y_1y_2\cdots y_{n-2}=2b, \end{align*} and thus $$ b=x_2\cdots x_{n-2}=y_2\cdots y_{n-2}. $$ \end{proof} \section{Proofs} First, by possibly dilating $K$, we can assume that $\alpha=1$. Hence the assumption can be stated as \begin{equation}\label{a1} \pi_1(K)= \pi_1(K_0)\qquad\text{and}\qquad \pi_k(K)=\beta\, \pi_k(K_0) \end{equation} for some $k\in\{2,\ldots,n-2\}$. Let $K^*$ denote the reflection of $K$ in the origin. Then \eqref{a1} yields that $$ K+K^*=2K_0\qquad\text{and}\qquad V_{k}(K\vert U)=\beta\, V_{k}(K_0\vert U) $$ for all $U\in\mathbb{G}(n,k)$. Minkowski's inequality (cf.\ \cite{Schneider1993}) then implies that \begin{align*} V_{k}(2K_0\vert U)=\,&V_{k}(K\vert U+K^*\vert U)\\ \ge\,&\left(V_{k}(K\vert U)^{\frac{1}{k}}+V_{k} (K^*\vert U)^{\frac{1}{k}}\right)^{k}\\ =\,&\left(2V_{k}(K\vert U)^{\frac{1}{k}}\right)^{k}\\ =\,&\beta\, V_{k}(2K_0\vert U). \end{align*} Equality in Minkowski's inequality will hold if and only if $K^*\vert U$ and $K\vert U$ are homothetic. As they have the same volume this is equivalent to their being translates of each other, in which case $K\vert U$ is centrally symmetric. Hence $\beta\le 1$ with equality if and only if $K\vert U$ is centrally symmetric for all linear subspaces $U\in \mathbb{G}(n,k)$. Since $k\ge 2$, this is the case if and only if $K$ is centrally symmetric (cf.~\cite[Thm.~3.1.3]{Gardnerbook}). So if $\beta=1$, then $K$ and $K_0$ must be homothetic. In the following, we assume that $\beta\in (0,1)$. This will lead to a contradiction and thus prove the theorem. We write $h,h_0$ for the support functions of $K,K_0$. Here and in the following, ``almost all'' or ``almost every'' refers to the natural Haar probability measure on ${\mathbb S}^{n-1}$. Moreover a linear subspace ``$E$'' as an upper index indicates that the corresponding functional or measure is considered with respect to $E$ as the surrounding space. By assumption there is a Borel subset $P\subseteq {\mathbb S}^{n-1}$ with positive measure such that for all $u\in P$ all the radii of curvature of $K_0$ in the direction $u$ exist and are positive. As $K_0$ is symmetric we can assume that $u\in P$ if and only if $-u\in P$. 
Let $N$ be the set of points $u\in {\mathbb S}^{n-1}$ where the principal radii of curvature of $K$ do not exist. Since $N$ is the set of points where the Alexandrov second derivative of $h$ does not exist, it is a set of measure zero. By replacing $P$ by $P\smallsetminus(N\cup (-N))$ we can assume that the radii of curvature of both $K_0$ and $K$ exist at all points of $P$. As both $N$ and $-N$ have measure zero this set will still have positive measure. Let $u\in \mathbb{S}^{n-1}$ be such that $h$ and $h_0$ are second order differentiable at $u$ and at $-u$ and that the radii of curvature of $K_0$ at $u$ are positive. This is true of all points $u\in P$, which is not empty as it has positive measure. Let $E\in \mathbb{G}(n,k+1)$ be such that $u\in E$. Then the assumption implies that also $$ \pi^E_k(K\vert E)=\beta\, \pi^E_k(K_0\vert E). $$ Hence we conclude as in \cite{HH:NPsmooth} that $$ S_k^E(K\vert E,\cdot)+ S_k^E(K^*\vert E,\cdot)=2\beta\, S_k^E(K_0\vert E,\cdot). $$ Since $h(K\vert E,\cdot)=h_K\vert E$ and $h(K_0\vert E,\cdot)=h_{K_0}\vert E$ are second order differentiable at $u$ and at $-u$ with respect to $E$, Lemma \ref{RND} applied with respect to the subspace $E$ implies that \begin{multline*} \det\left(d^2h_{K\vert E}(u)\vert E\cap u^\perp\right)+ \det\left(d^2h_{K^*\vert E}(u)\vert E\cap u^\perp\right)\\ =2\beta \, \det\left(d^2h_{K_0\vert E}(u)\vert E\cap u^\perp\right). \end{multline*} Since $h$ and $h_0$ are second order differentiable at $u$ and at $-u$, the linear maps $$ L(h)(u)\colon T_u\mathbb{S}^{n-1}\to T_u\mathbb{S}^{n-1}, \quad v\mapsto d^2h(u)(v), $$ $$ L(h_0)(u)\colon T_u\mathbb{S}^{n-1}\to T_u\mathbb{S}^{n-1}, \quad v\mapsto d^2h_0(u)(v), $$ are well defined and positive semidefinite. Since the radii of curvature of $K_0$ at $u$ are positive, we can define $$ L_{h_0}(h)(u):=L(h_0)(u)^{-1/2}\circ L(h)(u)\circ L(h_0)(u)^{-1/2} $$ as in \cite{HH:NPsmooth} in the smooth case. In this situation, the arguments in \cite{HH:NPsmooth} can be repeated to yield that \begin{equation}\label{star2} \begin{aligned} L_{h_0}(h)(u) + L_{h_0}(h)(-u) =&\, 2\,{\rm id}\\ \wedge^{k}L_{h_0}(h)(u) + \wedge^{k}L_{h_0}(h)(-u) =&\, 2\beta\, \wedge^{k} {\rm id}, \end{aligned} \end{equation} where ${\rm id}$ is the identity map on $T_u\mathbb{S}^{n-1}$. Lemma 3.4 in \cite{HH:NPsmooth} shows that $L_{h_0}(h)(u)$ and $L_{h_0}(h)(-u)$ have a common orthonormal basis of eigenvectors $e_1,\ldots,e_{n-1}$, with corresponding eigenvalues (relative principal radii of curvature) $x_1,\ldots,x_{n-1}$ at $u$ and with eigenvalues $y_1,\ldots,y_{n-1}$ at $-u$. After a change of notation (if necessary), we can assume that $0\le x_1\le x_2\le \dots\le x_{n-1}$. By \eqref{star2} we thus obtain \begin{equation}\label{3.2s} x_i+y_i=2\qquad\text{and}\qquad x_I+y_I=2\beta \end{equation} for $ i=1,\ldots,n-1$ and $I\subset\{1,\ldots,n-1\}$ with $|I|=k$. {\bf Proof of Theorem \ref{ThmHHnewgen} when $2\le k < (n+1)/2$.}\ From \eqref{3.2s} and Lemma~\ref{alg2} we conclude that there is some $\ell\in\{0,\ldots,n-1\}$ such that $$ x_1=\dots=x_{\ell}<x_{\ell+1}=\dots=x_{n-1} \qquad\text{and}\qquad y_1=\dots=y_{\ell}>y_{\ell+1}=\dots=y_{n-1}. $$ (a) If $k\le \ell$, then $$ x_1+y_1=2\qquad\text{and}\qquad x_1^k+y_1^k=2\beta . $$ Hence $$ 1=\left(\frac{x_1+y_1}{2}\right)^k\le \frac{x_1^k+y_1^k}{2}=\beta, $$ contradicting the assumption that $\beta<1$. (b) Let $k>\ell$. Since $k<(n+1)/2$ we have $2k<n+1$ or $k<n+1-k$. Hence $k\le n-k<n-\ell$, and thus $k\le n-1-\ell$. 
But then $$ x_{\ell+1}+y_{\ell +1}=2\qquad\text{and}\qquad x_{\ell +1}^k+y_{\ell +1}^k=2\beta , $$ and we arrive at a contradiction as before. This proves Theorem \ref{ThmHHnewgen} when $2\le k < (n+1)/2$\qed {\bf Proof of Theorem \ref{ThmHHnewgen} when $k=3, n=5$.} In this case we are assuming that $K_0$ has positive radii of curvature at almost all points of ${\mathbb S}^{n-1}$. As $h$ has Alexandrov second derivatives at almost all points, for almost all $u\in{\mathbb S}^{n-1}$ the radii of curvature of $K$ exist at both $u$ and $-u$ and at these unit vectors $K_0$ has positive radii of curvature. Recall that $x_1\le\dots\le x_4$ are the eigenvalues of $L_{h_0}(h)(u)$. We distinguish three cases each of which will lead to a contradiction. (a) $x_1\neq x_2$. Then Lemma \ref{alg2} yields that $x_1<x_2=x_3=x_4$ and therefore also $y_2=y_3=y_4$. Hence $$ x_2^3+y_2^3=2\beta\qquad\text{and}\qquad x_2+y_2=2, $$ and thus $$ 1=\left(\frac{x_2+y_2}{2}\right)^3\le \frac{x_2^3+y_2^3}{2}=\beta, $$ contradicting that $\beta<1$. So this case can not arise. (b) $x_1=x_2$ and $x_1=x_3$, i.e.\ $x_1=x_2=x_3$. Then also $y_1=y_2=y_3$, and we get $$ x_1^3+y_1^3=2\beta\qquad\text{and}\qquad x_1+y_1=2, $$ which, as before, leads to a contradiction and thus this case can not arise. (c) $x_1=x_2$ and $x_1\neq x_3$, i.e.\ $x_1=x_2<x_3= x_4$ by Lemma \ref{alg2}. Since $x_1\neq x_3$, Lemma \ref{alg3} implies that \begin{equation}\label{i} x_2x_4=\beta=y_2y_4. \end{equation} In addition, we have \begin{equation}\label{ii} x_2+y_2=2=x_4+y_4. \end{equation} We show that these equations determine $x_2,x_4,y_2,y_4$ as functions of $\beta$. Substituting \eqref{i} into \eqref{ii}, we get $$ \frac{\beta}{x_4}+y_2=2,\qquad x_4+\frac{\beta}{y_2}=2. $$ Combining these two equations, we arrive at $$ y_2+\frac{\beta}{2-\frac{\beta}{y_2}}=2, $$ where we used that $x_4=2-\frac{\beta}{y_2}\neq 0$. This equation for $y_2$ can be rewritten as $$ y_2^2-2y_2+\beta=0. $$ Hence, we find that (recall that $0<\beta<1$) $$ y_2=1\pm \sqrt{1-\beta}. $$ Consequently, $$ x_2=2-y_2=1\mp\sqrt{1-\beta}. $$ From \eqref{i}, we also get $$ x_4=\frac{\beta}{x_2}=\frac{\beta}{1\mp\sqrt{1-\beta}}=1\pm \sqrt{1-\beta}, $$ and finally again by \eqref{i} $$ y_4=\frac{\beta}{y_2}=\frac{\beta}{1\pm\sqrt{1-\beta}}=1\mp \sqrt{1-\beta}. $$ Since $x_1=x_2< x_3=x_4$, this shows that \begin{equation}\label{radcurv} x_1=x_2=1-\sqrt{1-\beta},\qquad x_3=x_4=1+\sqrt{1-\beta}. \end{equation} By assumption the surface area measure $S_{4}(K_0,\cdot)$ of $K_0$ is absolutely continuous with density function $u\mapsto \det(d^2h_0(u)\vert u^\perp)$. Since $K+K^*=2K_0$, the non-negativity of the mixed surface area measures $S(K[i],K^*[4-i],\cdot)$ and the multilinearity of the surface area measures yields that \begin{align*} S_{4}(K,\cdot)\le\,&\sum_{i=0}^{4}\binom{4}{i}S(K[i],K^*[4-i],\cdot)\\ =\,&S_{4}(K+K^*,\cdot)=2^{4}\, S_{4}(K_0,\cdot). \end{align*} This implies that $S_{4}(K,\cdot)$ is absolutely continuous as well, with density function $ u\mapsto \det(d^2h(u)\vert u^\perp)$. Now observe that the cases (a) and (b) have already been excluded and therefore the present case (c) is the only remaining one. Hence, using the definition of $L_{h_0}(h)(u)$, $$ \frac{\det(d^2h(u)\vert u^\perp)}{\det(d^2h_0(u)\vert u^\perp)} =\det(L_{h_0}(h)(u))=x_1x_2x_3x_4 =\beta^2, $$ for almost all $u\in \mathbb{S}^{4}$. Thus we deduce that $$ S_{4}(K,\cdot)=\beta^2\, S_{4}(K_0,\cdot). 
$$ Minkowski's uniqueness theorem now implies that $K$ and $K_0$ are homothetic, hence $K$ is centrally symmetric. Symmetric convex bodies with the same width function are translates of each other. But then again $\beta=1$, a contradiction. \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \end{document}
arXiv
Ellipse Formula
In geometry, an ellipse is described as a curve on a plane that surrounds two focal points such that the sum of the distances to the two focal points is constant for every point on the curve. In the following figure, F1 and F2 are called the foci of the ellipse. An ellipse has two axes: the major axis and the minor axis. The longest chord of the ellipse is the major axis. The chord perpendicular to the major axis is the minor axis, which bisects the major axis at the center.
\[\large Area\;of\;the\;Ellipse=\pi r_{1}r_{2}\]
\[\large Perimeter\;of\;the\;Ellipse\approx 2\pi \sqrt{\frac{r_{1}^{2}+r_{2}^{2}}{2}}\]
Here r1 is the semi-major axis of the ellipse and r2 is the semi-minor axis of the ellipse. Unlike the area, the perimeter of an ellipse has no simple closed form; the expression above is a commonly used approximation.
Question 1: Find the area and perimeter of an ellipse whose semi-major axis is 10 cm and semi-minor axis is 5 cm.
Solution:
Semi-major axis of the ellipse = r1 = 10 cm
Semi-minor axis of the ellipse = r2 = 5 cm
Area of the ellipse = πr1r2 = π $\times$ 10 $\times$ 5 cm2 ≈ 157 cm2
Perimeter of the ellipse ≈ 2π $\sqrt{\frac{r_{1}^{2}+r_{2}^{2}}{2}}$ = 2π $\sqrt{\frac{10^{2}+5^{2}}{2}}$ cm = 2π $\sqrt{\frac{125}{2}}$ cm ≈ 49.67 cm
More topics in Ellipse Formula: Volume of an Ellipsoid Formula
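A quick way to check such calculations is a two-line helper; this sketch (not part of the original formula page) uses the same approximation for the perimeter as given above:

# Area (exact) and perimeter (approximate) of an ellipse with semi-axes r1, r2
ellipse_area      <- function(r1, r2) pi * r1 * r2
ellipse_perimeter <- function(r1, r2) 2 * pi * sqrt((r1^2 + r2^2) / 2)

ellipse_area(10, 5)       # ~157.08 cm^2
ellipse_perimeter(10, 5)  # ~49.67 cm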
CommonCrawl
Spatiotemporal epidemiology of cryptosporidiosis in the Republic of Ireland, 2008–2017: development of a space–time "cluster recurrence" index
M. Boudou1, E. Cleary1, C. ÓhAiseadha2, P. Garvey3, P. McKeown3, J. O'Dwyer4,5 & Paul Hynds1,5
Ireland frequently reports the highest annual Crude Incidence Rates (CIRs) of cryptosporidiosis in the EU, with national CIRs up to ten times the EU average. Accordingly, the current study sought to examine the spatiotemporal trends associated with this potentially severe protozoan infection. Overall, 4509 cases of infection from January 2008 to December 2017 were geo-referenced to a Census Small Area (SA), with an ensemble of geo-statistical approaches including seasonal decomposition, Local Moran's I, and space–time scanning used to elucidate spatiotemporal patterns of infection. One or more confirmed cases were notified in 3413 of 18,641 Census SAs (18.3%), with highest case numbers occurring in the 0–5-year range (n = 2672, 59.3%). Sporadic cases were more likely male (OR 1.4) and rural (OR 2.4), with outbreak-related cases more likely female (OR 1.4) and urban (OR 1.5). Altogether, 55 space–time clusters (≥ 10 confirmed cases) of sporadic infection were detected, with three "high recurrence" regions identified; no large urban conurbations were present within recurrent clusters. Spatiotemporal analysis represents an important indicator of infection patterns, enabling targeted epidemiological intervention and surveillance. Presented results may also be used to further understand the sources, pathways, receptors, and thus mechanisms of cryptosporidiosis in Ireland.
Cryptosporidium is an oocyst-forming protozoan parasite first identified as a causative agent of gastrointestinal infection in the mid-1970s [1]. Cryptosporidiosis is associated with a wide range of symptoms including watery diarrhoea, weight loss, vomiting, abdominal pain, nausea and fever [2]. In the most severe cases, infection may lead to acute dehydration and death, particularly among immuno-compromised individuals, including children aged ≤ 5 years, the elderly (≥ 65) and patients with underlying health conditions (i.e., immunosuppressed) [3]. To date, approximately 40 genetically distinct Cryptosporidium species have been identified, with C. parvum and C. hominis the most frequently confirmed species among cases of human infection [4]. Transmission typically occurs via the faecal-oral route through consumption of contaminated water or food, in addition to direct human-animal contact and exposure to contaminated environments including recreational water [2, 5,6,7]. A previous experimental study of healthy adult volunteers indicated that ingestion of 30 oocysts is sufficient to initiate infection, with a significantly lower threshold dose (≈ 10 oocysts) associated with specific C. hominis and C. parvum strains [5]. Cryptosporidiosis occurs in both rural and urban environments, with several studies indicating that C. hominis is more frequent in urban areas (due to increased rates of person-to-person transmission) while C. parvum predominates in rural areas [7]. Environmental transmission in rural areas represents a particular concern due to the ability of oocysts to survive for prolonged periods in the natural environment (e.g., soil, water) due to temperature buffering and high humidity [8]. Human cryptosporidiosis became a notifiable disease in Ireland on January 1st 2004 under the Infectious Diseases (Amendment) (No. 3) Regulations 2003 (S.I. 707 of 2003).
As such, all medical practitioners are required to notify the regional Medical Officer of Health (MOH)/Director of Public Health of all confirmed cases. According to the most recent European Centre for Disease Prevention and Control (ECDC) report, Ireland consistently reports the highest Crude Incidence Rates (CIR) of confirmed cryptosporidiosis infection in the European Union [9]. For example, during 2017 Ireland reported a cryptosporidiosis CIR of 12.0/100,000 residents, compared with an EU mean CIR of 3.2/100,000 (including 15 member states with national notification rates < 1/100,000) [10]. Nationally, cryptosporidiosis represents the most frequently reported protozoan infection, with CIRs having remained relatively consistent over the past decade, ranging from 11.0/100,000 in 2004 to 13.2/100,000 in 2018 [10]. Unlike other gastroenteric infections (e.g., giardiasis), cryptosporidiosis in Ireland is primarily associated with domestic (indigenous) exposure and transmission. For example, 81% (556/629) of confirmed cases during 2018 were identified as sporadic domestic cases, 12% (n = 73) were associated with a recognised cluster/outbreak, while travel-related cases accounted for 7% (n = 43) of the total case number [10]. The largest Irish cryptosporidiosis outbreak to date was attributed to C. hominis and occurred in the west of Ireland during March/April 2007. This was concentrated around Galway city, with at least 242 confirmed cases caused by municipal wastewater ingress to Lough Corrib, a lake employed for public water supply in the region [11]. The economic and human health burden accruing from events like the "Galway outbreak", recently estimated at approximately €19 million [11] coupled with the high baseline incidence of cryptosporidiosis, create a need for a greater understanding of the sources and transmission routes for the disease. While several studies have examined the likely routes of exposure to Cryptosporidium spp. in Ireland [e.g., 12,13], few epidemiological investigations of the spatiotemporal dynamics of confirmed cryptosporidiosis infection has been undertaken. This represents a significant knowledge gap with respect to understanding pathogen sources and pathways, particularly in light of the endemic nature of cryptosporidiosis in Ireland. An improved mechanistic understanding of infection occurrence would enable earlier detection, enhanced surveillance, and more focused public-health and healthcare policies. The current study sought to explore the temporal and spatial patterns of domestically acquired (sporadic and outbreak-related) cases of cryptosporidiosis in Ireland via identification of infection clustering. To accurately describe the epidemiological patterns of this important zoonotic parasite, the study integrated several modelling approaches including seasonal decomposition, spatial autocorrelation (Anselin Local Moran's I), hot-spot analysis (Getis-Ord Gi*) and space–time scanning with a large georeferenced dataset of confirmed cryptosporidiosis cases (n = 4509) over a 10-year period (2008–2017). To the authors' knowledge, this represents the first spatio-temporal study of its kind in Ireland, which as previously described, exhibits the highest national cryptosporidiosis infection CIRs in the EU. Irreversibly anonymised cases of cryptosporidiosis reported by regional departments of public health between 1st January 2008 and 31st December 2017 were provided from the national Computerised Infectious Disease Reporting (CIDR) database. 
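For orientation, the crude incidence rates quoted above are simply annual confirmed cases per 100,000 resident population; a minimal R sketch, with purely notional figures rather than the study's actual counts, is:

# Crude incidence rate (CIR): confirmed cases per 100,000 population
cir <- function(cases, population) cases / population * 1e5
cir(580, 4.8e6)   # notional example: ~12.1 per 100,000, of the order reported for Ireland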
Data prior to 2008 were excluded to avoid potential bias being introduced by the large number of cases reported during the 2007 Galway outbreak. All confirmed cases, including patient-specific data fields [age, gender, date of reporting, and case outcome (severity)] were geo-spatially linked to the geographical centroid of their associated Census Small Area (SA) (the smallest administrative unit currently employed for census reporting in Ireland) using the Health Service Executive (HSE) Health Intelligence Unit's geocoding tools. Sporadic, outbreak-related, and travel-related (non-outbreak) cases were defined and discretized for analyses. Outbreak-related cases are defined as confirmed cases with an attached "CIDR outbreak ID", used for identifying cases associated with a recognised infection outbreak or cluster. Travel-related cases are specifically categorised for purposes of analytical exclusion or adjustment (i.e., national reporting) and defined as any patient self-reporting travel outside of Ireland within the likely incubation period. Sporadic cases were subsequently delineated via exclusion of the two previous categories from the total case dataset. All case data and analyses were granted full research ethics approval by the Royal College of Physicians of Ireland Research Ethics Committee (RCPI RECSAF_84). As cryptosporidiosis in Ireland is most prevalent among children ≤ 5 years of age and in rural areas [10], specific analyses were undertaken with respect to case age (≤ 5 years, ≥ 6 years) and land-use classification (rural/urban). The Central Statistics Office (CSO) Census of 2011 and 2016 were used to extract Electoral Division (ED)- and SA-specific human population counts, permitting calculation of cryptosporidiosis incidence rates at both spatial (administrative unit) scales. The CSO's 14 urban/rural categories were used to classify each spatial unit as rural or urban. Population density and settlement size were employed to verify all classifications. For reporting purposes within the current article, Ireland has been delineated into eight distinct geographical zones (Fig. 1). Zone NE (corresponding to Northern Ireland) is located outside Irish public health legislative jurisdiction and was not included for analyses. Pearson's χ2 test with Yates' continuity correction and Fisher's exact test (where any cell had < 5 cases) were used to test for association between categorical case classifications.
Geographical zonation of the Republic of Ireland
Seasonal decomposition
Seasonal decomposition was carried out using Seasonal and Trend (STL) decomposition via the LOESS (Locally Estimated Scatterplot Smoothing) method on different subsets of the case dataset, e.g., sporadic cases, outbreak-related cases, cases in children ≤ 5 years of age, cases in people ≥ 6 years of age, travel-related cases and cases in urban vs. rural areas. The monthly incidence of infection was calculated for each case sub-category.
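The case classification, contingency-table testing and monthly aggregation described above can be sketched in R (the software noted later in this section for data handling); the data frame cases and its column names (outbreak_id, travel_abroad, setting, date_notified) are hypothetical stand-ins for the CIDR extract, not the authors' actual variables, and the STL call anticipates the additive decomposition formalised in the next paragraph:

# Hedged sketch only: 'cases' and its columns are assumed, illustrative names
library(dplyr)

cases <- cases %>%
  mutate(case_type = case_when(
    !is.na(outbreak_id) ~ "outbreak",   # attached CIDR outbreak ID
    travel_abroad       ~ "travel",     # self-reported travel within incubation period
    TRUE                ~ "sporadic"    # remainder after excluding the two categories
  ))

# Association between case type and CSO urban/rural classification:
# Pearson chi-square with Yates' continuity correction, or Fisher's exact
# test where any cell of the 2 x 2 table holds fewer than 5 cases
tab <- table(cases$case_type == "sporadic", cases$setting)
if (any(tab < 5)) fisher.test(tab) else chisq.test(tab, correct = TRUE)

# Unadjusted odds ratio from the same 2 x 2 table
(tab[1, 1] * tab[2, 2]) / (tab[1, 2] * tab[2, 1])

# Monthly case counts for one sub-category (assumes a Date-class column and
# that every month of 2008-2017 is represented in the series)
sporadic <- filter(cases, case_type == "sporadic")
monthly  <- table(format(sporadic$date_notified, "%Y-%m"))
y <- ts(as.numeric(monthly), start = c(2008, 1), frequency = 12)

# Additive STL decomposition: incidence = trend + seasonal + remainder
fit <- stl(y, s.window = "periodic")
plot(fit)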
The STL method decomposes incidence data (Yv) time-series into three separate component series: seasonal variation (Sv), overall trend over time (Tv) and residuals (Rv), whereby the incidence data is equal to the sum of all three trends denoted by [14]: $$Y_{v} = \, T_{v} + \, S_{v} + \, R_{v}$$ An additive seasonal decomposition formula was used, as opposed to multiplicative, to remove seasonality (Sv) and trend (Tv) from the overall time-series (Yv) and filter random variation from long-term trends given by the residuals (Rv) so that: Residuals (Rv) = Time series (Yv) − Seasonal trend (Sv) − Trend (Tv). Spatial autocorrelation The total number of sporadic cryptosporidiosis cases, sporadic cases in children ≤ 5, people aged ≥ 6 years, and outbreak-related cases were mapped to individual SA centroids. Age-adjusted infection rates within each sub-category were calculated at both SA and ED level, based on 2011/2016 census data. Outbreak-related infection rates were calculated as a proportion of overall cases within each SA and ED. Data aggregation and infection rate calculation were carried out in R statistical software version 3.6.0 (R Foundation for Statistical Computing, Vienna, Austria). Anselin Local Moran's I was employed for spatial autocorrelation. Anselin Local Moran's I focuses on the relationship of individual features with nearby features and assigns clusters based on variance assigned to individual spatial units, thus negating the assumption underlying the Global Moran's I statistic that a single statistic appropriately accounts for clustering and dispersion of the spatial predominance of infection across the entire study area [15]. The Anselin Local Moran's I statistic is calculated by generating a neighbour list of spatially proximal SAs or EDs and calculating spatial autocorrelation of similar infection rates as a function of distance bands, thus identifying localised clusters which are correlated based on the variance assigned to all individual spatial units [15, 16]. Clusters of high-high (H–H) and low–low (L–L) infection, and outliers of high–low (H–L) and low high (L–H) infection are subsequently identified. Local Moran's I statistics were calculated using the cluster and analysis tool in ArcGIS version 10.6 (ESRI, Redlands, California) which generates a Moran's I statistic, z-score and pseudo p-value for each spatial unit. A positive I value is indicative of spatial units with a high or low infection rate, surrounded by SAs or EDs with similarly high or low infection rates. Conversely, a negative I value indicates outliers of infection where an SA or ED with a high rate of infection is surrounded by SAs or EDs with low rates of infection, and vice versa [15]. Hot-spot analysis (Getis-Ord GI*) Hot-spot analysis was carried out for all sporadic cases, sporadic cases among children ≤ 5, cases ≥ 6 years, and outbreak cases by calculating spatially specific Getis-Ord GI* statistics in ArcGIS. The Getis-Ord Gi* statistic is calculated for each feature (SA or ED) in the dataset, generating a unit-specific z-score and p-value, used to statistically determine significant spatial clustering of features in the dataset [17]. Statistically significant clusters are clusters which have high values surrounded by SAs or EDs with similarly high values, and vice versa [18]. Hot- and cold-spots of infection are determined based on the spatial proximity of high/low values statistically similar to neighbouring features. 
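The Local Moran's I workflow described above was run in ArcGIS by the authors; as an open-source illustration only, an equivalent sketch with the spdep package might look as follows, where sa is a hypothetical sf polygon layer of Small Areas carrying an age-adjusted rate column (a sketch under those assumptions, not the study's implementation):

# Illustrative open-source equivalent of the ArcGIS cluster/outlier workflow
library(sf)
library(spdep)

nb <- poly2nb(sa, queen = TRUE)                    # contiguity neighbour list
lw <- nb2listw(nb, style = "W", zero.policy = TRUE)

# Anselin Local Moran's I: statistic, expectation, variance, z-score, pseudo p-value
lmi <- localmoran(sa$rate, lw, zero.policy = TRUE)

# Classify significant units into H-H / L-L clusters and H-L / L-H outliers
x_c   <- sa$rate - mean(sa$rate)
lag_c <- lag.listw(lw, sa$rate, zero.policy = TRUE) - mean(sa$rate)
sig   <- lmi[, 5] < 0.05
sa$moran_class <- ifelse(!sig, "not significant",
                  ifelse(x_c > 0 & lag_c > 0, "H-H",
                  ifelse(x_c < 0 & lag_c < 0, "L-L",
                  ifelse(x_c > 0 & lag_c < 0, "H-L", "L-H"))))

# The Getis-Ord Gi* hot-spot statistic also used in the study can be obtained
# analogously, with the focal unit included in its own neighbour set:
# gi <- localG(sa$rate, nb2listw(include.self(nb), style = "W"))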
Compared with Anselin local Moran's I statistics, clusters based on the Getis-Ord GI* statistic are determined by comparing the sum of local features and their neighbours with the overall sum of all features. Getis-Ord GI* statistics were used to examine whether differing statistical analyses of spatial clustering of infection between spatial units yield varying results. The Getis-Ord GI* statistic is given as [18]: $$G_{i}^{*} = \frac{{\sum\limits_{j = 1}^{n} {w_{i,j} x_{j} - \overline{X}\sum\limits_{j = 1}^{n} {w_{i,j} } } }}{{S\sqrt {\frac{{\left[ {n\sum\limits_{j = 1}^{n} {w_{i,j}^{2} } - \left( {\sum\limits_{j = 1}^{n} {w_{i,j} } } \right)^{2} } \right]}}{n - 1}} }}$$ where χj is the attribute value for feature j, wi,j is the spatial weight between feature i and j, n is equal to the total number of features and: $$\begin{gathered} \overline{X} = \frac{{\sum\limits_{j = 1}^{n} {x_{j} } }}{n} \hfill \\ S = \sqrt {\frac{{\sum\limits_{j = 1}^{n} {x_{j}^{2} } }}{n} - \left( {\overline{X}} \right)^{2} } \hfill \\ \end{gathered}$$ Space–time scanning Space–time scanning was undertaken using SaTScan v9.6 software (Kulldorf and Information Management Services, Inc., MA, USA). SaTScan detects spatial clusters of areal units (i.e., SAs/EDs) by imposing an infinite number of overlapping circular (or elliptical) scanning windows of predetermined sizes across a defined geographic area [19]. Temporal clusters were simultaneously assessed using the scan statistic, which includes an infinite number of overlapping cylindrical windows defined by a base (spatial scan statistic) and height (temporal scan statistic) [20]. A discrete Poisson model was employed for space–time scanning to account for the high-resolution spatial scale (n = 18,488 SAs), resulting in high zero/one inflation (i.e., high numbers of SAs with 0 or 1 case). A case threshold of 10 cases (minimum) per cluster was selected to ensure that identified clusters were significant i.e., avoidance of single household clusters. Similarly, a maximum of 10% of population at risk (PAR) was employed concurrently with a maximum cluster radius of 50 kms to account for low case numbers within individual Small Areas. Data were aggregated at a monthly scale, with maximum cluster duration set to 3 months to account for known seasonal variation of cryptosporidiosis in Ireland. SaTScan analyses produce two primary outputs; a spatial cluster location(s) (cluster centroid and diameter) and descriptive cluster data (start/end dates, total population, number of observed-expected cases, relative risk, and p-value). The authors have developed a novel mapping approach for representing SaTScan results, whereby all significant clusters (p < 0.05) are selected and mapped in ArcGIS (ArcGIS 10.6), with binary cluster location [i.e., Cluster Membership (0/1)] for annual space–time scans summed at the CSO SA scale. The final mapping provides a "cluster recurrence" index ranging from 0 to 10 (i.e., annual absence/presence of cluster over 10-year study period). Occurrence of cryptosporidiosis infection in the Republic of Ireland (2008–2017) The dataset comprised 4,633 confirmed cases of cryptosporidiosis from 2008 to 2017, of which 4509 cases (97%) were successfully geo-linked to a distinct spatial unit (SA/ED centroid). Overall, 1964 Electoral Divisions (58% of 3,409), 3413 Small Areas (18.3% of 18,488) and all (26/26) Irish counties were associated with at least one confirmed case. Most cases were associated with children ≤ 5 years (n = 2672, 59.3%) (Fig. 
2), with a slightly higher incidence rate reported among males (53%) (Table 1). Age and gender distributions of cryptosporidiosis cases in the Republic of Ireland (2008–2017) Table 1 Pearson χ2 test results for cryptosporidiosis cases in the Republic of Ireland, delineated by case type (sporadic, outbreak-related, travel), and age, gender and CSO classification As shown (Table 1), sporadic cases were statistically more likely to be male (OR 1.4, 95% CI 1.2, 1.6), ≤ 5 years of age (OR 1.5, 95% CI 1.3, 1.8), and associated with a categorically rural area (OR 2.4, 95% CI 2, 28). Conversely, outbreak-related cases were associated with females (OR 1.4, 95% CI 1.2, 1.7) and urban areas (OR 1.5, 95% CI 1.3, 1.9). Travel-related cases were more likely to be female (OR 1.3, 95% CI 1, 1.6), > 5 years of age (OR 2.4, 95% CI 1.5, 3.1), and resident in an urban conurbation (OR 3.6, 95% CI 2.8, 4.6). Temporal cumulative incidence rates (Fig. 3) indicate a marked annual peak in late spring (n = 1812), with a maximum incidence rate occuring during April (n = 916). Lowest incidence rates were recorded during winter months (n = 493) with the lowest incidence rate being recorded in January (n = 136). Case numbers peaked in 2017 (n = 584). Temporal distribution of cryptosporidiosis cases in Ireland (2008–2017) Seasonal decomposition of sporadic infection over the ten-year study period indicates a clear seasonal peak in mid-spring (April) annually (Fig. 4). Residual trends show a generally consistent annual and long-term trend with a notable peak of infection in April 2016 (Residual: + 56). Outbreak cases exhibit a similar seasonal trend to that of sporadic cases with annual peaks occurring in April, followed by a secondary peak in September. The overall long-term trend in outbreak cases displayed a marked increase during 2011, continuing until 2014. Residuals calculated for outbreak cases point to more variation in 10-year trends with peaks observed during the late winter/early spring months (January to March) of 2011, 2012 and 2017 while late spring/early summer peaks (April to June) were observed in 2013. A peak in outbreak-associated cases was also observed during the winter months (October to November) of 2013. Seasonal decomposition of cryptosporidiosis in the Republic of Ireland (2008–2017), delineated by sporadic (left) and outbreak-related cases (right) There was an increasing trend in the number/rate of travel-associated cases with an annual peak occurring in August/September (Fig. 5). The long-term trends varied significantly between delineated age categories with considerably more variation noted among children ≤ 5 years of age (Fig. 6), albeit annual peaks were observed among both age cohorts during April of each year. Residuals again point to a large transmission peak (Residuals: + 22, + 34) within both sporadic and outbreak cohorts during April 2016. Seasonal decomposition of travel-related cryptosporidiosis in the Republic of Ireland (2008–2017) Seasonal decomposition of cryptosporidiosis in the Republic of Ireland (2008–2017), delineated by epidemiologically relevant age sub-categories Annual decomposed patterns of infection peaked in April of each year followed by a significantly smaller peak during September in both urban and rural areas (Fig. 7). Calculated residuals point to an infection peak in April 2016 in both urban (+ 18) and rural (+ 38) areas, consistent with trends observed among sporadic and age-delineated infection peaks. 
Seasonal decomposition of cryptosporidiosis in the Republic of Ireland (2008–2017), delineated by CSO urban/rural classification A significant H–H cluster of sporadic cases was observed in the midland (M) region, with a large L–L cluster identified along the eastern seaboard (E), surrounding the greater Dublin urban area and commuter belt (Fig. 8a). L–L clusters were also observed in the S and SE regions, spatially proximal to the urban conurbations of Cork, Waterford, and Limerick cities. Smaller H–H clusters of infection were observed in the S, SE and W regions of the country, consistent with an overarching urban/rural pattern. Notable L–L outbreak-related case clusters were observed in the east of the country surrounding Dublin city and in the south surrounding Limerick city (Fig. 8b). Few H–H clusters were associated with outbreak-related cases, however H–H cases identified in the midland region (M) were surrounded by L–H clusters, thus indicating potential neighbouring outliers. H–H and L–L clusters of infection in children ≤ 5 followed a broadly similar spatial pattern to that observed within the sporadic case cohort, due to the large proportion of cases from this cohort comprising the total dataset (Fig. 8c). A large H–H cluster was observed in the M region, with smaller H–H clusters again identified in S, SE and W regions. L–L clusters of infection were also consistent with sporadic case clusters and typically identified around urban areas in the S and SE of the country. The spatial predominance of infection cold spots (L–L) among people age > 5 (Fig. 8d) followed a similar pattern of infection cold spots among sporadic cases and paediatric (≤ 5 years) cases (Fig. 8c). However, the spatial predominance of infection hot spots among this cohort was markedly different to sporadic and ≤ 5-year hot spots, with smaller and more spatially dispersed hot spots identified, primarily in the midlands (region M) and SW regions. a Sporadic cryptosporidiosis case clusters and outliers determined by Anselin Local Moran's I clusters b Outbreak-related cryptosporidiosis case clusters and outliers determined by Anselin Local Moran's I clusters c Sporadic cryptosporidiosis case clusters and outliers among children aged 5 years and younger determined by Anselin Local Moran's I clusters d Sporadic cryptosporidiosis case clusters and outliers among the cohort of people age 6 years and older determined by Anselin Local Moran's I clusters Getis-Ord GI* analyses identified notable hot spots among sporadic cases in the midlands (M), east and north-east of Galway city, with smaller hot spots also evident in the midlands, south and south-east (M, S and SE) (Fig. 9a). Again, a spatially extensive cold spot was identified in the east of the country (E), encompassing the greater Dublin metropolitan urban area, and in the south and south-east (S and SE) around Waterford, Limerick and Cork cities. 
a Sporadic cryptosporidiosis case hot and cold spots determined by Getis-Ord Gi* hot-spot analysis—b Outbreak-related cryptosporidiosis hot and cold spots determined by Getis-Ord Gi* hot-spot analysis—c Sporadic cryptosporidiosis case hot and cold spots among children aged 5 years and younger determined by Getis-Ord Gi* hot-spot analysis—d Sporadic cryptosporidiosis case hot and cold spots among the cohort of people age 6 years and older determined by Getis-Ord Gi* hot-spot analysis The spatial predominance of hot and cold spots among children ≤ 5 years again followed a similar pattern to clustering of infection among all sporadic cases (Fig. 9c). Large hot spots were observed in the midlands and south (M and S), with a previously identified sporadic infection hot spot in the west (W) demonstrating a significantly more pronounced occurrence among the paediatric subpopulation (NE of Galway city). A large cold spot among children ≤ 5 was also observed in the in the greater Dublin area (E), albeit significantly reduced when compared with that observed among all sporadic infections. The spatial distribution of hot and cold spots of infection among people aged > 5 varied (Fig. 9d), with the spatial distribution of hot and cold spots of sporadic infection and infection in children ≤ 5. One hot spot was identified in the SW region, which was not observed using other statistical methods or among other subcategories of infection. Space–Time clustering recurrence and cluster temporality for sporadic cryptosporidiosis cases are presented in Fig. 10, with results of year-on-year space–time scanning presented in Additional file 1: Appendix 1. Annual space-time clusters of Cryptosporidiosis in Ireland from 2008 to 2017; Appendix 2. Space-time clusters of Cryptosporidiosis in Ireland during 2008. As shown (Fig. 10), three primary hot spots were identified: south-west and east of Limerick city (SW, S, SE), and north-east of Galway city (M). Cold spots are persistent along much of the eastern seaboard, and particularly around the larger urban conurbations of greater Dublin and Cork city, in addition to significant areas of the western coastline. The temporal window for space–time clusters mirrors the general seasonal distribution of cryptosporidiosis infection (Sect. 3.2), with peak cluster identification occurring from March to June and peaking in April. Space–time "cluster recurrence" index (0–10) for sporadic cryptosporidiosis cases in the Republic of Ireland, 2008–2017 Significantly lower levels of space–time clustering were found among outbreak-related cases (Fig. 11), with largest hot spots located in western and midland regions (M, W), and a maximum cluster recurrence of 30% (i.e., geographic area included in 3 identified clusters over 10 annual iterations). Two additional space–time clusters were identified to the north-east of Cork city (S) and County Donegal (N). Most (8/9) outbreak-related clusters were observed from March to June, with one cluster occurring during October/November (2013). Cluster index mapping for ≤ 5 year sub-population mirrored that of sporadic cases, with three primary hot spots identified; again, a large area located north-east of Galway city (M), and two "secondary" (i.e., lower cluster recurrence indices) areas located south-west and south-east of Limerick city (SW,S) (Fig. 12). Results for the sub-population > 5 years point to a lower level of clustering, with hot spots located south-west of Limerick city (SW), the Midlands (M) and south-east (SE) (Fig. 13). 
Space–time "cluster recurrence" index (0–10) for outbreak-related cryptosporidiosis cases in the Republic of Ireland, 2008–2017 Space–time "cluster recurrence" index (0–10) for sporadic cryptosporidiosis cases in the Republic of Ireland, 2008–2017 (Delineated by epidemiologically relevant age category—Population ≤ 5 years) Space–time "cluster recurrence" index (0–10) for sporadic cryptosporidiosis cases in the Republic of Ireland, 2008–2017 (Delineated by epidemiologically relevant age category—Population > 5 years) Occurrence of sporadic and outbreak-associated cryptosporidiosis Cryptosporidiosis exhibits a relatively wide geographical distribution in Ireland with 58% and 18.3% of Electoral Divisions and Small Areas associated with at least one confirmed case during the study period, respectively. Crude Incidence Rates (CIRs) of infection indicate a moderately increasing trend, ranging from 9.8/100,000 in 2008 to 12.4/100,000 in 2017 [10]. Most (59.3%) sporadic cases were associated with children ≤ 5 years, which concurs with several previous studies [3, 7]. Within the ≤ 5 years cohort, cases were more frequently associated with male children (OR 1.3873), potentially reflecting the tendency of male children to mount weaker immune responses [21], an enhanced susceptibility to environmental exposures via gender-related outdoor activities [22], or a gender-related bias in healthcare-seeking behaviours [23]. Conversely, female children were statistically associated with outbreak-related cryptosporidiosis, potentially reflecting higher levels of direct contact (and subsequent transmission) between parents/family members and female children [24]. A recent small-scale investigation of the regional epidemiology of cryptosporidiosis in County Cork, Ireland, demonstrated moderately increased infection rates among 20–34-year olds, suggesting likely anthroponotic transmission via caregiver contact with infected children [25]. Geographically, most sporadic cases (65.8%) occurred in categorically rural areas (χ2 = 110.493, p < 0.001; Table 1), where approximately 37.3% of the Irish populace reside [26]. A previous Scottish study by Pollock et al. similarly found C. parvum infection was associated with areas characterised by lower human population density and a higher ratio of farms to humans, both indicators of rurality [27]. While the current study represents the first nationwide study of the spatiotemporal epidemiology of cryptosporidiosis in Ireland, this finding was expected, and likely attributable to increased exposure to sources of Cryptosporidium spp. oocysts in rural areas, including farmyard animals [28], direct exposure to contaminated surface waters [29] and the use of groundwater as a drinking water source [6]. Conversely, urban areas exhibited a significantly higher secondary (OR 1.5383) and travel-related (OR 3.5742) case occurrence, likely indicative of C. hominis infections as opposed to the agriculturally (rural) associated C. parvum, however, as Cryptosporidium spp. is not identified within the Irish disease reporting system, this is somewhat speculative. Seasonal decomposition points to an overall increasing temporal trend over the ten-year study period (Fig. 4), consistent with previously reported trends in the west of Ireland during 2004–2007 [30]. 
Specifically, the annual peak found during April is consistent with previously reported regional peaks (March/April) [30], in addition to those reported in Scotland (April/May) [27], likely associated with agricultural cycles in temperate regions i.e. lambing/calving and manure spreading. While not reported in the current study, seasonal patterns may vary among differing Cryptosporidium species; for example, C. hominis is more prevalent during autumn in the UK and New Zealand (increased travel and school/childcare attendance), whereas C. parvum is more typically encountered during spring in Canada, Ireland and The Netherlands [9]. The secondary peak observed among outbreak-related cases during September (Fig. 4) is consistent with the bimodal peaks observed in C. hominis in Scotland in August and October [27], and may reflect the increase in national/international travel and children returning to childcare/school after summer break. Seasonal decomposition also identified several notable deviations (i.e., residual peaks) from the overarching temporal trend which merit closer investigation, particularly regarding dynamic drivers of exposure/transmission such as extreme weather events [28, 31]. A marked positive residual was identified during April 2016 (Fig. 4), initiating further exploration with respect to dynamic meteorological events, particularly in light of severe flooding experienced across Ireland and the UK [32]. Winter 2015/2016 was characterised by a succession (n = 6) of Atlantic storms across Ireland, resulting in exceptional and widespread flooding with all synoptic weather stations reporting rainfall volumes significantly above their Long-Term Average (LTA) [32]. Recent work by Boudou et al. have shown that excess cases of cryptosporidiosis were widespread during and after the flood period, with areas characterised by the presence of a surface water body exhibiting significantly higher incidence rates (OR 1.363; p < 0.001) [32]. Time-series modelling of the event presented a clear association between rainfall, surface water discharge, groundwater levels and infection incidence, with lagged associations from 16 to 20 weeks particularly strong, thus indicating a link between infection peaks (April 2016) and the flood event which began approximately 18 weeks earlier [32]. Thus, it was concluded that increases in storm water, soil saturation and surface runoff increased pathogen mobility for a significant period, thus exacerbating transmission of cryptosporidiosis both directly (i.e., contaminated 'raw' water and food) and indirectly (i.e., long-term soil saturation) [32]. Similarly, a cryptosporidiosis outbreak which occurred during August 2013 in Halle, Germany, began six weeks after the river Saale inundated the floodplain and parts of the city centre [3], thus emphasising the (lagged) impact of local meteorological conditions on the incidence of infection. Spatial autocorrelation and Hot-Spot analysis Incorporating a spatial dimension into investigations of infectious disease epidemiology is of primary importance considering the spatial variation of environmental exposure such as land use, local climate, and socioeconomic status, particularly in Ireland which has previously been described as "the perfect storm" with regard to potential gastroenteric infection risk factors [33]. Results of Anselin Local Moran's I statistics and the Getis Ord GI* statistic provided relatively similar spatial patterns. 
High incidence (H-Hs) clusters were identified in the Irish Midlands (M), a predominantly rural area with a high level of dependence on pastoral agriculture and "private infrastructure" (e.g., one-off housing with on-site wastewater treatment and domestic water supplies). Several previous studies have documented strong associations between cryptosporidiosis and cattle density [27, 34]. Similarly, a study from central Wisconsin previously found the incidence of endemic diarrhoeal infections significantly higher in areas characterised by elevated septic tank (OR 1.22) and private water supply (OR 6.18) densities among a population-based cohort [35]. Conversely, low incidence (L-L) clusters were primarily located in the vicinity of Ireland's capital (Dublin) and other relatively large cities (Waterford, Cork, Limerick, Galway), thus likely highlighting the protective effect of urban living within the Irish context, where reduced environmental exposure to pathogen sources coupled with reduced pathogen transport (i.e., treated drinking water supply) may reduce the risk of exposure and subsequent infection [36]. Conversely, recent studies have shown rates of cryptosporidiosis are typically higher in urban areas characterised by elevated human population densities, for example Cohen et al. previously reported that higher population density and above average household sizes were associated with increased odds of reported cases of cryptosporidiosis in Massachusetts [37]. Likewise, Greenwood & Reid have found that most cryptosporidiosis clusters identified across Queensland, Australia from 2001 to 2015 centred on major and regional cities [38]. Both geostatistical techniques suggest a disparity exists with respect to outbreak-related clustering over the 10-year study period, as they relate to clustering of sporadic cases (Figs. 8a, b, 9a, b), with outbreak-related clusters occurring in the north Midlands and "border area", regions traditionally characterised by relatively low population densities. This merits further investigation within the context of population age structure, household size and domestic water source, along with close monitoring and surveillance by the relevant Departments of Public Health. Space–Time scanning and cluster recurrence Space–time scan statistics detect temporally-specific clusters characterised by a significantly higher observed case number than expected (e.g., space–time randomness not present), based on calculated baseline incidence rates [20] with the approach employing a 3-dimensional (cylindrical) scanning window comprising both height (time) and space (geographic area) [19]. Over the past decade, space–time scan statistics have been recognised as a powerful tool for endemic disease surveillance and early outbreak detection [39], however to the authors knowledge, this represents the first time it has been applied to infectious disease incidence in Ireland. A total of 69 space–time clusters (≥ 10 confirmed cases) were identified over the 10-year study period, of which 55 (79.7%) were clusters of sporadic infection, ranging from a minimum of 4 (7.3%) during 2017 to a maximum of 7 (12.7%) during 2009. No statistical association was found between annual sporadic and outbreak-related cluster number during the study period, however development of the "cluster recurrence" index (e.g., Figs. 10, 11, 12, 13) permits identification of discernible spatial and temporal patterns defining the formation of clusters across the decade-long period. 
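Conceptually, the "cluster recurrence" index reduces to a simple count: for each area, the number of study years (0–10) in which it fell inside at least one significant space–time cluster. A minimal sketch is given below, under the assumption that annual cluster membership is available as a mapping from year to area identifiers (in practice this would be parsed from the annual scan-statistic output); the identifiers and counts shown are made up for the example.

```python
# Sketch of a 0-10 cluster recurrence index from annual cluster membership.
import pandas as pd

def recurrence_index(clusters_by_year: dict[int, set[str]], area_ids: list[str]) -> pd.Series:
    """For every area, count the years in which it lay inside a significant cluster."""
    counts = {area: 0 for area in area_ids}
    for year, member_areas in clusters_by_year.items():
        for area in member_areas:
            if area in counts:
                counts[area] += 1
    return pd.Series(counts, name="recurrence_0_10")

# Tiny worked example with three areas and three years of (made-up) clusters
clusters = {2008: {"ED001", "ED002"}, 2009: {"ED002"}, 2010: {"ED002", "ED003"}}
print(recurrence_index(clusters, ["ED001", "ED002", "ED003"]))
# ED001 -> 1, ED002 -> 3, ED003 -> 1
```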
Three regions exhibited particularly recurrent space–time clusters of infection, with occurrences during ≥ 8 out of 10 years, namely south-west and east of Limerick city (SW, S, SE), and north-east of Galway city (M), with neither urban conurbation actually located within a high recurrence region. The spatiotemporal frequency of space–time clusters suggests the presence of persistent reservoirs in these areas thus maintaining community and/or transmission pathways [38]. The proximity of large urban centres to each high-recurrence region may potentially reflect relatively narrow transitional zones between urban fabric and populated rural regions i.e., rural commuter belts which remain un-serviced with respect to municipal wastewater treatment and/or drinking supplies. Additionally, all three regions are predominantly underlain by karstified carboniferous limestone aquifers [40] which have previously been associated with the presence of Cryptosporidium spp. in private and small public drinking water supplies [12, 41]. Conversely, the Greater Dublin area, characterised by a large urban commuter belt, spatially extensive consolidated bedrocks and high levels of municipal water and wastewater infrastructure, did not exhibit any space–time clusters over the study period. A significant majority of space–time clusters occurred over the 4-month period May–June, thus mirroring findings from the overall case cohort, and further highlighting the likely association between agricultural cycles and the incidence of infection in temperate regions including Ireland, Scotland and New Zealand [27, 42]. Additionally, Lal et al. have signalled a need to study the effect of spatial and temporal variations in ecological and social risk factors on the incidence of cryptosporidiosis with specific emphasis on the potential for socioeconomic disadvantage to amplify disease risk within populations, e.g., in areas of low educational attainment and lower income levels, which are often associated with rural living [28]. From a public-health surveillance perspective, identification of 55 space–time clusters of sporadic cryptosporidiosis infection over a 10-year period represents a concern, while underscoring the major challenges involved in decreasing the incidence of infection via enhanced surveillance and subsequent intervention. For example, during 2008, a spatially restricted space–time cluster which was identified in the northern Midlands (Cluster 2, Additional file 1: Appendix) was characterised by almost 18 times more cases of infection than would be expected (RR 17.95) over a three-month period (February–April), with several identified space–time clusters occurring over time periods as short as 4 weeks. As such, this level of clustering may suggest the need for new surveillance and/or analytical methods to elucidate hitherto unidentified sources and pathways of infection, and to identify space–time clusters while they exist i.e., real-time or prospective scanning [43]. It is important to note that a lack of species information, and particularly the inability to discern between C. parvum and C. hominis, the two most frequently encountered Cryptosporidium species in Ireland, represents a study limitation. As previously outlined, Pollock et al. found C. parvum infection to be associated with lower population density and higher ratio of farms to humans, indicators of rurality, while C. hominis was more likely to be found in the more urban area of southern Scotland [27]. 
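As context for the relative risks quoted above (e.g. RR 17.95 for the 2008 Midlands cluster), the sketch below shows the observed/expected logic behind a single cylindrical space–time window. It is a didactic simplification only; real analyses evaluate all candidate cylinders with Kulldorff's permutation scan and Monte Carlo inference (e.g. in SaTScan), and every number below is synthetic.

```python
# Didactic sketch: relative risk for one space-time "cylinder"
# (areas within a radius of a centre, over a window of weeks).
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
areas = pd.DataFrame({
    "area_id": [f"ED{i:03d}" for i in range(50)],
    "x": rng.uniform(0, 100, 50), "y": rng.uniform(0, 100, 50),
    "population": rng.integers(500, 5_000, 50),
})
weeks = np.arange(104)                      # two years of weekly counts
baseline_rate = 2e-4                        # assumed cases per person-week
cases = rng.poisson(baseline_rate * areas["population"].to_numpy()[:, None],
                    size=(50, len(weeks)))  # area x week case matrix

def cylinder_relative_risk(centre_xy, radius, week_start, week_end):
    """Observed / expected cases inside one space-time cylinder."""
    dist = np.hypot(areas["x"] - centre_xy[0], areas["y"] - centre_xy[1])
    inside = dist.to_numpy() <= radius
    observed = cases[inside, week_start:week_end + 1].sum()
    expected = baseline_rate * areas.loc[inside, "population"].sum() * (week_end - week_start + 1)
    return observed / expected

print(f"RR for an example cylinder: {cylinder_relative_risk((50, 50), 15, 10, 21):.2f}")
```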
Speciation would thus permit closer elucidation of sociodemographic influences on rural/urban distribution. Further investigation is required to elucidate potential sources and pathways of infection, with particular regard to livestock densities, climate, hydrogeology and socioeconomic status. In conclusion, despite mandatory surveillance of cryptosporidiosis due to its communicable disease status in Ireland, it is widely regarded that cryptosporidiosis remains under-reported in Ireland and on a broader European level. The spatiotemporal epidemiology of cryptosporidiosis in Ireland reflects the diverse population and geography of the country, albeit with a markedly higher rate of occurrence in rural areas, likely due to the ubiquity of Cryptosporidium spp. sources (e.g., cattle) and pathways (e.g., karstic limestone bedrocks). The elevated burden among children ≤ 5-years is likely related to both immunological status and specific routes of exposure and warrants further study. The presented study represents a significant advance in efforts to investigate the spatiotemporal epidemiology of cryptosporidiosis with a view to further elucidating pathways of infection to guide public-health interventions through an improved understanding of its spatio-temporal occurrence, clustering mechanisms, levels of recurrence, and associated drivers, pathways, and receptors. Due to the sensitive nature of the study data, datasets are not publicly available. For further information related to data acquisition, please contact the corresponding author, Dr Paul Hynds (email: Paul.Hynds@tudublin. i.e., phone: 0,838,256,888). Nime FA, Burek JD, Page DL, Holscher MA, Yardley JH. Acute enterocolitis in a human being infected with the protozoan Cryptosporidium. Gastroenterology. 1976;70(4):592–8. Fayer R, Ungar BL. Cryptosporidium spp. and cryptosporidiosis. Microbiol Rev. 1986;50(4):458. Chalmers RM, Cacciò S. Towards a consensus on genotyping schemes for surveillance and outbreak investigations of Cryptosporidium, Berlin, June 2016. Eurosurveillance. 2016;21(37):30338. Feng Y, Ryan UM, Xiao L. Genetic diversity and population structure of Cryptosporidium. Trends Parasitol. 2018;34(11):997–1011. Chappell CL, Okhuysen PC, Langer-Curry R, Widmer G, Akiyoshi DE, Tanriverdi S, Tzipori S. Cryptosporidium hominis: experimental challenge of healthy adults. Am J Trop Med Hyg. 2006;75(5):851–7. Chique C, Hynds P, Andrade L, Burke L, Morris D, Ryan MP, O'Dwyer J. Cryptosporidium spp. in groundwater supplies intended for human consumption–a descriptive review of global prevalence, risk factors and knowledge gaps. Water Res. 2020;115726. Putignani L, Menichella D. Global distribution, public health and clinical impact of the protozoan pathogen Cryptosporidium. Interdisciplinary perspectives on infectious diseases, 2010. Thompson RA, Koh WH, Clode PL. Cryptosporidium—what is it? Food Waterborne Parasitol. 2016;4:54–61. European Centre for Disease Prevention and Control. Cryptosporidiosis. In: ECDC. Annual epidemiological report for 2017. Stockholm: ECDC; 2019. Health Protection Surveillance Centre (HPSC). (2019) Cryptosporidiosis in Ireland, 2018. Dublin, Ireland. https://www.hpsc.ie/a-z/gastroenteric/cryptosporidiosis/publications/epidemiologyofcryptosporidiosisinirelandannualreports/. Chyzheuskaya A, Cormican M, Srivinas R, O'Donovan D, Prendergast M, O'Donoghue C, Morris D. Economic assessment of waterborne outbreak of cryptosporidiosis. Emerg Infect Dis. 2017;23(10):1650. 
Zintl A, Proctor AF, Read C, Dewaal T, Shanaghy N, Fanning S, Mulcahy G. The prevalence of Cryptosporidium species and subtypes in human faecal samples in Ireland. Epidemiol Infect. 2009;137(2):270–7. Cummins E, Kennedy R, Cormican M. Quantitative risk assessment of Cryptosporidium in tap water in Ireland. Sci Total Environ. 2010;408(4):740-753.12. Cleveland RB, Cleveland WS, McRae JE, Terpenning I. STL: a seasonal-trend decomposition. J Off Stat. 1990;6(1):3–73. Anselin L, Syabri I, Smirnov O. Visualizing multivariate spatial correlation with dynamically linked windows. In Proceedings, CSISS Workshop on New Tools for Spatial Data Analysis, Santa Barbara. 2002. Mao Y, Zhang N, Zhu B, Liu J, He R. A descriptive analysis of the Spatio-temporal distribution of intestinal infectious diseases in China. BMC Infect Dis. 2019;19(1):766. Guo C, Du Y, Shen SQ, Lao XQ, Qian J, Ou CQ. Spatiotemporal analysis of tuberculosis incidence and its associated factors in mainland China. Epidemiol Infect. 2017;145(12):2510–9. Varga C, Pearl DL, McEwen SA, Sargeant JM, Pollari F, Guerin MT. Area-level global and local clustering of human Salmonella Enteritidis infection rates in the city of Toronto, Canada, 2007–2009. BMC Infect Dis. 2015;15(1):1–13. Kulldorff M, Heffernan R, Hartman J, Assunçao R, Mostashari F. A space–time permutation scan statistic for disease outbreak detection. PLoS Med. 2005;2(3):e59. Linton SL, Jennings JM, Latkin CA, Gomez MB, Mehta SH. Application of space-time scan statistics to describe geographic and temporal clustering of visible drug activity. J Urban Health. 2014;91(5):940–56. Muenchhoff M, Goulder PJ. Sex differences in pediatric infectious diseases. J Infect Dis. 2014;209(suppl_3):S120–6. Jarman AF, Long SE, Robertson SE, Nasrin S, Alam NH, McGregor AJ, Levine AC. Sex and gender differences in acute pediatric diarrhea: a secondary analysis of the Dhaka study. J Epidemiol Global Health. 2018;8(1):42–7. Sarker AR, Sultana M, Mahumud RA, Sheikh N, Van Der Meer R, Morton A. Prevalence and health care–seeking behavior for childhood diarrheal disease in Bangladesh. Global Pediatric Health, 2016;3: 2333794X16680901. Guerra-Silveira F, Abad-Franch F. Sex bias in infectious disease epidemiology: patterns and processes. PLoS ONE. 2013;8(4):e62390. O'Leary JK, Blake L, Corcoran D, Elwin K, Chalmers R, Lucey B, Sleator RD. Cryptosporidium spp surveillance and epidemiology in Ireland: a longitudinal cohort study employing duplex real-time PCR based speciation of clinical cases. J Clin Pathol. 2020;73(11):758–61. Central Statistics Office (CSO). Census of Population, 2016 (Ireland)—Profile 2 Population Distribution and Movements. 2019. https://www.cso.ie/en/releasesandpublications/ep/p-cp2tc/cp2pdm/pd/. Pollock KGJ, Ternent HE, Mellor DJ, Chalmers RM, Smith HV, Ramsay CN, Innocent GT. Spatial and temporal epidemiology of sporadic human cryptosporidiosis in Scotland. Zoonoses Public Health. 2010;57(7–8):487–92. Lal A, Hales S, French N, Baker MG. Seasonality in human zoonotic enteric diseases: a systematic review. PLoS ONE. 2012;7(4):e31883. Hamilton KA, Waso M, Reyneke B, Saeidi N, Levine A, Lalancette C, et al. Cryptosporidium and Giardia in wastewater and surface water environments. J Environ Qual. 2018;47(5):1006–23. Callaghan M, Cormican M, Prendergast M, Pelly H, Cloughley R, Hanahoe B, O'Donovan D. Temporal and spatial distribution of human cryptosporidiosis in the west of Ireland 2004–2007. Int J Health Geogr. 2009;8(1):1–9. Britton E, Hales S, Venugopal K, Baker MG. 
The impact of climate variability and change on cryptosporidiosis and giardiasis rates in New Zealand. J Water Health. 2010;8(3):561–71. Boudou M, ÓhAiseadha C, Garvey P, O'Dwyer J, Hynds P. Flood hydrometeorology and gastroenteric infection: the Winter 2015–2016 flood event in the Republic of Ireland. J Hydrol. 2021;599:126376. O'Dwyer J, Hynds PD, Byrne KA, Ryan MP, Adley CC. Development of a hierarchical model for predicting microbiological contamination of private groundwater supplies in a geologically heterogeneous region. Environ Pollut. 2018;237:329–38. Luffman I, Tran L. Risk factors for E. coli O157 and cryptosporidiosis infection in individuals in the karst valleys of east Tennessee, USA. Geosciences. 2014;4(3):202–18. Borchardt MA, Chyou PH, DeVries EO, Belongia EA. Septic system density and infectious diarrhea in a defined population of children. Environ Health Perspect. 2003;111(5):742–8. Lal A, Dobbins T, Bagheri N, Baker MG, French NP, Hales S. Cryptosporidiosis risk in New Zealand children under 5 years old is greatest in areas with high dairy cattle densities. EcoHealth. 2016;13(4):652–60. Cohen SA, Egorov AI, Jagai JS, Matyas BT, DeMaria A Jr, Chui KK, et al. The SEEDs of two gastrointestinal diseases: socioeconomic, environmental, and demographic factors related to cryptosporidiosis and giardiasis in Massachusetts. Environ Res. 2008;108(2):185–91. Greenwood KP, Reid SA. Clustering of cryptosporidiosis in Queensland, Australia, is not defined temporally or by spatial diversity. Int J Parasitol. 2020;50(3):209–16. Tango T, Takahashi K, Kohriyama K. A space–time scan statistic for detecting emerging outbreaks. Biometrics. 2011;67(1):106–15. Woodcock NH, Strachan RA. Geological history of Britain and Ireland. John Wiley & Sons; 2009. Darnault CJ, Peng Z, Yu C, Li B, Jacobson AR, Baveye PC. Movement of Cryptosporidium parvum oocysts through soils without preferential pathways: exploratory test. Front Environ Sci. 2017;5:39. Khan A, Shaik JS, Grigg ME. Genomics and molecular epidemiology of Cryptosporidium species. Acta Trop. 2018;184:1–14. Jones RC, Liberatore M, Fernandez JR, Gerber SI. Use of a prospective space-time scan statistic to prioritize shigellosis case investigations in an urban jurisdiction. Public Health Rep. 2006;121(2):133–9. The authors would like to acknowledge the CIDR Review Committee for data acquisition and the Royal College of Physicians of Ireland (RCPI) Research Ethics Review Committee. The authors also wish to acknowledge the Irish Research Council (COALESCE Research Programme) and Irish Environmental Protection Agency (STRIVE Research Programme) for provision of research funding. This work was funded by the Environmental Protection Agency (EPA) under the STRIVE Research Programme (2018-W-MS-33) and the Irish Research Council under the COALESCE Funding Programme (COALESCE/2019/53). Environmental Sustainability and Health Institute (ESHI), Technological University Dublin, Greenway Hub, Grangegorman, Dublin 7, D07 H6K8, Republic of Ireland M. Boudou, E. Cleary & Paul Hynds Department of Public Health, Health Service Executive (HSE), Dr. Steevens' Hospital, Dublin 8, Republic of Ireland C. ÓhAiseadha Health Protection Surveillance Centre, 25 Middle Gardiner Street, Dublin 1, Republic of Ireland P. Garvey & P. McKeown School of Biological, Earth and Environmental Sciences, Environmental Research Institute (ERI), University College Cork, Cork, Republic of Ireland J. 
O'Dwyer Irish Centre for Research in Applied Geosciences (iCRAG), University College Dublin, Dublin 4, Republic of Ireland J. O'Dwyer & Paul Hynds M. Boudou E. Cleary P. Garvey P. McKeown Paul Hynds MB: Methodology, software, validation, formal analysis, writing, preparation of Figures 1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13. EC: Preparation of Figs. 8, 9, Writing. CO: resources, data curation, writing—review and editing. PG: resources, data curation, writing—review and editing. PM: resources, data curation, writing—review and editing. JO: Conceptualization, supervision, funding acquisition, writing—review and editing. PH: conceptualization, supervision, funding acquisition, writing—review and editing. All authors read and approved the final manuscript. Correspondence to M. Boudou or Paul Hynds. With the exception of age, gender and minimal clinical data such as date of onset, no personal data, as defined by the Irish Health Research Regulations, were used for this research. The employed anonymisation protocol is considered equivalent to irreversible anonymisation, appropriate for release to academic researchers, as approved by the Irish Data Protection Commissioner, and as such, informed consent was considered unnecessary, as set out in the Research Ethical Approval documents provided by the Royal College of Physicians of Ireland, comprising both data usage and epidemiological methodologies employed. Additionally, the authors can confirm that all methods and analyses have been carried out in accordance with the International Ethical Guidelines for Epidemiological Studies as stipulated in the conditions of the aforementioned project Ethical Approval document. All authors consent with study submission for publication. The authors declare they have no competing interests. Appendix 1. Annual space-time clusters of Cryptosporidiosis in Ireland from 2008 to 2017; Appendix 2. Space-time clusters of Cryptosporidiosis in Ireland during 2008. Boudou, M., Cleary, E., ÓhAiseadha, C. et al. Spatiotemporal epidemiology of cryptosporidiosis in the Republic of Ireland, 2008–2017: development of a space–time "cluster recurrence" index. BMC Infect Dis 21, 880 (2021). https://doi.org/10.1186/s12879-021-06598-3 Cryptosporidiosis Cryptosporidium Spatiotemporal epidemiology
CommonCrawl
Orthogonal polynomials on the unit circle In mathematics, orthogonal polynomials on the unit circle are families of polynomials that are orthogonal with respect to integration over the unit circle in the complex plane, for some probability measure on the unit circle. They were introduced by Szegő (1920, 1921, 1939). Definition Suppose that $\mu $ is a probability measure on the unit circle in the complex plane, whose support is not finite. The orthogonal polynomials associated to $\mu $ are the polynomials $\Phi _{n}(z)$ with leading term $z^{n}$ that are orthogonal with respect to the measure $\mu $. The Szegő recurrence Szegő's recurrence states that $\Phi _{0}(z)=1$ $\Phi _{n+1}(z)=z\Phi _{n}(z)-{\overline {\alpha }}_{n}\Phi _{n}^{*}(z)$ where $\Phi _{n}^{*}(z)=z^{n}{\overline {\Phi _{n}(1/{\overline {z}})}}$ is the polynomial with its coefficients reversed and complex conjugated, and where the Verblunsky coefficients $\alpha _{n}$ are complex numbers with absolute values less than 1. Verblunsky's theorem Verblunsky's theorem states that any sequence of complex numbers in the open unit disk is the sequence of Verblunsky coefficients for a unique probability measure on the unit circle with infinite support. Geronimus's theorem Geronimus's theorem states that the Verblunsky coefficients of the measure μ are the Schur parameters of the function $f$ defined by the equations ${\frac {1+zf(z)}{1-zf(z)}}=F(z)=\int {\frac {e^{i\theta }+z}{e^{i\theta }-z}}d\mu .$ Baxter's theorem Baxter's theorem states that the Verblunsky coefficients form an absolutely convergent series if and only if the moments of $\mu $ form an absolutely convergent series and the weight function $w$ is strictly positive everywhere. Szegő's theorem Verblunsky's form of Szegő's theorem states that $\prod _{n=1}^{\infty }(1-|\alpha _{n}|^{2})=\exp {\big (}\int _{0}^{2\pi }\log(w(\theta ))d\theta /2\pi {\big )}$ where $wd\theta /2\pi $ is the absolutely continuous part of the measure $d\mu (\theta )=wd\theta /2\pi +d\mu _{s}$. Verblunsky's form also allows for a non-zero singular part while $d\mu _{s}=0$ in Szegő's original version.[1] Rakhmanov's theorem Rakhmanov's theorem states that if the absolutely continuous part $w$ of the measure $\mu $ is positive almost everywhere then the Verblunsky coefficients $\alpha _{n}$ tend to 0. Examples The Rogers–Szegő polynomials are an example of orthogonal polynomials on the unit circle. References • Koornwinder, Tom H.; Wong, Roderick S. C.; Koekoek, Roelof; Swarttouw, René F. (2010), "Orthogonal Polynomials on the unit circle", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248. • Simon, Barry (2005), Orthogonal polynomials on the unit circle. Part 1. Classical theory, American Mathematical Society Colloquium Publications, vol. 54, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-3446-6, MR 2105088 • Simon, Barry (2005), Orthogonal polynomials on the unit circle. Part 2. Spectral theory, American Mathematical Society Colloquium Publications, vol. 
54, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-3675-0, MR 2105089 • Szegő, Gábor (1920), "Beiträge zur Theorie der Toeplitzschen Formen", Mathematische Zeitschrift, 6 (3–4): 167–202, doi:10.1007/BF01199955, ISSN 0025-5874, S2CID 118147030 • Szegő, Gábor (1921), "Beiträge zur Theorie der Toeplitzschen Formen", Mathematische Zeitschrift, 9 (3–4): 167–190, doi:10.1007/BF01279027, ISSN 0025-5874, S2CID 125157848 • Szegő, Gábor (1939), Orthogonal Polynomials, Colloquium Publications, vol. XXIII, American Mathematical Society, ISBN 978-0-8218-1023-1, MR 0372517 1. Simon, Barry (2011). Szegő's theorem and its descendants: spectral theory for L² perturbations of orthogonal polynomials. Princeton University Press. p. 29. ISBN 978-0-691-14704-8.
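As a numerical illustration of the Szegő recurrence defined above, the sketch below builds the monic polynomials Φ_n from a given sequence of Verblunsky coefficients, storing each polynomial as a coefficient array in increasing powers of z. The particular coefficients used are arbitrary values inside the unit disk, chosen only to demonstrate the recurrence.

```python
# Szego recurrence: Phi_0(z) = 1, Phi_{n+1}(z) = z*Phi_n(z) - conj(alpha_n)*Phi_n^*(z),
# where Phi_n^*(z) has the coefficients of Phi_n reversed and conjugated.
import numpy as np

def szego_polynomials(alphas):
    """Return [Phi_0, Phi_1, ...] as coefficient arrays (lowest power first)."""
    phis = [np.array([1.0 + 0j])]           # Phi_0(z) = 1
    for alpha in alphas:
        phi = phis[-1]
        phi_star = np.conj(phi[::-1])        # reversed and conjugated coefficients
        z_phi = np.concatenate(([0], phi))   # multiply by z: shift powers up by one
        phi_next = z_phi - np.conj(alpha) * np.concatenate((phi_star, [0]))
        phis.append(phi_next)
    return phis

# Example with three arbitrary Verblunsky coefficients inside the unit disk
for n, phi in enumerate(szego_polynomials([0.5, -0.25j, 0.1 + 0.2j])):
    print(f"Phi_{n} coefficients (z^0 ... z^{n}):", np.round(phi, 4))
```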
Wikipedia
Making Mathematics with Needlework Making Mathematics with Needlework: Ten Papers and Ten Projects is an edited volume on mathematics and fiber arts. It was edited by Sarah-Marie Belcastro and Carolyn Yackel, and published in 2008 by A K Peters, based on a meeting held in 2005 in Atlanta by the American Mathematical Society.[1][2] Topics The book includes ten different mathematical fiber arts projects, by eight contributors.[3] An introduction provides a history of the connections between mathematics, mathematics education, and the fiber arts.[2] Each of its ten project chapters is illustrated by many color photographs and diagrams,[4] and is organized into four sections: an overview of the project, a section on the mathematics connected to it, a section of ideas for using the project as a teaching activity, and directions for constructing the project.[3] Although there are some connections between topics, they can be read independently of each other, in any order.[4] The thesis of the book is that directed exercises in fiber arts construction can help teach both mathematical visualization and concepts from three-dimensional geometry.[1] The book uses knitting, crochet, sewing, and cross-stitch, but deliberately avoids weaving as a topic already well-covered in mathematical fiber arts publications.[5] Projects in the book include a quilt in the form of a Möbius strip, a "bidirectional hat" connected to the theory of Diophantine equations, a shawl with a fractal design, a knitted torus connecting to discrete approximations of curvature, a sampler demonstrating different forms of symmetry in wallpaper group, "algebraic socks" with connections to modular arithmetic and the Klein four-group, a one-sided purse sewn together following a description by Lewis Carroll, a demonstration of braid groups on a cable-knit pillow, an embroidered graph drawing of an Eulerian graph, and topological pants.[1][2][6] Beyond belcastro and Yackel, the contributors to the book include Susan Goldstine, Joshua Holden, Lana Holden, Mary D. Shepherd, Amy F. Szczepański, and D. Jacob Wildstrom.[7] Audience and reception Reviewers had mixed opinions on the appropriate audience for the book and its success in targeting that audience. Ketty Peeva writes that the book is "of interest to mathematicians, mathematics educators and crafters",[8] and Mary Fortune writes that a wide group of people would enjoy browsing its contents,[4] However, Kate Atherley warns that it is "not for the faint-of-heart" (either among mathematicians or crafters),[9] and Mary Goetting complains that the audience for the book is not clearly defined, and is inconsistent across the book, with some chapters written for professional mathematicians and others for mathematical beginners. 
She writes that most readers will have to pick and choose among the chapters for material appealing to them.[3] Similarly, reviewer Michelle Sipics writes that in aiming at multiple audiences, the book "sacrifices some accessibility",[5] And although reviewer Gwen Fisher downplays the potential pedagogical applications of this book, complaining that its teaching ideas do not provide enough detail to be usable, and are not a good fit for typical teaching curricula,[7] Sipics calls mathematics teachers "perhaps the greatest beneficiaries of this text".[5] Fortune writes that, though the book increased her appreciation of and understanding of needlework, she didn't gain much new mathematical insight from reading it.[4] In contrast, Fisher argues that by using only "straightforward applications of traditional needlework skills" the book is accessible even to beginners in the fiber arts, and that the book is "much more about maths than about fibre technique". The real value of the book, she argues, is in the scholarly connection it forges between traditional women's activities and mathematics.[7] Pao-Sheng Hsu says that it would be "a great coffee table book" for browsing. And Anna Lena Phillips calls the book "an excellent synthesis" of textile crafts and mathematics, providing inspiration to those interested in either topic.[6] References 1. Cross, Alison (February 2008), "Review of Making Mathematics with Needlework" (PDF), The London Mathematical Society Newsletter, 367: 28 2. Hsu, Pao-Sheng (January–February 2010), "Review of Making Mathematics with Needlework", AWM Newsletter, Association for Women in Mathematics, 40 (1): 20–23 3. Goetting, Mary (November 2008), "Review of Making Mathematics with Needlework", The Mathematics Teacher, 102 (4): 319, JSTOR 20876356 4. Fortune, Mary (July 2010), "Review of Making Mathematics with Needlework", The Mathematical Gazette, 94 (530): 378–379, doi:10.1017/s0025557200007014, JSTOR 25759714 5. Sipics, Michelle (December 2007), "Math in a material world (review of Making Mathematics with Needlework)", SIAM News 6. Phillips, Anna Lena (2008), "Picking up stitches (review of Making Mathematics with Needlework)", American Scientist, 96 (3): 259, doi:10.1511/2008.71.3591 7. Fisher, Gwen (June 2008), "Review of Making Mathematics with Needlework", Journal of Mathematics and the Arts, 2 (2): 101–103, doi:10.1080/17513470802222827 8. Peeva, Ketty, "Review of Making Mathematics with Needlework", zbMATH, Zbl 1142.00003 9. Atherley, Kate (Spring 2009), "Review of Making Mathematics with Needlework", Cool stuff!, Knitty External links • Home page
Wikipedia
December 2019, 19:437
Prevalence and income-related equity in hypertension in rural China from 1991 to 2011: differences between self-reported and tested measures
Dan Cao, Zhongliang Zhou, Yafei Si, Chi Shen, Yangling Ren, Min Su, Shuyi He, Jianmin Gao
First Online: 01 July 2019
Along with economic growth and improvement in living standards, hypertension has become one of the most prevalent chronic diseases in China. Self-reported and tested measures of hypertension may differ significantly due to low awareness of prevalence. The objective of this study is to determine whether and how self-reported measures differ from tested measures in terms of prevalence and equity. We used data from the China Health and Nutrition Survey database from 1991 to 2011 and extracted the data for rural areas using the hukou system. Hypertension is categorized into two groups: self-reported hypertension and tested hypertension. To evaluate the equity of self-reported and tested hypertension, we calculated their Concentration Index (C) and decomposed C, from which we obtained the horizontal-inequity index (HI) for each year. A Probit model was deployed to analyze the key determinants of hypertension prevalence. We found that the prevalence of both self-reported and tested hypertension increased sharply from 1991 to 2011 in rural China, and the population with tested hypertension was significantly larger than that with self-reported hypertension. For self-reported hypertension, the prevalence rate increased from 2.72 to 13.2%, and for tested hypertension it increased from 11.01 to 25.05%. The Concentration Index (C) and horizontal-inequity index (HI) of self-reported and tested hypertension pointed in opposite directions. The C and HI of self-reported hypertension in 2011 were 0.032 and 0.060 respectively, while the C and HI of tested hypertension were − 0.024 and − 0.015 respectively. More effort should be put into improving the poor's health, especially equal access to health services. Symptom-based measures such as tested hypertension should be adopted more widely in empirical studies.
Keywords: Tested hypertension; Self-reported hypertension; Equity; Concentration index
Abbreviations: 95%CI: 95% Confidence interval; C: Concentration Index; DBP: Diastolic blood pressure; HI: Horizontal-inequity Index; SBP: Systolic blood pressure
The online version of this article (https://doi.org/10.1186/s12913-019-4289-5) contains supplementary material, which is available to authorized users.
Chronic diseases, such as cardiovascular and cerebrovascular diseases, are becoming increasingly prevalent [1], and hypertension is one of the most prevalent yet preventable among them [2]. The number of adults with hypertension in 2025 is predicted to be 1.56 billion, and the total number in developing countries is substantially higher than in developed countries [3]. Despite the great economic growth since the reform and opening-up policy was implemented, one of the most alarming issues is that the morbidity rate of hypertension has increased from 1.19% in 2003 to 9.89% in 2013 [4, 5]. Prior studies have shown that substantial health inequity exists not only in China but also in other countries [6, 7, 8, 9]. This issue has to be addressed rigorously in China, since the new objective of "healthy China" was put forward at the National Health Conference in 2016.
The hypertension burden also differs between rural and urban areas in China [10, 11]. In 2012, the number of hypertension cases in China had reached 266 million [12], yet the prevalence of hypertension continues to rise, while the awareness, treatment and control of hypertension remain inadequate, especially in rural areas [13]. Several studies have stressed that the population with hypertension in rural China has increased rapidly. Over the last decades, the growth of hypertension in rural China was higher than that in urban areas, and the prevalence in rural areas nearly reached the level in urban areas [14, 15]. Recently, researchers even found that rural residents have a higher hypertension prevalence than urban residents in Southwest China [16]. Socioeconomic differences in chronic disease prevalence have been found worldwide. For instance, previous studies have shown that socioeconomic inequalities exist among patients with some fatal chronic diseases, such as cancer and heart diseases [9]. A study focusing on chronic diseases in Slovenia also suggests that prevalence is significantly higher in the population with lower socioeconomic and employment status [17]. Moreover, the relationship between hypertension prevalence and several potentially modifiable factors, such as education, profession and income level, has been studied [18, 19]. Over the last decades, researchers have agreed that socioeconomic status can significantly affect hypertension prevalence and the severity of hypertension [10]. A related study similarly shows that lower education is associated with a higher risk of pre-hypertension [20]. China started equalizing basic public health services in 2009, aiming to grant sufficient access to basic public health services [21]. Hypertension management consequently improved substantially from 2008 to 2012, and the inequity across regions declined over time [22]. However, access to some services, such as chronic disease screening, is still far from being equalized; despite significant improvement in the coverage of basic public health services, further equalization is needed [23]. Unbalanced access and utilization result in uneven awareness, and thus it is generally recognized that wealthier people have more opportunities to become aware of their chronic diseases. Researchers usually use two measures to evaluate hypertension prevalence: self-reported prevalence and tested prevalence. These two measures may differ considerably owing to uneven awareness of hypertension. The resulting disparity between the poor and the rich may lead to biased results and mislead the government in implementing related policies [24]. Researchers have also found that self-reported measures can deviate significantly from real prevalence and inequality, and that using symptom-based measures can be an effective way to eliminate reporting bias [25, 26]. To find out whether and how self-reported measures differ from tested measures, we conducted this study.
Previous literature has addressed that unequal access to health care utilization may cause a prevalence deviation between self-reported measures and tested measures. Self-reported measure causes an underestimation of real prevalence, especially for those with low socioeconomic status. Hence, in this study, we hypothesize that both the prevalence and the equity of self-reported and tested measures differ a lot. We used a national representative database from China Health and Nutrition Surveys (CHNS). CHNS is a longitudinal survey from the late 1980s conducted by the University of North Carolina Center for Population Studies, the National Institute of Nutrition and Food Safety and the China's Center for Disease Control. The CHNS data contains new household formation, replacement communities and households, and all household members [27]. The questionnaire contains 12 dimensions: population density, economic activity, traditional markets, modern markets, transportation infrastructure, sanitation, communications, housing, education, diversity, health infrastructure and social services. CHNS survey covers nine provinces that vary substantially in geography, economic level, public resources and health indicators. The samples in each province were selected with a multistage, random cluster process. The counties were stratified by income (low, middle and high) in each province and a weighted sampling scheme was used to select four counties randomly. Villages and towns within the counties and urban and suburban neighborhoods within the cities were chosen randomly. Approximately there were 4400 households in the whole survey covering 19,000 individuals [28]. The CHNS contains a weakness; the follow-ups were missing some chunks of data every year. There were three major reasons: 1) missing population that couldn't be found because of travel, hours of work or play, 2) school children who were in boarding schools, 3) migrants work for working population. But the CHNS considered loss follow-ups into their design and recruited new participants as replenishment population if there were no more than 20 households or if respondents had constituted a new family [29]. This design of replenishment sample made up the weakness caused by the loss of enrolled subjects. The cross sectional data of each wave was regarded as national representative in many other researches [30, 31, 32]. Measures/variables Dependent variables The design of this study contains two dependent variables: tested prevalence and self-reported prevalence. For the tested prevalence, the CHNS measured respondent's blood pressure three times and we took their average value. We established the database of tested hypertensive persons whose SBP (systolic blood pressure) were higher than 140 mmHg and their DBP (diastolic blood pressure) were higher than 90 mmHg. Self-reported hypertensive persons were classified as; who knew they were suffering from high blood pressure or taking any anti-hypertension drugs by answering the questions: "Have you ever been diagnosed with hypertension by a doctor?" or "Are you currently taking any anti-hypertensive medication?" Independent variables According to prior studies, we adopted age, gender, BMI, economic level, smoking, drinking, schooling, marital status, region and physical examinations of the past 4 weeks [33]. The economic level in this study was defined by grouping inflation to 2011 household income per capita into five cohorts: the poorest, the poorer, the middle, the richer and the richest. 
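As a concrete illustration of the outcome definitions and income grouping described above, the sketch below derives both hypertension indicators and the five economic cohorts from a toy record set: tested hypertension from the mean of three blood-pressure readings, self-reported hypertension from the two interview questions, and economic level from quintiles of income per capita. Column names are placeholders, not actual CHNS variable names, and note that the definition as stated combines SBP and DBP jointly above threshold, whereas many studies use either threshold alone.

```python
# Sketch of constructing the dependent variables and income quintiles (toy data).
import pandas as pd

df = pd.DataFrame({
    "sbp1": [150, 128, 142, 118, 135], "sbp2": [148, 130, 139, 121, 133], "sbp3": [152, 126, 141, 119, 137],
    "dbp1": [95, 80, 92, 72, 84],      "dbp2": [93, 78, 91, 74, 86],      "dbp3": [96, 82, 94, 70, 88],
    "diagnosed_by_doctor": [1, 0, 0, 0, 1],      # "ever diagnosed with hypertension?"
    "taking_antihypertensive": [0, 0, 1, 0, 0],  # "currently taking medication?"
    "income_pc_2011": [4200, 9800, 15600, 31000, 22000],
})

df["sbp_mean"] = df[["sbp1", "sbp2", "sbp3"]].mean(axis=1)
df["dbp_mean"] = df[["dbp1", "dbp2", "dbp3"]].mean(axis=1)

df["tested_htn"] = ((df["sbp_mean"] > 140) & (df["dbp_mean"] > 90)).astype(int)
df["selfreported_htn"] = ((df["diagnosed_by_doctor"] == 1) |
                          (df["taking_antihypertensive"] == 1)).astype(int)

# Five income cohorts: the poorest ... the richest (quintiles of income per capita)
df["economic_level"] = pd.qcut(df["income_pc_2011"], 5,
                               labels=["poorest", "poorer", "middle", "richer", "richest"])
print(df[["sbp_mean", "dbp_mean", "tested_htn", "selfreported_htn", "economic_level"]])
```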
If the respondent has smoked even once, the person is classified as a Smoker. Also if the respondent drinks any form of alcohol for more than 3 times a week, the person is classified as Drinker. Regions are categorized into three: east, middle and west, which are consistent with the standard in Statistic Book of PRC. More details about independent variables are presented in Table 1. Unavoidable variables Avoidable variables Agegroup 18~45a =0 46~59 = 1 60 and above =2 Economic level The pooresta = 0 The poorer = 1 The middle = 2 The richer = 3 The richest = 4 Malea = 0 Female = 1 Noa = 0 Yes = 1 Less than 3 times a weeka = 0 More than 3 times a week = 1 Have physical examination in the past 4 weeks Easta = 0 Middle = 1 West = 2 Illiteracya = 0 Primary and junior high school = 1 High and technical secondary school = 2 Junior college and above = 3 BMI < 18.5a =0 18.5 ≤ BMI < 24 = 1 24 ≤ BMI < 28 = 2 BMI ≥ 28 = 3 Unmarrieda = 0 Married = 1 Others = 2 ais the control group of dummy variables All independent variables are grouped into unavoidable variables and avoidable variables. The unavoidable variables refer to factors that couldn't be avoided in hypertension including age and gender, while avoidable variables contain economic level, smoking history, drinking history, physical examinations of the past 4 weeks, region, schooling, BMI and marital status [34, 35]. Measure of equity The feasibility and reliability for Concentration Index(C) and decomposition of C to measure health equity have been well documented [36, 37]. The concentration index can expose the relationship between health outcomes, such as self-reported health status and living standards like income level and wealth index. More widely, the concentration index can examine inequality not only in health outcomes but also in any health sector variable of interest [38], such as hypertension prevalence in this article. In our study, the two key variables underlying the concentration index are hypertension prevalence, the distribution of which is the subject of interest, and income level against which the distribution is to be assessed. We can see the degree of inequality of the hypertension prevalence distributes among different living standards. Further more, we decomposed the C to figure out how such inequality can be explained. The following specifically shows how we computed C and sub-section 2.4 decomposition of C. In general, the Concentration Index (C) is considered to be a good indicator reflecting inequality in health status caused by socioeconomic factors [35, 39]. In this study, we used the concentration index to measure the inequality of hypertension prevalence of people in different income groups. The range of the concentration index is from − 1 to 1. If people with different economic levels have the same probability to suffer from hypertension, the concentration index equals to 0. If the concentration index is negative, it indicates hypertension prevalence is pro-poor and if the concentration index is positive, it indicates hypertension prevalence is pro-rich. We calculated the concentration index with the equation below: $$ \mathrm{C}=\frac{2}{\mu}\mathit{\operatorname{cov}}\left(y,{R}_i\right) $$ Where Ri represents the proportion of individual i in sample sorted by economic level (inflated to 2011 per capita household income), yi is hypertension prevalence, μ represents the average of hypertension prevalence. 
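A minimal numerical sketch of the covariance formula for C given above is shown below, using fractional income ranks. The synthetic data are constructed so that prevalence falls mildly with income, so the resulting C should come out negative (pro-poor); none of the numbers correspond to the CHNS sample.

```python
# Concentration index: C = (2 / mean(y)) * cov(y, fractional income rank).
import numpy as np

def concentration_index(y, income):
    y = np.asarray(y, dtype=float)
    rank = (np.argsort(np.argsort(income, kind="stable")) + 0.5) / len(y)
    return 2.0 / y.mean() * np.cov(y, rank, bias=True)[0, 1]

rng = np.random.default_rng(1)
income = rng.lognormal(mean=9, sigma=0.8, size=5_000)
rank = (np.argsort(np.argsort(income)) + 0.5) / income.size
# Prevalence constructed to fall mildly with income, so C should be negative
y = rng.binomial(1, 0.30 - 0.10 * rank)
print(f"C = {concentration_index(y, income):.3f}")
```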
Measures of horizontal inequity Decomposition of concentration index Decomposition of concentration index can provide a reliable way to analyze the contribution of various factors to the inequality of hypertension by estimating each factor's effect on hypertension prevalence using a Probit model [40]. The Probit equation is as below. $$ \Pr \left(\mathrm{Y}=1|\mathrm{X}\right)=\varnothing \left({\mathrm{X}}^{\prime}\upbeta \right), $$ Where Pr is the probability of suffering from hypertension, ∅ represents the cumulative function of the normal distribution, β is the parameter evaluated by maximum likelihood method. After decomposing the concentration index into the contribution of various factors to the inequality of hypertension and summing up the C's of all avoidable variables, we obtained the horizontal inequity of hypertension prevalence, of which the unavoidable variables contained demographic variables and prevalence variables, and the avoidable variables contained economic level, risk behaviors of hypertension and other avoidable variables. In this study, we decomposed both the C's of tested prevalence and self-reported prevalence of each year. We estimated each factor's effect on hypertension prevalence by the model below: $$ {y}_i={\alpha}^m+\sum \limits_j{\beta}_j^m{x}_{ji}+\sum \limits_k{\gamma}_k^m{z}_{ki}+{\mu}_i, $$ Where yi represents the dependent variable, xji represents the unavoidable variable, and zki is the avoidable variable, \( {\beta}_j^m \) and \( {\gamma}_k^m \) represent the partial effects, μi is the residual term. The concentration index formula for the horizontal inequity is presented as below: $$ C=\sum \limits_j\left({\beta}_j^m{x}_{ji}/\mu \right){C}_j+\sum \limits_k\left({\gamma}_k^m{z}_{ji}/\mu \right){C}_k+\frac{GC_k}{\mu_i}, $$ Where C represents the concentration index of hypertension prevalence, Cj represents the concentration index of xj, Ck is the concentration index of zk, GCk is the concentration index of residual terms. This formula indicates that the concentration index of hypertension prevalence is obtained by adding weight-sum of avoidable variables' and unavoidable variables' C's. Furthermore, the horizontal-inequity index can be measured by controlling the contribution of the unavoidable variables. Descriptive results of 2011 Excluding respondents under 18 and singular values we have a sample of 122,945 observations. The sample values contain 11,119 in 1991, 10,828 in 1993, 11,891 in 1997, 13,324 in 2000, 13,194 in 2004, 15,922 in 2006, 16,313 in 2009 and 19,722 in 2011 respectively. The descriptive results of our sample are presented in Table 2. A table about baseline subjects involved in 1991 and new subjects of each wave is shown in the (Additional file 1: Table S1). Descriptive results (%) < 18.5 18.5~24 > 28 Primary or junior high school High school or technical secondary school Junior college and above Comparison of prevalence rate Figure 1 displays the prevalence rate of self-reported hypertension and tested hypertension, which suggests that the prevalence rate of self-reported hypertension in rural China has been increasing from 1991 (2.72%) to 2011 (13.2%). The prevalence of tested hypertension in rural China also has increased from 11.01% in 1991 to 25.05% in 2011. The increasing trend of self-reported prevalence and tested prevalence appears consistent. Figure 1 also indicates that the morbidity rate of self-reported hypertension increased more rapidly after 2000. 
Open image in new window The prevalence of self-reported hypertension and tested hypertension Considering the age of follow-ups would increase across the time, this study also evaluated the age-adjusted prevalence of both self-reported and tested hypertension. Both the prevalence's, self-reported and tested hypertension, was adjusted to age distribution of the corresponding year in order correct the prevalence deviation caused by the follow-up getting older across time. The results of age-adjusted prevalence are placed in the [Additional file 2: Figure S1]. Additionally, to guarantee the hypertension screening in this survey would not affect the self-reported prevalence, identified baseline subjects and new subjects of each year and conducted Chi-Squared test. The null hypothesis was that the respondent's self-reported hypertension status was independent of the respondent being in baseline population or new population. The results of Chi-Squared test suggested that the prevalence of two kinds of respondents varied in some years, but had no significant difference in most years, which is shown in the [Additional file 3: Table S2]. The results confirmed that hypertension screening across time has little effect on self-reported hypertension prevalence. Comparison of equity In this study, we used the concentration index to measure the inequality of hypertension prevalence of people with different income groups. The concentration indexes of self-reported hypertension from 1991 to 2011 are presented in Table 3. Concentration Index from 1991 to 2011 Self-reported hypertension Tested hypertension 95%CI −0.003 − 0.043 − 0.00009 It is evident that the concentration indexes of self-reported hypertension from 1991 to 2011 are all positive and statistically significant in most of the years. Nonetheless, when tested hypertension is included, the concentration indexes present the opposite bias. In addition, the concentration indexes of tested hypertension for the most years indicate an opposite trend, for example, − 0.003 [95%CI (− 0.043,0.038)] in 1993, − 0.008 [95%CI (− 0.040,0.025)] in 1997, − 0.030 [95%CI (− 0.058,-0.001)] in 2006, and − 0.024 [95%CI (− 0.047,-0.00009)] in 2011. Therefore, tested hypertension is not pro-rich, instead pro-poor in these years. Table 3 also indicates that the concentration indexes are getting closer to 0 since 2009, that might be due to the start of basic public services equalization in 2009. Probit Model was adopted in this study to analyze the effects of independent variables on hypertension prevalence. Taking decomposition results of year 2011 in Table 4 as an example, controlling for confounding variables compared to those under 45, people with older age have more probability to have both tested hypertension and self-reported hypertension. Decomposition results of other years are presented in the [Additional file 4: Table S3-S4]. In 2011, the difference between people underweight and people with higher BMI is statistically significant suggesting that the latter are more likely to get hypertension (either based on tested hypertension or self-reported hypertension). People with higher education level have a lower probability of tested hypertension compared with people who are illiterate. The results of self-reported hypertension are not so statistically significant. Compared to unmarried, married people are more likely to suffer from hypertension. People living in Middle and West China have more probability to get hypertension. 
Females are less likely to suffer from tested hypertension compared to males, which is opposite to the results of self-reported hypertension but not statistically significant. Drinking is also a risk factor of tested hypertension but not self-reported hypertension. Those who had a physical examination in the past 4 weeks are more likely to have self-reported hypertension. The regression results of 2011 dy/dx Std. Err Demand elasticity The richer The richest 0.138c 18.5 ≤ BMI < 24 24 ≤ BMI < 28 BMI ≥ 28 Primary and junior high school − 0.038b −0.075c − 0.067a Other status The middle region The western region −0.035b − 0.047c Having physical examination a, b, c: significantly different from zero at the 0.1, 0.05 and 0.01 level, respectively According to the decomposition results of 2011, considering only one variable effect on hypertension by controlling other factors, the prevalence of hypertension will be concentrated on the rich if the contribution is positive, otherwise, the prevalence of hypertension will be concentrated on the poor. Excluding the total contribution of all variables from the concentration index of hypertension, we obtain the contribution of unexplained variables. In Table 4 it is apparent that in the rural area the prevalence of self-reported hypertension in 2011 can be explained mainly by aged 45~59(67.3%), aged 60 and above (197.5%) and other marital status (− 68.8%). While the prevalence of tested hypertension can be explained mainly by the richest (52.6%), aged 45~59 (66.6%) and aged 60 and above (170.4%). We calculated the horizontal-inequity indexes of the two hypertension groups from 1991 to 2011. As presented in Table 5, the horizontal-inequity indexes of self-reported hypertension are positive in all 8 years and are statistically significant in most years, while the indexes of tested hypertension are negative in some years, such as − 0.004 in 1993, − 0.028 in 1997, − 0.002 in 2000, − 0.033 in 2006, − 0.009 in 2009 and − 0.015 in 2011. Although the horizontal-inequity indexes of tested hypertension are not statistically significant in some years, but they still show clear differences compared with horizontal-inequity indexes of self-reported hypertension. Horizontal-inequity of the two hypertension groups from 1991 to 2011 Contribution of unavoidable variables To build the confidence in our concentration index results, we excluded the results whose income level ranks the first 1% and the last 1% in our sample and calculated the concentration indexes of 8 years again. Table 6 shows the results of our sensitivity analysis. The table suggests that after exclusion of extreme data, the concentration indexes were consistent with the former results. Beyond that, the trend of 8 years in altered sample was also identical with the trend in the original sample. Concentration index of samples Altered sample Original sample In addition, to find out whether the new subjects of each wave would affect signs of concentration index of tested hypertension prevalence in total sample, we also conducted the Cs of baseline subjects in each wave and compared them with Cs of total population. The results are shown in the [Additional file 5: Table S5]. It is apparent that the signs of Cs of baseline subjects in each wave were generally consistent with signs of total population and the 95% CIs overlap in each year. We computed separately self-reported prevalence and tested prevalence from 1991 to 2011 in this article. 
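To make the decomposition and horizontal-inequity calculations concrete, the sketch below fits a Probit model on synthetic data, converts the coefficients to average marginal effects (the dy/dx values of the kind reported in Table 4), weights each regressor's contribution by its mean, the mean outcome and its own concentration index, and subtracts the contributions of the unavoidable variables to obtain HI. The model specification, variable names and data are illustrative assumptions, not the actual CHNS specification used in the paper.

```python
# Sketch of decomposing C into factor contributions and computing HI.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def conc_index(v, income):
    v = np.asarray(v, dtype=float)
    rank = (np.argsort(np.argsort(income, kind="stable")) + 0.5) / len(v)
    return 2.0 / v.mean() * np.cov(v, rank, bias=True)[0, 1]

rng = np.random.default_rng(2)
n = 5_000
data = pd.DataFrame({
    "income": rng.lognormal(9, 0.8, n),
    "age60plus": rng.binomial(1, 0.2, n),   # unavoidable
    "female": rng.binomial(1, 0.5, n),      # unavoidable
    "overweight": rng.binomial(1, 0.3, n),  # avoidable
})
logit = -1.2 + 1.0 * data["age60plus"] + 0.6 * data["overweight"]
data["htn"] = rng.binomial(1, (1 / (1 + np.exp(-logit))).to_numpy())

X = sm.add_constant(data[["age60plus", "female", "overweight"]])
fit = sm.Probit(data["htn"], X).fit(disp=0)
margeff = fit.get_margeff(at="overall").margeff   # dy/dx, constant excluded

mu = data["htn"].mean()
income = data["income"].to_numpy()
C_total = conc_index(data["htn"], income)
contrib = {}
for beta, name in zip(margeff, ["age60plus", "female", "overweight"]):
    contrib[name] = beta * data[name].mean() / mu * conc_index(data[name], income)

HI = C_total - (contrib["age60plus"] + contrib["female"])   # remove unavoidable parts
print("C =", round(C_total, 3), " HI =", round(HI, 3))
print("contributions:", {k: round(v, 4) for k, v in contrib.items()})
```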
Consistent with those studies, we find that the prevalence of both self-reported hypertension and tested hypertension increased rapidly from 1991 to 2011 in rural China. This rapid increase may be due to the change of health behaviors in rural China: overweight rose from 15.56 to 32.49%, and obesity rose from 3.04 to 12.05%. The role that obesity plays in hypertension prevalence has already been discussed in the literature [41]. Our study reinforces this view and confirms that obesity in China has grown rapidly over the past decades. The population with tested hypertension was always significantly larger than that with self-reported hypertension [40]. Self-reported prevalence rose from 2.72 to 13.2% between 1991 and 2011, while tested prevalence rose from 11.01 to 25.05%. However, compared with prior studies, we found a lower prevalence of both self-reported hypertension and tested hypertension in rural China; a possible reason is that the new subjects added in each year may drag the prevalence rate down. One study found that the prevalence of self-reported hypertension and tested hypertension in 2009 was 12.6 and 29.6% respectively [42], while the prevalence in our study was 9.46 and 21.07% respectively. Another study on rural residents aged 35-74 indicates that hypertension prevalence increased by 20% from 1991 to 2011 [40]; in our study, however, self-reported prevalence and tested prevalence increased by 6.52 and 4.25% respectively. This difference suggests that the growth of hypertension prevalence is lower in younger people than in older people. Our study also indicates that the prevalence of tested hypertension is nearly twice that of self-reported hypertension; this ratio is lower than previous findings based on a national survey [42]. A potential reason is that our results are from rural areas, where basic public health services are less developed compared with urban areas, and thus the ratio between tested hypertension and self-reported hypertension is somewhat lower than in the nationwide sample. In other studies, some researchers did not adopt the definitions of self-reported hypertension and tested hypertension but instead studied hypertension prevalence and awareness. In a sense, the awareness of hypertension can serve as a proxy for self-reported prevalence. Given the uneven access to and utilization of health resources, both our findings and prior findings indicate that different measurements of hypertension prevalence vary significantly [43]. Using self-reported hypertension measures implies substantial bias relative to the real prevalence of hypertension, and thus findings based on self-reported measures can be expected to mislead government policy. The deviation caused by self-reported measures exists in many countries and is expected to be larger in low-income and middle-income countries such as China [44].

The main determinants of self-reported hypertension and tested hypertension

Our study indicates that age, BMI, region and marital status are all risk factors for hypertension, which is consistent with prior studies. For instance, in 2011, age 60 and above accounts for more than 150% of the contribution to the concentration indexes of both self-reported prevalence and tested prevalence. BMI and region have significant impacts on hypertension prevalence.
Some studies also suggested that income level could have an impact on hypertension prevalence [45], but in our study the impact of income level on self-reported hypertension and on tested hypertension is, respectively, insignificant and negative. While many studies suggested that a higher education level could reduce the probability of hypertension [8], our study shows a conflicting result: education level strikingly affects tested hypertension prevalence, but not self-reported hypertension. The disparity may reflect that education helps improve individuals' health consciousness, which in turn affects the actual control of blood pressure. Additionally, we find that people who had physical examinations in the past 4 weeks are more likely to have self-reported hypertension but, unexpectedly, this result is not as significant for tested hypertension. A possible explanation is that people who take physical examinations usually have more chances to be diagnosed by doctors and therefore appear more likely to have self-reported hypertension. More effort should be put into equalizing basic public health services, especially popularizing physical examination, as it plays an important role in hypertension awareness and control.

The equity of self-reported hypertension and tested hypertension

There are both similarities and differences between the findings of our study and prior studies. Some prior studies have shown that not only hypertension but also some other chronic diseases, such as diabetes and heart disease, are inequitable and pro-poor [8]. We find that this is also true for tested hypertension. It is generally considered that the poor possess fewer health resources than the rich and thus suffer a worse health status. However, in our findings, the concentration index of self-reported prevalence is positive, for example 0.118 [95%CI (0.023, 0.213)] in 1991 and 0.065 [95%CI (0.026, 0.104)] in 2009, which means that self-reported prevalence is concentrated among the rich from 1991 to 2011, while the Cs of tested prevalence were negative in some years. This conflicting result might be due to prior studies ignoring the differences between the two measurements of hypertension. Our study takes into consideration the significant disparities between the rich and the poor in the access to and utilization of basic health services. Some researchers shared the same reasoning: it is evident that people in states that provide more education and better medical and health facilities are in a better position to be diagnosed and aware of their own particular illnesses than people in states providing less education and worse medical and health facilities, where there is less awareness of treatable conditions [46]. Combining the equity results with the prevalence of the two hypertension measures, we find that the income-related inequality of self-reported prevalence is pro-rich, as its concentration indexes and horizontal-inequity indexes are positive in all 8 years. In addition, although the tested prevalence increased rapidly from 1991 to 2011, its concentration indexes remain close to 0, which indicates that the prevalence of tested hypertension is less related to income level. This reveals that access to health resources and services in rural China is pro-rich, even though China is strongly pushing the equalization of basic public health services.
There is no doubt that China's basic public health services are becoming increasingly equal for everyone, and a large quantity of health funds has been devoted to basic services such as hypertension screening ever since the equalization policy of basic public health services was carried out. Nevertheless, there is still a gap that cannot be ignored between the poor and the rich in health accessibility and utilization. The imbalanced accessibility and utilization of health services might be the cause of the contradictory result in this study, and this finding should draw attention to the need for more effort on health service equity. Hence, if we focus on self-reported hypertension alone, a biased conclusion or policy will probably result. Additionally, for the realization of the right to health, China stressed the importance of universal health coverage (UHC) [47]. As a result, in 2011 about 95.7% of the Chinese population was covered by the three main health insurance schemes. In spite of this achievement of UHC, access to health services and resources is not yet sufficient. Researchers have assessed the effective coverage of health insurance to explore whether the expansion of health insurance can improve health status [48], and the insufficient access to health services and resources indicated in our study may reveal that the effective coverage of health insurance in rural China is still low. The government may implement relevant policies to promote the effective coverage of health services, not only crude coverage. China has implemented a series of policies to improve access to health care in rural areas, but current policies cannot sufficiently meet the challenge of promoting effective coverage of health care. Developing social capital in rural areas can be a potential solution to promote the management of chronic diseases [49], but systematic measurements have not yet been well documented. It is advisable to establish a free health management model in rural China to address the pro-rich access problem. Currently, free physical examinations are available to rural residents, but for those who have been monitored for chronic diseases, free management measures are not provided apart from outpatient reimbursement for certain kinds of diseases [50]. The feasibility of providing national essential medicines for free for chronic diseases among the elderly has been studied, and researchers found that it could be financially guaranteed, though a further systematic study is needed [51]. Our study indicates that there are deviating results between self-reported hypertension and tested hypertension in both prevalence and equity. Our research leads to several suggestions. First of all, more effort should be put into raising the health status of the poor, especially through equal access to health services. Furthermore, adopting self-reported measures alone in research may mislead policy-making, and thus symptom-based measures such as tested hypertension should be adopted more widely in empirical studies. We acknowledge some limitations in our analysis. The most recent year of our study is 2011 and we have no access to more recent data, so analysis using data from more recent years is necessary for further study. Another limitation is that in some earlier years of our study, such as 1991 and 1993, several independent variables had too few observations and thus were excluded from our regression model. This may result in a minor error in the horizontal-inequity index.
Lastly, although the difference in self-reported hypertension prevalence between baseline subjects and new subjects has been shown to be non-significant in most years, and the Cs of baseline subjects and total subjects show no significant difference, there may still be potential factors that affect the prevalence and the concentration index, even though we have tried our best to address the problem.

The authors wish to thank Doctor Nawaz for his useful comments and language editing, which have greatly improved the manuscript. High tribute shall be paid to Mr. Bo Li for his great support and the considerable time and effort he spent commenting on this paper. DC processed the data and was a major contributor to writing the manuscript. ZZ and YS participated in the design of this study. XX and CS acquired the data and provided administrative support for the data analysis. MS, YR and JG were involved in critically revising the manuscript for important intellectual content. XW and SH offered suggestions to complete this study and made substantial contributions to revising the English of this article. All authors have read and approved the final manuscript. This study was funded by the China Medical Board (15-277 and 16-262), the National Natural Science Foundation of China (71874137), the Shaanxi Social Science Foundation (2017S024), the Research Program of Shaanxi Soft Science (2015KRM117), the National high-level talents special support plan (thousands of people plan), and the Shaanxi provincial youth star of science and technology in 2016. The foundations were not involved in the design of the study, in activities related to data collection and analysis, or in manuscript writing. We are thankful to the National Institute of Nutrition and Food Safety, China Center for Disease Control and Prevention, the Carolina Population Center, the University of North Carolina at Chapel Hill, the NIH (R01-HD30880, DK056350, and R01-HD38700) and the Fogarty International Center, NIH, for financial support for the CHNS data collection and analysis files from 1989 to 2006, and both parties plus the China-Japan Friendship Hospital, Ministry of Health, for support for CHNS 2009 and future surveys. Ethics approval was obtained from the review boards of the University of North Carolina at Chapel Hill, the National Institute for Nutrition and Food Safety, China Center for Disease Control and Prevention, and the China-Japan Friendship Hospital. Informed consent was obtained, and data were anonymized for the analysis. The authors declare that they have no competing interests. We declare that Prof. Zhongliang Zhou is a member of the editorial board (Associate Editor) of this journal.

Additional file 1: Table S1. Subjects and new subjects of each year. (PDF 56 kb)
Additional file 2: Figure S1. Age-adjusted prevalence of self-reported hypertension and tested hypertension. (PDF 49 kb)
Additional file 3: Table S2. The Chi-square test of prevalence of baseline subjects and new subjects. (PDF 68 kb)
Additional file 4: Table S3. The regression effect of 1991, 1993 and 1997. Table S4. The regression effect of 2000, 2004, 2006 and 2009. (PDF 238 kb)
Additional file 5: Table S5. Concentration index of tested hypertension prevalence. (PDF 63 kb)

Liu Z, Albanese E, et al. Chronic disease prevalence and care among elderly in urban and rural Beijing, China. BMC Public Health. 2009;9(1):1–11.
He J, Gu D, Wu X, Reynolds K, Duan X, et al. Major causes of death among men and women in China.
N Engl J Med. 2005;353:1124–34.
Kearney PM, Whelton M, Reynolds K, et al. Global burden of hypertension: analysis of worldwide data. Lancet. 2005;365(9455):217–23.
Ying X, Ke-qin R, Ling XU. Analysis of the economic burden of hypertension in urban and rural households in China. Chin Health Econ. 2010;29(5):69–71.
National Health and Family Planning Commission of PRC. Health and Family Planning statistical yearbook of China. 2014.
Tu S. Socioeconomic inequalities in prevalence and control of hypertension of the rural elderly in Shandong province, China. Master's dissertation. Shandong University; 2009.
Regidor E, Gutiérrez-Fisac JL, Banegas JR, et al. Association of adult socioeconomic position with hypertension in older people. J Epidemiol Community Health. 2006;60(1):74–80.
Dalstra J, Kunst AE, Borrell C, et al. Socioeconomic differences in the prevalence of common chronic diseases: an overview of eight European countries. Int J Epidemiol. 2005;34(2):316.
De Gaudemaris R, Lang T, Chatellier G, et al. Socioeconomic inequalities in hypertension prevalence and care: the IHPAF study. Hypertension. 2002;39(6):1119–25.
Zhou W, Huang X, You CJ, et al. Application of antihypertensive drugs and blood pressure control among hypertension population in Jiangxi Province. Chin Gen Pract. 2018;21(22):2729–35.
Yang J, Lu F, Zhang C, et al. Prevalence of prehypertension and hypertension in a Chinese rural area from 1991 to 2007. Hypertens Res. 2010;33(4):331–7.
Zhaosu W, Yong H, Wen W, et al. Education guidelines for hypertensive patients in China. Chin J Front Med Sci. 2014;6(3):78–98.
Fan G, Wang Z, et al. Prevalence, awareness, treatment and control of hypertension in rural areas in North China in 2013. Zhonghua Yi Xue Za Zhi. 2015;95(8):616.
Chen X, Li L, Zhou T, Li Z. Prevalence of hypertension in rural areas of China: a meta-analysis of published studies. PLoS One. 2014;9(12):e115462.
Li J, Shi L, Li S, et al. Urban-rural disparities in hypertension prevalence, detection, and medication use among Chinese adults from 1993 to 2011. Int J Equity Health. 2017;16(1):50.
Liu X, et al. Hypertension prevalence, awareness, treatment, control, and associated factors in Southwest China: an update. J Hypertens. 2017;35(3):637–44.
Softič N, et al. Prevalence of chronic diseases among adult Slovene population. Slov J Public Health. 2011;50(3):185–90.
Grotto I, Huerta M, Sharabi Y. Hypertension and socioeconomic status. Curr Opin Cardiol. 2008;23(4):335–9.
Setiawan SI, et al. Analysis of socioeconomic status and personal behavior on hypertension in Jakarta, Indonesia: a cross-sectional study. J Comput Theor Nanosci. 2017;23(7):6729–33.
Zhang R, et al. Prehypertension and socioeconomic status: a cross-sectional study in Chongqing, China. Clin Exp Hypertens. 2017;39(1):1.
Liu S, Ding G, et al. Basic public health services' equalization in Gansu province: a cross-sectional study. Chin Health Qual Manag. 2014;21(5):117–20.
Hou Z, Meng Q, Zhang Y. Hypertension prevalence, awareness, treatment, and control following China's healthcare reform.
Am J Hypertens. 2016;29(4):428–31.
Yulong T. The reform situation of basic health service equalization problem. Chin Continuing Med Educ. 2016;8(18):27–8.
Mackenbach JP. Differences in the misreporting of chronic conditions, by level of education: the effect on inequalities in prevalence rates. Am J Public Health. 1996;86(5):706.
Chrestani MA, Santos IS, Matijasevich AM. Self-reported hypertension: validation in a representative cross-sectional survey. Cad Saúde Pública. 2009;25(11):2395.
Vellakkal S, Subramanian SV, Millett C, et al. Socioeconomic inequalities in non-communicable diseases prevalence in India: disparities between self-reported diagnoses and standardized measures. PLoS One. 2013;8(7):e68219.
Zhang B, Zhai F. The China health and nutrition survey, 1989–2011. NIH public access; 2014. https://doi.org/10.1111/obr.12119.
The China Health and Nutrition Survey Database. http://www.cpc.unc.edu/projects/china/about/proj_desc/survey. Accessed 1 Sept 2018.
Popkin BM, Du S, Zhai F, et al. Cohort profile: the China health and nutrition survey—monitoring and understanding socio-economic and health change in China, 1989–2011. Int J Epidemiol. 2009;39(6):1435–40.
Zhang B, Zhai FY, Du SF, et al. The China health and nutrition survey, 1989–2011. Obes Rev. 2014;15:2–7.
Xi B, Liang Y, He T, et al. Secular trends in the prevalence of general and abdominal obesity among Chinese adults, 1993–2009. Obes Rev. 2012;13(3):287–96.
Wang H, Du S, Zhai F, et al. Trends in the distribution of body mass index among Chinese adults, aged 20–45 years (1989–2000). Int J Obes. 2007;31(2):272.
Wilking SV, Belanger A, Kannel WB, et al. Determinants of isolated systolic hypertension. JAMA. 1988;260(23):3451.
Wang Y. Prevent hypertension with mind-body therapy. Gansu Med J. 2013;32(6):428–9.
Li N, He H. Multivariate statistical analysis on risk factors of hypertension among the elderly. Chin J Public Health. 1990;6(5):210–2.
Kakwani N, Wagstaff A, Van Doorslaer E. Socioeconomic inequalities in health: measurement, computation, and statistical inference. J Econ. 1997;77(1):87–103.
Su M, Si Y, Zhou Z, et al. Comparing the income-related inequity of tested prevalence and self-reported prevalence of hypertension in China. Int J Equity Health. 2018;17(1):82.
O'Donnell O, Van Doorslaer E, Wagstaff A, et al. Analyzing health equity using household survey data: a guide to techniques and their implementation. World Bank. 2008;86(10):816.
Vellakkal S, Millett C, et al. Are estimates of socioeconomic inequalities in chronic disease artefactually narrowed by self-reported measures of prevalence in low-income and middle-income countries? Findings from the WHO-SAGE survey. J Epidemiol Community Health. 2015;69(3):218.
Wang J, Ning X. Trends of hypertension prevalence, awareness, treatment, and control in rural areas of northern China during 1991 to 2011. J Hum Hypertens. 2014;28:25–31.
Hall JE, da Silva AA, do Carmo JM, et al. Obesity-induced hypertension: role of sympathetic nervous system, leptin, and melanocortins. J Biol Chem.
2010;285(23):17271–6.
Wang J, Zhang L, et al. Prevalence, awareness, treatment and control of hypertension in China: results from a national survey. Am J Hypertens. 2014;27(11):1355.
Li H, Meng Q. Prevalence, awareness, treatment, and control of hypertension in rural China: results from Shandong Province. 2010;28(3):432–8.
Zhan Y. Health inequality and its measurement—a literature review. World Econ Papers. 2009;(3):109–19.
Graham H. Understanding health inequalities. McGraw-Hill Education; 2009.
Sen A. Health: perception versus observation: self reported morbidity has severe limitations and can be extremely misleading. BMJ. 2002;324(7342):860.
Marten R, et al. An assessment of progress towards universal health coverage in Brazil, Russia, India, China, and South Africa (BRICS). Lancet. 2014;384(9960):2164–71.
Ng M, et al. Effective coverage: a metric for monitoring universal health coverage. PLoS Med. 2014;11(9):e1001730.
Minghui W, et al. A discussion on the matter of the chronic disease management of peasants in the backdrop of the new rural cooperative medical system. Chin Health Serv Manage. 2013;30(4):283–4.
Dai BZ. Analysis on the current dilemma and cause of chronic disease management in rural China. Chin J Public Health Manag. 2017;33(4):38–41.
Yuan X, Xiaoli F, et al. Economic feasibility study on providing National Essential Medicine for free in chronic disease among elderly in rural areas. J Med Forum. 2018;39(05):74–6.

© The Author(s). 2019

1. School of Public Policy and Administration, Xi'an Jiaotong University, Xi'an, People's Republic of China
2. International Business School Suzhou, Xi'an Jiaotong-Liverpool University, Suzhou, People's Republic of China

Cao, D., Zhou, Z., Si, Y. et al. BMC Health Serv Res (2019) 19: 437. https://doi.org/10.1186/s12913-019-4289-5
Received 26 November 2018
Accepted 23 June 2019
First Online 01 July 2019
Publisher Name: BioMed Central
Research Opportunities for Undergraduates

Hyperbolic geometry and Riemann surfaces, and Systolic geometry
Mentor: Bjoern Muetzel

If you are an undergraduate interested in a reading course, independent study or working on a research project, feel free to contact me. I am particularly interested in the following topics. The hyperbolic plane is a space of constant negative curvature minus one, where different rules than in Euclidean space apply for geodesics, the geometry of polygons and the area of disks. A hyperbolic surface can be seen as a polygon in the hyperbolic plane with identified sides. We call such a surface a Riemann surface. Many questions about Riemann surfaces are still open or under study. Hyperbolic geometry is used in the theory of special relativity, particularly Minkowski spacetime. A systole of a surface is a shortest non-contractible loop on the surface. Every surface has a genus \( g \), where informally \( g \) denotes the number of holes. Surprisingly, given any surface of fixed genus \( g \) and area one, the systole cannot take a value larger than \( c \cdot \frac{\log(g)}{ \sqrt{g}} \), where \( c \) is a constant. A large number of families of short curves on surfaces satisfy this upper bound, and example surfaces can be found among the hyperbolic Riemann surfaces.

Research in Algebraic Combinatorics
Advisor: Prof. Orellana

I have a number of projects accessible to undergraduate students in Combinatorics, Algebra and Graph Theory. These projects can lead to a senior thesis for honors or high honors. The ideal student should have taken Math 24 and preferably (although not required) Math 28, 31 (71), 38, and have some programming skills. For more details schedule an appointment.

Explicit methods in number theory
Advisor: Prof. Voight

Classical unsolved problems often serve as the genesis for the formulation of a rich and unified mathematical fabric. Diophantus of Alexandria first sought solutions to algebraic equations in integers almost two thousand years ago. For instance, he stated that if two numbers can each be written as the sum of two squares, then their product is also a sum of two squares: since $5=2^2+1^2$ and $13=3^2+2^2$, then also $13\cdot 5=65$ can be written as the sum of two squares, indeed $65=8^2+1^2$. Equations in which only integer solutions are sought are now called Diophantine equations in his honor. Diophantine equations may seem perfectly innocuous, but in fact within them can be found the deep and wondrously complex universe of number theory. Pierre de Fermat, a seventeenth century French lawyer and mathematician, famously wrote in his copy of Diophantus' treatise "Arithmetica" that "it is impossible to separate a power higher than two into two like powers", i.e., if $n>2$ then the equation $x^n+y^n=z^n$ has no solution in integers $x,y,z\ge 1$; provocatively, he added that he had "discovered a truly marvelous proof of this, which this margin is too narrow to contain." This deceptively simple statement, known as "Fermat's last 'theorem'", remained without proof until the pioneering work of Andrew Wiles, who in 1995 (building on the work of many others) used the full machinery of modern algebra to exhibit a complete proof. Over 300 years, attempts to prove there are no solutions to this innocent equation gave birth to some of the great riches of modern number theory. Even before the work of Wiles, mathematicians recognized that geometric properties often govern the behavior of arithmetic objects.
For example, Diophantus may have asked if there is a cube which is one more than a square, i.e., is there a solution in integers x,y to the equation $E : x^3-y^2=1$? This equation describes a curve in the plane called an elliptic curve, and a property of elliptic curves known as modularity was the central point in Wiles's proof. One sees visibly the solution $(x,y)=(1,0)$ to the equation, but are there any others? What happens if 1 is replaced by 2 or another number? Computational tools provide a means to test conjectures and can sometimes furnish partial solutions; for example, one can check in a fraction of a second on a desktop computer that there is no integral point on E other than $(1,0)$ with the coordinates x,y at most a million (an illustrative sketch of such a search appears at the end of this section). Although this experiment does not furnish a proof, it is strongly suggestive. (Indeed, one can prove there are no other solutions following an argument of Leonhard Euler.) At the same time, theoretical advances fuel dramatic improvements in computation, allowing us to probe further into the Diophantine realm.

My research falls into this area of computational arithmetic geometry: I am concerned with algorithmic aspects of the problem of finding rational and integral solutions to polynomial equations, and I investigate the arithmetic of moduli spaces and elliptic curves. My work blends number theory with explicit methods in algebra, analysis, and geometry in the exciting context of modern computation. This research is primarily theoretical, but it has potential applications in the areas of cryptography and coding theory. The foundation of modern cryptography relies upon the apparent difficulty of certain computational problems in number theory, in particular, the factorization of integers (in RSA) or the discrete logarithm problem (in elliptic curve cryptography).

I have several problems in the area of computational and explicit methods in number theory suitable for experimentation and possible resolution by motivated students. These problems can be tailored to the student based on interests, background, and personality, so there is little need to present the details here; but they all will feature an explicit mathematical approach and, very likely, some computational aspects. Mathematical maturity and curiosity are essential; some background (at the level of MATH 71) is desirable.
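As promised above, here is a minimal, illustrative brute-force search for integral points on $E : x^3 - y^2 = 1$, together with a small check of the sum-of-two-squares identity mentioned earlier. This sketch is not from the original page; the function names and the search bound are chosen purely for demonstration.

from math import isqrt

def integral_points_on_E(bound):
    """Brute-force search for integer solutions of x^3 - y^2 = 1
    with 1 <= x <= bound and y >= 0."""
    points = []
    for x in range(1, bound + 1):
        y2 = x**3 - 1
        y = isqrt(y2)
        if y * y == y2:
            points.append((x, y))
    return points

# The claim in the text: only (1, 0) shows up, even for a large search bound.
print(integral_points_on_E(10**6))   # expected output: [(1, 0)]

def product_of_two_squares(a, b, c, d):
    """Brahmagupta-Fibonacci identity:
    (a^2 + b^2)(c^2 + d^2) = (a*c - b*d)^2 + (a*d + b*c)^2."""
    return (a * c - b * d, a * d + b * c)

# 5 = 2^2 + 1^2 and 13 = 3^2 + 2^2, so 65 = 5 * 13 is again a sum of two squares.
print(product_of_two_squares(2, 1, 3, 2))   # (4, 7): 65 = 4^2 + 7^2 (also 8^2 + 1^2)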
Printed from https://ideas.repec.org/p/vie/viennp/0318.html
Monotone Methods for Equilibrium Selection under Perfect Foresight Dynamics
Daisuke Oyama
Satoru Takahashi
Josef Hofbauer
This paper studies equilibrium selection in supermodular games based on perfect foresight dynamics. A normal form game is played repeatedly in a large society of rational agents. There are frictions: opportunities to revise actions follow independent Poisson processes. Each agent forms his belief about the future evolution of the action distribution in the society, and takes an action that maximizes his expected discounted payoff. A perfect foresight path is defined to be a feasible path of the action distribution along which every agent with a revision opportunity takes a best response to this path itself. A Nash equilibrium is said to be absorbing if any perfect foresight path converges to this equilibrium whenever the initial distribution is sufficiently close to the equilibrium; a Nash equilibrium is said to be globally accessible if for each initial distribution, there exists a perfect foresight path converging to this equilibrium. By exploiting the monotone structure of the dynamics, the unique Nash equilibrium that is absorbing and globally accessible for any small degree of friction is identified for certain classes of supermodular games. For games with monotone potentials, the selection of the monotone potential maximizer is obtained. Complete characterizations of absorption and global accessibility are given for binary supermodular games. An example demonstrates that unanimity games may have multiple globally accessible equilibria for a small friction.
Daisuke Oyama & Satoru Takahashi & Josef Hofbauer, 2003. "Monotone Methods for Equilibrium Selection under Perfect Foresight Dynamics," Vienna Economics Papers 0318, University of Vienna, Department of Economics.
Handle: RePEc:vie:viennp:0318
File URL: http://homepage.univie.ac.at/Papers.Econ/RePEc/vie/viennp/vie0318.pdf
"Equilibrium Selection in Global Games with Strategic Complementarities," Cowles Foundation Discussion Papers 1336, Cowles Foundation for Research in Economics, Yale University. Frankel, David M. & Morris, Stephen & Pauzner, Ady, 2003. "Equilibrium Selection in Global Games with Strategic Complementarities," ISU General Staff Papers 200301010800001098, Iowa State University, Department of Economics. Frankel, David M. & Morris, Stephen & Pauzner, Ady, 2003. "Equilibrium Selection in Global Games with Strategic Complementarities," Staff General Research Papers Archive 11920, Iowa State University, Department of Economics. Carlsson, Hans & van Damme, Eric, 1993. "Global Games and Equilibrium Selection," Econometrica, Econometric Society, vol. 61(5), pages 989-1018, September. Carlsson, H. & van Damme, E.E.C., 1990. "Global games and equilibrium selection," Other publications TiSEM 698f4897-46c6-4097-8265-2, Tilburg University, School of Economics and Management. Hans Carlsson & Eric van Damme, 1993. "Global Games and Equilibrium Selection," Levine's Working Paper Archive 122247000000001088, David K. Levine. Carlsson, H. & Van Damme, E., 1990. "Global Games And Equilibrium Selection," Papers 9052, Tilburg - Center for Economic Research. Carlsson, H. & van Damme, E.E.C., 1990. "Global games and equilibrium selection," Discussion Paper 1990-52, Tilburg University, Center for Economic Research. Carlsson, H. & van Damme, E.E.C., 1993. "Global games and equilibrium selection," Other publications TiSEM 49a54f00-dcec-4fc1-9488-4, Tilburg University, School of Economics and Management. Kandori Michihiro & Rob Rafael, 1995. "Evolution of Equilibria in the Long Run: A General Theory and Applications," Journal of Economic Theory, Elsevier, vol. 65(2), pages 383-414, April. M. Kandori & R. Rob, 2010. "Evolution of Equilibria in the Long Run: A General Theory and Applications," Levine's Working Paper Archive 502, David K. Levine. Young, H Peyton, 1993. "The Evolution of Conventions," Econometrica, Econometric Society, vol. 61(1), pages 57-84, January. Matsui Akihiko & Matsuyama Kiminori, 1995. "An Approach to Equilibrium Selection," Journal of Economic Theory, Elsevier, vol. 65(2), pages 415-434, April. Akihiko Matsui & Kiminori Matsuyama, 1990. "An Approach to Equilibrium Selection," Discussion Papers 970, Northwestern University, Center for Mathematical Studies in Economics and Management Science. Akihiko Matsui & Kiminori Matsuyama, 1991. "An Approach to Equilibrium Selection," Discussion Papers 1065, Northwestern University, Center for Mathematical Studies in Economics and Management Science. Hofbauer, Josef & Sorger, Gerhard, 1999. "Perfect Foresight and Equilibrium Selection in Symmetric Potential Games," Journal of Economic Theory, Elsevier, vol. 85(1), pages 1-23, March. Josef HOFBAUER & Gerhard SORGER, 1998. "Perfect Foresight and Equilibrium Selection in Symmetric Potential Games," Vienna Economics Papers vie9802, University of Vienna, Department of Economics. Gerhard SORGER, 1998. "Perfect Foresight and Equilibrium Selection in Symmetric Potential Games," Vienna Economics Papers 9802, University of Vienna, Department of Economics. Kandori, Michihiro & Mailath, George J & Rob, Rafael, 1993. "Learning, Mutation, and Long Run Equilibria in Games," Econometrica, Econometric Society, vol. 61(1), pages 29-56, January. Kandori, M. & Mailath, G.J., 1991. "Learning, Mutation, And Long Run Equilibria In Games," Papers 71, Princeton, Woodrow Wilson School - John M. Olin Program. M. Kandori & G. 
Mailath & R. Rob, 1999. "Learning, Mutation and Long Run Equilibria in Games," Levine's Working Paper Archive 500, David K. Levine. Matsui, Akihiko & Oyama, Daisuke, 2006. "Rationalizable foresight dynamics," Games and Economic Behavior, Elsevier, vol. 56(2), pages 299-322, August. Akihiko Matsui & Daisuke Oyama, 2002. "Rationalizable Foresight Dynamics: Evolution and Rationalizability," Vienna Economics Papers 0302, University of Vienna, Department of Economics. Atsushi Kajii & Stephen Morris, 1997. "The Robustness of Equilibria to Incomplete Information," Econometrica, Econometric Society, vol. 65(6), pages 1283-1310, November. Atsushi Kajii & Stephen Morris, "undated". "The Robustness of Equilibria to Incomplete Information," Penn CARESS Working Papers ed504c985fc375cbe719b3f60, Penn Economics Department. Atsushi Kajii & Stephen Morris, "undated". ""The Robustness of Equilibria to Incomplete Information*''," CARESS Working Papres 95-18, University of Pennsylvania Center for Analytic Research and Economics in the Social Sciences. Gilboa, Itzhak & Matsui, Akihiko, 1991. "Social Stability and Equilibrium," Econometrica, Econometric Society, vol. 59(3), pages 859-867, May. Itzhak Gilboa & Akihiko Matsui, 1991. "Social Stability and Equilibrium," Post-Print hal-00753235, HAL. I. Gilboa & A. Matsui, 2010. "Social Stability and Equilibrium," Levine's Working Paper Archive 534, David K. Levine. Selten, Reinhard, 1995. "An axiomatic theory of a risk dominance measure for bipolar games with linear incentives," Games and Economic Behavior, Elsevier, vol. 8(1), pages 213-263. Josef Hofbauer & Gerhard Sorger, 2002. "A Differential Game Approach To Evolutionary Equilibrium Selection," International Game Theory Review (IGTR), World Scientific Publishing Co. Pte. Ltd., vol. 4(01), pages 17-31. Kaneda Mitsuhiro, 1995. "Industrialization under Perfect Foresight: A World Economy with a Continuum of Countries," Journal of Economic Theory, Elsevier, vol. 66(2), pages 437-462, August. Tercieux, Olivier, 2006. "p-Best response set," Journal of Economic Theory, Elsevier, vol. 131(1), pages 45-70, November. Olivier Tercieux, 2006. "p-Best response set," Post-Print halshs-00754120, HAL. Athey, Susan, 2001. "Single Crossing Properties and the Existence of Pure Strategy Equilibria in Games of Incomplete Information," Econometrica, Econometric Society, vol. 69(4), pages 861-889, July. Athey, S., 1997. "Sigle Crossing Properties and the Existence of Pure Strategy Equilibria in Games of Incomplete Information," Working papers 97-11, Massachusetts Institute of Technology (MIT), Department of Economics. Kim, Youngse, 1996. "Equilibrium Selection inn-Person Coordination Games," Games and Economic Behavior, Elsevier, vol. 15(2), pages 203-227, August. Vives, Xavier, 1990. "Nash equilibrium with strategic complementarities," Journal of Mathematical Economics, Elsevier, vol. 19(3), pages 305-321. Vives, X., 1988. "Nash Equilibrium With Strategic Complementarities," UFAE and IAE Working Papers 107-88, Unitat de Fonaments de l'Anàlisi Econòmica (UAB) and Institut d'Anàlisi Econòmica (CSIC). Kiminori Matsuyama, 1991. "Increasing Returns, Industrialization, and Indeterminacy of Equilibrium," The Quarterly Journal of Economics, Oxford University Press, vol. 106(2), pages 617-650. Kiminori Matsuyama, 1990. "Increasing Returns, Industrialization and Indeterminacy of Equilibrium," Discussion Papers 878, Northwestern University, Center for Mathematical Studies in Economics and Management Science. 
Morris, Stephen & Ui, Takashi, 2005. "Generalized potentials and robust sets of equilibria," Journal of Economic Theory, Elsevier, vol. 124(1), pages 45-78, September. Stephen Morris & Takashi Ui, 2003. "Generalized Potentials and Robust Sets of Equilibria," Cowles Foundation Discussion Papers 1394, Cowles Foundation for Research in Economics, Yale University. smorris & Takashi Ui, 2004. "Generalized Potentials and Robust Sets of Equilibria," Econometric Society 2004 North American Winter Meetings 45, Econometric Society. Stephen Morris & Takashi Ui, 2003. "Generalized Potentials and Robust Sets of Equilibria," Levine's Working Paper Archive 506439000000000325, David K. Levine. Josef Hofbauer & William H. Sandholm, 2002. "On the Global Convergence of Stochastic Fictitious Play," Econometrica, Econometric Society, vol. 70(6), pages 2265-2294, November. Oyama, Daisuke, 2002. "p-Dominance and Equilibrium Selection under Perfect Foresight Dynamics," Journal of Economic Theory, Elsevier, vol. 107(2), pages 288-310, December. Matsuyama, Kiminori, 1992. "The market size, entrepreneurship, and the big push," Journal of the Japanese and International Economies, Elsevier, vol. 6(4), pages 347-364, December. Milgrom, Paul & Roberts, John, 1990. "Rationalizability, Learning, and Equilibrium in Games with Strategic Complementarities," Econometrica, Econometric Society, vol. 58(6), pages 1255-1277, November. J. Hofbauer, 1999. "The spatially dominant equilibrium of a game," Annals of Operations Research, Springer, vol. 89(0), pages 233-251, January. Oyama, Daisuke & Tercieux, Olivier, 2009. "Iterated potential and robustness of equilibria," Journal of Economic Theory, Elsevier, vol. 144(4), pages 1726-1769, July. Oyama, Daisuke & Tercieux, Olivier, 2004. "Iterated Potential and Robustness of Equilibria," MPRA Paper 1599, University Library of Munich, Germany. Daisuke Oyama & Olivier Tercieux, 2009. "Iterated potential and robustness of equilibria," PSE-Ecole d'économie de Paris (Postprint) halshs-00754349, HAL. Daisuke Oyama & Olivier Tercieux, 2009. "Iterated potential and robustness of equilibria," Post-Print halshs-00754349, HAL. Daisuke Oyama & Satoru Takahashi & Josef Hofbauer, 2011. "Perfect foresight dynamics in binary supermodular games," International Journal of Economic Theory, The International Society for Economic Theory, vol. 7(3), pages 251-267, September. Stephen Morris & Hyun Song Shin, 2000. "Global Games: Theory and Applications," Cowles Foundation Discussion Papers 1275, Cowles Foundation for Research in Economics, Yale University. Stephen Morris & Hyun S Shin, 2001. "Global Games: Theory and Applications," Levine's Working Paper Archive 122247000000001080, David K. Levine. Stephen Morris & Hyun Song Shin, 2000. "Global Games: Theory and Applications," Cowles Foundation Discussion Papers 1275R, Cowles Foundation for Research in Economics, Yale University, revised Aug 2001. Kojima, Fuhito & Takahashi, Satoru, 2008. "p-Dominance and perfect foresight dynamics," Journal of Economic Behavior & Organization, Elsevier, vol. 67(3-4), pages 689-701, September. Kojima, Fuhito, 2006. "Risk-dominance and perfect foresight dynamics in N-player games," Journal of Economic Theory, Elsevier, vol. 128(1), pages 255-273, May. Jun Honda, 2018. "Games with the total bandwagon property meet the Quint–Shubik conjecture," International Journal of Game Theory, Springer;Game Theory Society, vol. 47(3), pages 893-912, September. Honda, Jun, 2015. 
"Games with the Total Bandwagon Property," Department of Economics Working Paper Series 197, WU Vienna University of Economics and Business. Jun Honda, 2015. "Games with the Total Bandwagon Property," Department of Economics Working Papers wuwp197, Vienna University of Economics and Business, Department of Economics. Zhang, Boyu, 2016. "Quantal response methods for equilibrium selection in normal form games," Journal of Mathematical Economics, Elsevier, vol. 64(C), pages 113-123. Iijima, Ryota, 2015. "Iterated generalized half-dominance and global game selection," Journal of Economic Theory, Elsevier, vol. 159(PA), pages 120-136. Boyu Zhang & Josef Hofbauer, 2015. "Equilibrium selection via replicator dynamics in $$2 \times 2$$ 2 × 2 coordination games," International Journal of Game Theory, Springer;Game Theory Society, vol. 44(2), pages 433-448, May. Oyama, Daisuke, 2009. "Agglomeration under forward-looking expectations: Potentials and global stability," Regional Science and Urban Economics, Elsevier, vol. 39(6), pages 696-713, November. Oyama, Daisuke, 2006. "Agglomeration under Forward-Looking Expectations: Potentials and Global Stability," MPRA Paper 15239, University Library of Munich, Germany. Maruta, Toshimasa & Okada, Akira, 2012. "Stochastically stable equilibria in n-person binary coordination games," Mathematical Social Sciences, Elsevier, vol. 63(1), pages 31-42. Hofbauer, Josef & Sandholm, William H., 2007. "Evolution in games with randomly disturbed payoffs," Journal of Economic Theory, Elsevier, vol. 132(1), pages 47-69, January. Hofbauer,J. & Sandholm,W.H., 2003. "Evolution in games with randomly disturbed payoffs," Working papers 20, Wisconsin Madison - Social Systems. Sandholm, William H., 2015. "Population Games and Deterministic Evolutionary Dynamics," Handbook of Game Theory with Economic Applications,, Elsevier. Kets, Willemien & Kager, Wouter & Sandroni, Alvaro, 2022. "The value of a coordination game," Journal of Economic Theory, Elsevier, vol. 201(C). Kets, Willemien & Kager, Wouter & Sandroni, Alvaro, 2021. "The Value of a Coordination Game," SocArXiv ymzrd, Center for Open Science. Kager, Wouter & Kets, Willemien & Sandroni, Alvaro, 2021. "The Value of a Coordination Game," CEPR Discussion Papers 16229, C.E.P.R. Discussion Papers. Willemien Kets & Wouter Kager & Alvaro Sandroni, 2021. "The Value of the Coordination Game," Economics Series Working Papers 938, University of Oxford, Department of Economics. C72 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Noncooperative Games C73 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Stochastic and Dynamic Games; Evolutionary Games This item is featured on the following reading lists, Wikipedia, or ReplicationWiki pages: Socio-Economics of Innovation All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:vie:viennp:0318. See general information about how to correct material in RePEc. For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: . General contact details of provider: https://econ.univie.ac.at/ . For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Paper Administrator (email available below). 
\begin{document} \title{Wigner operator's new transformation in phase space quantum mechanics and its applications \thanks{{\small Work supported by the National Natural Science Foundation of China under grant: 10775097, 10874174, and Specialized research fund for the doctoral program of higher education of China}} } \author{$^{1,2}$Hong-yi Fan\\$^{1}${\small Department of Physics, Shanghai Jiao Tong University, \ Shanghai, 200030, China}\\{\small \ }$^{2}${\small Department of Material Science and Engineering,}\\{\small \ University of Science and Technology of China, Hefei, Anhui 230026, China}} \maketitle \begin{abstract} Using operators' Weyl ordering expansion formula (Hong-yi Fan,\emph{\ }J. Phys. A 25 (1992) 3443) we find a new two-fold integration transformation about the Wigner operator $\Delta \left( q^{\prime},p^{\prime}\right) $ (a $q$-number transform) in phase space quantum mechanics, \[ \iint_{-\infty}^{\infty}\frac{\mathtt{d}p^{\prime}\mathtt{d}q^{\prime}}{\pi }\Delta \left( q^{\prime},p^{\prime}\right) e^{-2i\left( p-p^{\prime }\right) \left( q-q^{\prime}\right) }=\delta \left( p-P\right) \delta \left( q-Q\right) , \] and its inverse \[ \iint_{-\infty}^{\infty}\mathtt{d}q\mathtt{d}p\delta \left( p-P\right) \delta \left( q-Q\right) e^{2i\left( p-p^{\prime}\right) \left( q-q^{\prime}\right) }=\Delta \left( q^{\prime},p^{\prime}\right) , \] where $Q,$ $P$ are the coordinate and momentum operators, respectively. We apply it to the study of mutual conversion formulas among $Q-P$ ordering, $P-Q$ ordering and Weyl ordering of operators. In this way, the contents of phase space quantum mechanics can be enriched. PACS: 03.65.-w, 02.90.+p Keywords: Wigner operator; Weyl ordering; two-fold integration transformation \end{abstract} \section{Introduction} Phase space quantum mechanics (PSQM), pioneered by Wigner [1] and Weyl [2], has received more and more attention since the foundation of quantum mechanics, because it has wide applications in quantum statistics, quantum optics, and quantum chemistry. In PSQM observables and states are replaced by functions on classical phase space, so that expected values are calculated, as in classical statistical physics, by averaging over the phase space. The phase-space approach provides valuable physical insight and allows us to describe classical and quantum processes alike in a similar language. The development of phase space quantum mechanics [3-5] has always been accompanied by the need to solve operator ordering problems.
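To make the ordering problem concrete (this short illustration is an added remark, not part of the original text), consider the simplest classical monomial $qp$: since $[Q,P]=\mathtt{i}\hbar$, the two naive operator candidates already differ,
\[
QP-PQ=\mathtt{i}\hbar \neq 0,
\]
so some ordering prescription must be chosen; the symmetric choice
\[
qp\rightarrow \frac{1}{2}\left( QP+PQ\right)
\]
is precisely what the Weyl rule recalled in Eqs. (1)-(2) below gives for the case $m=n=1$.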
Weyl proposed a scheme for quantizing the classical coordinate and momentum quantities $q^{m}p^{n}$ ($c$-numbers) as quantum operators ($q$-numbers) in the following way \begin{equation} q^{m}p^{n}\rightarrow \left( \frac{1}{2}\right) ^{m}\sum_{l=0}^{m}\binom {m}{l}Q^{m-l}P^{n}Q^{l},\label{1} \end{equation} where $Q,$ $P$ are the coordinate and momentum operators, respectively, $[Q,P]=\mathtt{i}\hbar.$ (Later in this work we set $\hbar=1$.) The right-hand side of (\ref{1}) is in Weyl ordering, so we introduced the symbol $ \genfrac{}{}{0pt}{}{:}{:} \genfrac{}{}{0pt}{}{:}{:} $ to characterize it [6-7], and \begin{align} q^{m}p^{n} & \rightarrow \left( \frac{1}{2}\right) ^{m}\sum_{l=0}^{m} \binom{m}{l}Q^{m-l}P^{n}Q^{l}\nonumber \\ & = \genfrac{}{}{0pt}{}{:}{:} \left( \frac{1}{2}\right) ^{m}\sum_{l=0}^{m}\binom{m}{l}Q^{m-l}P^{n}Q^{l} \genfrac{}{}{0pt}{}{:}{:} = \genfrac{}{}{0pt}{}{:}{:} Q^{m}P^{n} \genfrac{}{}{0pt}{}{:}{:} ,\label{2} \end{align} where in the second step we have used the property that Bose operators are commutative within $ \genfrac{}{}{0pt}{}{:}{:} \genfrac{}{}{0pt}{}{:}{:} .$ This is like the fact that Bose operators are commutative within the normal ordering symbol $:$ $:$. The Weyl quantization rule between an operator $H\left( P,Q\right) $ and its classical correspondence is \begin{equation} H\left( P,Q\right) =\iint_{-\infty}^{\infty}\mathtt{d}q\mathtt{d}ph\left( p,q\right) \Delta \left( q,p\right) ,\label{3} \end{equation} where $\Delta \left( q,p\right) $ is the Wigner operator [2-5] [8]. Using $ \genfrac{}{}{0pt}{}{:}{:} \genfrac{}{}{0pt}{}{:}{:} $ we have invented the technique of integration within Weyl ordered products of operators, with which we constructed an operators' Weyl ordering expansion formula (see Eq. (21) below, which is the same as Eq. (53) in Ref. [6]). In this work we shall use this formula to find a new two-fold $q$-number integration transformation about the Wigner operator $\Delta \left( q^{\prime },p^{\prime}\right) $ in phase space quantum mechanics (see Eqs. (33) and (34) below), which helps to convert P-Q ordering and Q-P ordering to Weyl ordering, and vice versa. The work is arranged as follows: In Sec. 2 we briefly review the Weyl ordered form of the Wigner operator. In Sec. 3 we derive the Weyl ordering forms of $\delta \left( p-P\right) \delta \left( q-Q\right) $ and $\delta \left( q-Q\right) \delta \left( p-P\right) $; their transformation to the Wigner operator is shown in Sec. 4. Based on Sec. 4, we propose in Sec. 5 a new $c$-number integration transformation in $p-q$ phase space, see Eq. (35) below, and its inverse transformation, which possesses a Parseval-like theorem. Secs. 6-8 are devoted to deriving mutual conversion formulas among $Q-P$ ordering, $P-Q$ ordering and Weyl ordering of operators. In this way, the contents of phase space quantum mechanics can be enriched. \section{The Weyl ordered form of Wigner operator} According to Eq. (3) we can rewrite Eq.
(2) as \begin{equation} \genfrac{}{}{0pt}{}{:}{:} Q^{m}P^{n} \genfrac{}{}{0pt}{}{:}{:} =\iint \mathtt{d}q\mathtt{d}pq^{m}p^{n}\Delta \left( q,p\right) ,\label{4} \end{equation} which implies that the integration kernel (the Wigner operator) is [6-7] \begin{equation} \Delta \left( q,p\right) = \genfrac{}{}{0pt}{}{:}{:} \delta \left( q-Q\right) \delta \left( p-P\right) \genfrac{}{}{0pt}{}{:}{:} = \genfrac{}{}{0pt}{}{:}{:} \delta \left( p-P\right) \delta \left( q-Q\right) \genfrac{}{}{0pt}{}{:}{:} .\label{5} \end{equation} Substituting (5) into (3) yields $H\left( P,Q\right) = \genfrac{}{}{0pt}{}{:}{:} h\left( P,Q\right) \genfrac{}{}{0pt}{}{:}{:} ,$ where $ \genfrac{}{}{0pt}{}{:}{:} h\left( P,Q\right) \genfrac{}{}{0pt}{}{:}{:} $ is just the result of replacing $p\rightarrow P,q\rightarrow Q$ in $h\left( p,q\right) $ and then putting it within $ \genfrac{}{}{0pt}{}{:}{:} \genfrac{}{}{0pt}{}{:}{:} .$ Further, using \begin{equation} Q=\frac{a+a^{\dagger}}{\sqrt{2}},\text{ \ }P=\frac{a-a^{\dagger}}{\sqrt {2}\mathtt{i}},\text{ }\alpha=\frac{q+\mathtt{i}p}{\sqrt{2}},\text{ }\left[ a,a^{\dagger}\right] =1,\label{6} \end{equation} we can express \begin{equation} \Delta \left( q,p\right) \rightarrow \Delta \left( \alpha,\alpha^{\ast }\right) =\frac{1}{2} \genfrac{}{}{0pt}{}{:}{:} \delta \left( \alpha-a\right) \delta \left( \alpha^{\ast}-a^{\dagger}\right) \genfrac{}{}{0pt}{}{:}{:} .\label{7} \end{equation} It then follows \begin{align} \genfrac{}{}{0pt}{}{:}{:} K\left( a^{\dagger},a\right) \genfrac{}{}{0pt}{}{:}{:} & =\int \mathtt{d}^{2}\alpha K\left( \alpha^{\ast},\alpha \right) \genfrac{}{}{0pt}{}{:}{:} \delta \left( \alpha-a\right) \delta \left( \alpha^{\ast}-a^{\dagger}\right) \genfrac{}{}{0pt}{}{:}{:} \nonumber \\ & =2\int \mathtt{d}^{2}\alpha K\left( \alpha^{\ast},\alpha \right) \Delta \left( \alpha,\alpha^{\ast}\right) ,\label{8} \end{align} Thus the neat expression of $\Delta \left( q,p\right) $ in Dirac's delta function form is very useful, one of its uses is that the marginal distributions of Wigner operator can be clearly shown, due to the coordinate and momentum projectors are respectively \begin{equation} \left \vert q\right \rangle \left \langle q\right \vert =\delta \left( q-Q\right) = \genfrac{}{}{0pt}{}{:}{:} \delta \left( q-Q\right) \genfrac{}{}{0pt}{}{:}{:} ,\label{9} \end{equation} \begin{equation} \left \vert p\right \rangle \left \langle p\right \vert =\delta \left( p-P\right) = \genfrac{}{}{0pt}{}{:}{:} \delta \left( p-P\right) \genfrac{}{}{0pt}{}{:}{:} ,\label{10} \end{equation} we immediately know that the following marginal integration \begin{equation} \int_{-\infty}^{\infty}\mathtt{d}q\Delta \left( q,p\right) =\int_{-\infty }^{\infty}\mathtt{d}q \genfrac{}{}{0pt}{}{:}{:} \delta \left( q-Q\right) \delta \left( p-P\right) \genfrac{}{}{0pt}{}{:}{:} = \genfrac{}{}{0pt}{}{:}{:} \delta \left( p-P\right) \genfrac{}{}{0pt}{}{:}{:} =\left \vert p\right \rangle \left \langle p\right \vert ,\label{11} \end{equation} similarly, \begin{equation} \int_{-\infty}^{\infty}\mathtt{d}p\Delta \left( q,p\right) = \genfrac{}{}{0pt}{}{:}{:} \delta \left( q-Q\right) \genfrac{}{}{0pt}{}{:}{:} =\left \vert q\right \rangle \left \langle q\right \vert .\label{12} \end{equation} It then follows the completeness of $\Delta \left( q,p\right) ,$ \begin{equation} \iint \limits_{-\infty}^{\infty}\mathtt{d}q\mathtt{d}p\Delta \left( q,p\right) =1,\label{13} \end{equation} so the Weyl rule for $H\left( P,Q\right) $ in (3) can also be viewed as $H$'s expansion in terms of $\Delta \left( q,p\right) .$ When 
$H\left( P,Q\right) $ is in Weyl ordered, which means $H\left( P,Q\right) = \genfrac{}{}{0pt}{}{:}{:} H\left( P,Q\right) \genfrac{}{}{0pt}{}{:}{:} ,$ then using the completeness (13) we see \begin{equation} \genfrac{}{}{0pt}{}{:}{:} H\left( P,Q\right) \genfrac{}{}{0pt}{}{:}{:} = \genfrac{}{}{0pt}{}{:}{:} H\left( P,Q\right) \genfrac{}{}{0pt}{}{:}{:} \iint \limits_{-\infty}^{\infty}\mathtt{d}q\mathtt{d}p\Delta \left( q,p\right) =\iint \limits_{-\infty}^{\infty}\mathtt{d}q\mathtt{d}pH\left( q,p\right) \Delta \left( q,p\right) ,\label{14} \end{equation} as if $\Delta \left( q,p\right) $ was the "eigenvector" of $ \genfrac{}{}{0pt}{}{:}{:} H\left( P,Q\right) \genfrac{}{}{0pt}{}{:}{:} .$ On the other hand, due to the normally ordered forms of $\left \vert q\right \rangle \left \langle q\right \vert $ and $\left \vert p\right \rangle \left \langle p\right \vert $ [8] \begin{equation} \left \vert q\right \rangle \left \langle q\right \vert =\frac{1}{\sqrt{\pi} }\colon e^{-\left( q-Q\right) ^{2}}\colon,\label{15} \end{equation} \begin{equation} \left \vert p\right \rangle \left \langle p\right \vert =\frac{1}{\sqrt{\pi} }\colon e^{-\left( p-P\right) ^{2}}\colon,\label{16} \end{equation} we know the normally ordered form of $\Delta \left( q,p\right) $ [9] \begin{equation} \Delta \left( q,p\right) =\frac{1}{\pi}\colon e^{-\left( q-Q\right) ^{2}-\left( p-P\right) ^{2}}\colon=\frac{1}{\pi}\colon e^{-2\left( \alpha^{\ast}-a^{\dagger}\right) \left( \alpha-a\right) }\colon =\Delta \left( \alpha,\alpha^{\ast}\right) .\label{17} \end{equation} Using the completeness relation of the coherent state $\left \vert \beta \right \rangle ,$ \begin{equation} \int \frac{d^{2}\beta}{\pi}\left \vert \beta \right \rangle \left \langle \beta \right \vert =1,\text{\ }\left \vert \beta \right \rangle =\exp[-\frac {|\beta|^{2}}{2}+\beta a^{\dagger}]\left \vert 0\right \rangle ,\text{ \ }a\left \vert \beta \right \rangle =\beta \left \vert \beta \right \rangle ,\label{18} \end{equation} where $\left[ a,a^{\dagger}\right] =1,$ $\left \vert \beta \right \rangle $ is the coherent state [10-11], we have \begin{align} 2\pi \mathtt{Tr}\Delta \left( \alpha,\alpha^{\ast}\right) & =2\mathtt{Tr} \left[ \colon e^{-2\left( \alpha^{\ast}-a^{\dagger}\right) \left( \alpha-a\right) }\colon \int \frac{\mathtt{d}^{2}\beta}{\pi}\left \vert \beta \right \rangle \left \langle \beta \right \vert \right] \nonumber \\ & =2\int \frac{\mathtt{d}^{2}\beta}{\pi}e^{-2\left( \alpha^{\ast}-\beta^{\ast }\right) \left( \alpha-\beta \right) }=1,\label{19} \end{align} this is equivalent to (13). 
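As a simple consistency check (this remark is added for illustration and is not part of the original derivation), Eq. (17) immediately gives the familiar Wigner function of the vacuum state: since $a\left \vert 0\right \rangle =0$, the normally ordered Gaussian reduces to
\[
\left \langle 0\right \vert \Delta \left( q,p\right) \left \vert 0\right \rangle =\frac{1}{\pi}\left \langle 0\right \vert \colon e^{-\left( q-Q\right) ^{2}-\left( p-P\right) ^{2}}\colon \left \vert 0\right \rangle =\frac{1}{\pi}e^{-q^{2}-p^{2}},
\]
whose integral over the whole $q-p$ plane equals one, in agreement with the completeness relation (13).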
Using (17) we also easily obtain \begin{align} & \mathtt{Tr}\left[ \Delta \left( \alpha,\alpha^{\ast}\right) \Delta \left( \alpha^{\prime},\alpha^{\prime \ast}\right) \right] \nonumber \\ & =\frac{1}{\pi^{2}}\mathtt{Tr}\left[ \colon e^{-2\left( \alpha^{\ast }-a^{\dagger}\right) \left( \alpha-a\right) }\colon \int \frac{\mathtt{d} ^{2}\beta}{\pi}\left \vert \beta \right \rangle \left \langle \beta \right \vert \colon e^{-2\left( \alpha^{\prime \ast}-a^{\dagger}\right) \left( \alpha^{\prime}-a\right) }\colon \right] \nonumber \\ & =\mathtt{Tr}\left[ \int \frac{\mathtt{d}^{2}\beta}{\pi^{3}}e^{-2\left( \alpha^{\ast}-a^{\dagger}\right) \left( \alpha-\beta \right) }\left \vert \beta \right \rangle \left \langle \beta \right \vert e^{-2\left( \alpha ^{\prime \ast}-\beta^{\ast}\right) \left( \alpha^{\prime}-a\right) }\right] \nonumber \\ & =\int \frac{\mathtt{d}^{2}\beta}{\pi}\left \langle \beta \right \vert e^{-2\left( \alpha^{\prime \ast}-\beta^{\ast}\right) \left( \alpha^{\prime }-a\right) }e^{-2\left( \alpha^{\ast}-a^{\dagger}\right) \left( \alpha-\beta \right) }\left \vert \beta \right \rangle \nonumber \\ & =\int \frac{\mathtt{d}^{2}\beta}{\pi}e^{-2\left( \alpha^{\ast}-\beta^{\ast }\right) \left( \alpha-\beta \right) -2\left( \alpha^{\prime \ast} -\beta^{\ast}\right) \left( \alpha^{\prime}-\beta \right) }e^{4\left( \alpha-\beta \right) \left( \alpha^{\prime \ast}-\beta^{\ast}\right) }\nonumber \\ & =\int \frac{\mathtt{d}^{2}\beta}{\pi^{3}}e^{2\beta^{\ast}\left( \alpha^{\prime}-\alpha \right) -2\beta \left( \alpha^{\prime \ast}-\alpha ^{\ast}\right) -2|\alpha|^{2}-2|\alpha^{\prime}|^{2}+4\alpha \alpha ^{\prime \ast}}\nonumber \\ & =\frac{1}{4\pi}\delta \left( \alpha-\alpha^{\prime}\right) \delta \left( \alpha^{\ast}-\alpha^{\prime \ast}\right) .\label{20} \end{align} \section{Weyl ordering of $\delta \left( p-P\right) \delta \left( q-Q\right) $ and $\delta \left( q-Q\right) \delta \left( p-P\right) $} In Refs. [6-7] we have presented operators' Weyl ordering expansion formula \begin{equation} \rho=2\int \frac{\mathtt{d}^{2}\beta}{\pi} \genfrac{}{}{0pt}{}{:}{:} \left \langle -\beta \right \vert \rho \left \vert \beta \right \rangle \exp \left[ 2\left( \beta^{\ast}a-a^{\dagger}\beta+a^{\dagger}a\right) \right] \genfrac{}{}{0pt}{}{:}{:} .\label{21} \end{equation} For the pure coherent state density operator $\left \vert \alpha \right \rangle \left \langle \alpha \right \vert ,$ using (21) and the overlap $\left \langle \alpha \right \vert \left. \beta \right \rangle =\exp[-\frac{1}{2}\left( |\alpha|^{2}+|\beta|^{2}\right) +\alpha^{\ast}\beta]$ we derive \begin{align} \left \vert \alpha \right \rangle \left \langle \alpha \right \vert & =2 \genfrac{}{}{0pt}{}{:}{:} \int \frac{\mathtt{d}^{2}\beta}{\pi}\left \langle -\beta \right \vert \left. \alpha \right \rangle \left \langle \alpha \right \vert \left. \beta \right \rangle \exp[2\left( \beta^{\ast}a-a^{\dagger}\beta+a^{\dagger}a\right) ] \genfrac{}{}{0pt}{}{:}{:} \nonumber \\ & =2 \genfrac{}{}{0pt}{}{:}{:} \exp \left[ -2\left( \alpha-a\right) \left( \alpha^{\ast}-a^{\dagger }\right) \right] \genfrac{}{}{0pt}{}{:}{:} \nonumber \\ & =2 \genfrac{}{}{0pt}{}{:}{:} \exp \left[ -\left( p-P\right) ^{2}-\left( q-Q\right) ^{2}\right] \genfrac{}{}{0pt}{}{:}{:} ,\label{22} \end{align} thus the Weyl ordered form of pure coherent state $\left \vert \alpha \right \rangle \left \langle \alpha \right \vert $ is a Gaussian in $p-q$ space. Combining Eqs. 
(21), (8) and (20) yields \begin{align} 2\pi \mathtt{Tr}\left[ \rho \Delta \left( \alpha,\alpha^{\ast}\right) \right] & =4\int \mathtt{d}^{2}\beta \left \langle -\beta \right \vert \rho \left \vert \beta \right \rangle \mathtt{Tr}\left \{ \genfrac{}{}{0pt}{}{:}{:} \exp \left[ 2\left( \beta^{\ast}a-a^{\dagger}\beta+a^{\dagger}a\right) \right] \genfrac{}{}{0pt}{}{:}{:} \Delta \left( \alpha,\alpha^{\ast}\right) \right \} \nonumber \\ & =4\int \mathtt{d}^{2}\beta \left \langle -\beta \right \vert \rho \left \vert \beta \right \rangle \mathtt{Tr}\left[ 2\int \mathtt{d}^{2}\alpha^{\prime} \exp \left[ 2\left( \beta^{\ast}\alpha^{\prime}-\alpha^{\prime \ast} \beta+\alpha^{\prime \ast}\alpha^{\prime}\right) \right] \Delta \left( \alpha^{\prime},\alpha^{\prime \ast}\right) \Delta \left( \alpha,\alpha^{\ast }\right) \right] \nonumber \\ & =2\int \frac{\mathtt{d}^{2}\beta}{\pi}\left \langle -\beta \right \vert \rho \left \vert \beta \right \rangle \int \mathtt{d}^{2}\alpha^{\prime}\exp \left[ 2\left( \beta^{\ast}\alpha^{\prime}-\alpha^{\prime \ast}\beta+\alpha ^{\prime \ast}\alpha \right) \right] \delta \left( \alpha-\alpha^{\prime }\right) \delta \left( \alpha^{\ast}-\alpha^{\prime \ast}\right) \nonumber \\ & =2\int \frac{\mathtt{d}^{2}\beta}{\pi}\left \langle -\beta \right \vert \rho \left \vert \beta \right \rangle \exp \left[ 2\left( \beta^{\ast} \alpha-\alpha^{\ast}\beta+\alpha^{\ast}\alpha \right) \right] ,\label{23} \end{align} which is just an alternate expression of the Wigner function of $\rho,$ comparing (21) with (23) we see that the latter is just the result of replacing $a\rightarrow \alpha,$ $a^{\dagger}\rightarrow \alpha^{\ast},$ in the former, this is because that the right hand side of (21) is in Weyl ordering. Now we examine what is the Weyl ordering of $\delta \left( p-P\right) \delta \left( q-Q\right) .$ Using the completeness relation of $\left \vert q\right \rangle ,$ the coordinate eigenstate, and the completeness relation of the momentum eigenstate $\left \vert p\right \rangle ,$ $\left \langle q\right. \left \vert p\right \rangle =\frac{1}{\sqrt{2\pi}}e^{\mathtt{i}pq},$ we have \begin{align} \delta \left( p-P\right) \delta \left( q-Q\right) & =\int \mathtt{d} p^{\prime}\left \vert p^{\prime}\right \rangle \left \langle p^{\prime }\right \vert \delta \left( p-P\right) \delta \left( q-Q\right) \int \mathtt{d}q^{\prime}\left \vert q^{\prime}\right \rangle \left \langle q^{\prime}\right \vert \nonumber \\ & =\frac{1}{\sqrt{2\pi}}\int \mathtt{d}p^{\prime}\left \vert p^{\prime }\right \rangle \int \mathtt{d}q^{\prime}\left \langle q^{\prime}\right \vert \delta \left( p-p^{\prime}\right) \delta \left( q-q^{\prime}\right) e^{-\mathtt{i}p^{\prime}q^{\prime}}\nonumber \\ & =\frac{1}{\sqrt{2\pi}}\left \vert p\right \rangle \left \langle q\right \vert e^{-\mathtt{i}pq}.\label{24} \end{align} The overlap between $\left \langle q\right \vert $ and the coherent state is \begin{equation} \left \langle q\right \vert \left. \beta \right \rangle =\pi^{-1/4}\exp \left \{ -\frac{q^{2}}{2}+\sqrt{2}q\beta-\frac{1}{2}\beta^{2}-\frac{1}{2}|\beta |^{2}\right \} ,\label{25} \end{equation} and \begin{equation} \left \langle -\beta \right. 
\left \vert p\right \rangle =\pi^{-1/4}\exp \left \{ -\frac{p^{2}}{2}-\sqrt{2}ip\beta^{\ast}+\frac{1}{2}\beta^{\ast2}-\frac{1} {2}|\beta|^{2}\right \} .\label{26} \end{equation} Substituting (24) into (21) and using (25)-(26) lead to \begin{align} & \delta \left( p-P\right) \delta \left( q-Q\right) \nonumber \\ & =\frac{\sqrt{2}}{\pi}\int \frac{d^{2}\beta}{\pi} \genfrac{}{}{0pt}{}{:}{:} \left \langle -\beta \right \vert \left. p\right \rangle \left \langle q\right \vert e^{-\mathtt{i}pq}\left \vert \beta \right \rangle \exp \left[ 2\left( \beta^{\ast}a-a^{\dagger}\beta+a^{\dagger}a\right) \right] \genfrac{}{}{0pt}{}{:}{:} \nonumber \\ & =\frac{\sqrt{2}}{\pi}e^{-\frac{q^{2}+p^{2}}{2}-\mathtt{i}pq}\int \frac{\mathtt{d}^{2}\beta}{\pi} \genfrac{}{}{0pt}{}{:}{:} \exp \left \{ -|\beta|^{2}+\sqrt{2}q\beta-\sqrt{2}\mathtt{i}p\beta^{\ast }\right \} \nonumber \\ & \times \exp \left[ 2\left( \beta^{\ast}a-a^{\dagger}\beta+a^{\dagger }a\right) -\frac{\beta^{2}}{2}+\frac{\beta^{\ast2}}{2}\right] \genfrac{}{}{0pt}{}{:}{:} \nonumber \\ & =\frac{1}{\pi} \genfrac{}{}{0pt}{}{:}{:} \exp \{ \sqrt{2}q\left( a-a^{\dagger}\right) +\sqrt{2}\mathtt{i}p\left( a+a^{\dagger}\right) -2\mathtt{i}pq+a^{\dagger2}-a^{2}-a^{\dagger}a\} \genfrac{}{}{0pt}{}{:}{:} \nonumber \\ & =\frac{1}{\pi} \genfrac{}{}{0pt}{}{:}{:} \exp[-2\mathtt{i}\left( q-Q\right) \left( p-P\right) ] \genfrac{}{}{0pt}{}{:}{:} .\label{27} \end{align} Similarly, we can derive \begin{align} \delta \left( q-Q\right) \delta \left( p-P\right) & =2\int \frac {\mathtt{d}^{2}\beta}{\pi} \genfrac{}{}{0pt}{}{:}{:} \left \langle -\beta \right \vert \left. q\right \rangle \left \langle p\right \vert e^{\mathtt{i}pq}\left \vert \beta \right \rangle \exp \left[ 2\left( \beta^{\ast}a-a^{\dagger}\beta+a^{\dagger}a\right) \right] \genfrac{}{}{0pt}{}{:}{:} \nonumber \\ & =\frac{1}{\pi} \genfrac{}{}{0pt}{}{:}{:} \exp[2\mathtt{i}\left( q-Q\right) \left( p-P\right) ] \genfrac{}{}{0pt}{}{:}{:} .\label{28} \end{align} Eqs. (27)-(28) are the Weyl ordered forms of $\delta \left( p-P\right) \delta \left( q-Q\right) $ and $\delta \left( q-Q\right) \delta \left( p-P\right) ,$ respectively. \section{The new transformation of Wigner operator} Taking $\frac{1}{\pi} \genfrac{}{}{0pt}{}{:}{:} \exp[-2\mathtt{i}\left( q-Q\right) \left( p-P\right) ] \genfrac{}{}{0pt}{}{:}{:} $as an integration kernel of the following integration transformation with the result $ \genfrac{}{}{0pt}{}{:}{:} K\left( P,Q\right) \genfrac{}{}{0pt}{}{:}{:} ,$ \begin{equation} \iint_{-\infty}^{\infty}\frac{\mathtt{d}p\mathtt{d}q}{\pi}f\left( p,q\right) \genfrac{}{}{0pt}{}{:}{:} \exp[-2\mathtt{i}\left( q-Q\right) \left( p-P\right) ] \genfrac{}{}{0pt}{}{:}{:} = \genfrac{}{}{0pt}{}{:}{:} K\left( P,Q\right) \genfrac{}{}{0pt}{}{:}{:} ,\label{29} \end{equation} then from (27) we have \begin{equation} \genfrac{}{}{0pt}{}{:}{:} K\left( P,Q\right) \genfrac{}{}{0pt}{}{:}{:} =\iint_{-\infty}^{\infty}\mathtt{d}p\mathtt{d}qf\left( p,q\right) \delta \left( p-P\right) \delta \left( q-Q\right) =f\left( p,q\right) |_{p\rightarrow P,\text{ }q\rightarrow Q,\text{ }P\text{ before }Q},\label{30} \end{equation} this is the integration formula for quantizing classical function $f(p,q)$ as $P-Q$ ordering of operators. 
On the other hand, from (28) we have \begin{align} & \iint_{-\infty}^{\infty}\frac{\mathtt{d}p\mathtt{d}q}{\pi}f(p,q) \genfrac{}{}{0pt}{}{:}{:} \exp[2\mathtt{i}\left( q-Q\right) \left( p-P\right) ] \genfrac{}{}{0pt}{}{:}{:} \nonumber \\ & =\iint_{-\infty}^{\infty}\mathtt{d}p\mathtt{d}qf(p,q)\delta \left( q-Q\right) \delta \left( p-P\right) =f\left( p,q\right) |_{q\rightarrow Q,\text{ }p\rightarrow P,\text{ }Q\text{ before }P},\label{31} \end{align} this is the scheme of quantizing classical function $f(p,q)$ as $Q-P$ ordering of operators. By noticing (5) we see \begin{align} & \frac{1}{\pi} \genfrac{}{}{0pt}{}{:}{:} \exp[-2\mathtt{i}\left( q-Q\right) \left( p-P\right) ] \genfrac{}{}{0pt}{}{:}{:} \nonumber \\ & =\frac{1}{\pi}\iint \mathtt{d}p^{\prime}\mathtt{d}q^{\prime}e^{-2\mathtt{i} \left( q-q^{\prime}\right) \left( p-p^{\prime}\right) } \genfrac{}{}{0pt}{}{:}{:} \delta \left( q^{\prime}-Q\right) \delta \left( p^{\prime}-P\right) \genfrac{}{}{0pt}{}{:}{:} \nonumber \\ & =\frac{1}{\pi}\iint \mathtt{d}p^{\prime}\mathtt{d}q^{\prime}\Delta \left( q^{\prime},p^{\prime}\right) e^{-2\mathtt{i}\left( p-p^{\prime}\right) \left( q-q^{\prime}\right) }.\label{32} \end{align} It then follows from (32) and (27) that \begin{equation} \frac{1}{\pi}\iint \mathtt{d}p^{\prime}\mathtt{d}q^{\prime}\Delta \left( q^{\prime},p^{\prime}\right) e^{-2\mathtt{i}\left( p-p^{\prime}\right) \left( q-q^{\prime}\right) }=\delta \left( p-P\right) \delta \left( q-Q\right) .\label{33} \end{equation} Similarly we can derive \begin{equation} \frac{1}{\pi}\iint \mathtt{d}p^{\prime}\mathtt{d}q^{\prime}\Delta \left( q^{\prime},p^{\prime}\right) e^{2\mathtt{i}\left( p-p^{\prime}\right) \left( q-q^{\prime}\right) }=\delta \left( q-Q\right) \delta \left( p-P\right) ,\label{34} \end{equation} so $e^{\pm2\mathtt{i}\left( p-p^{\prime}\right) \left( q-q^{\prime}\right) }/\pi$ can be considered the classical Weyl correspondence of $\delta \left( q-Q\right) \delta \left( p-P\right) $ and $\delta \left( p-P\right) \delta \left( q-Q\right) ,$ respectively$.$ Moreover, the inverse transform of (32) is \begin{align} & \iint \frac{\mathtt{d}q\mathtt{d}p}{\pi} \genfrac{}{}{0pt}{}{:}{:} \exp[-2\mathtt{i}\left( q-Q\right) \left( p-P\right) ] \genfrac{}{}{0pt}{}{:}{:} e^{2\mathtt{i}\left( p-p^{\prime}\right) \left( q-q^{\prime}\right) }\nonumber \\ & =\iint \frac{\mathtt{d}q\mathtt{d}p}{\pi}\iint dp^{\prime \prime} dq^{\prime \prime}\Delta \left( q^{\prime \prime},p^{\prime \prime}\right) e^{-2\mathtt{i}\left( p-p^{\prime \prime}\right) \left( q-q^{\prime \prime }\right) +2\mathtt{i}\left( p-p^{\prime}\right) \left( q-q^{\prime }\right) }\nonumber \\ & =\iint dp^{\prime \prime}dq^{\prime \prime}\Delta \left( q^{\prime \prime },p^{\prime \prime}\right) e^{-2i\left( p^{\prime \prime}q^{\prime \prime }-p^{\prime}q^{\prime}\right) }\delta \left( q^{\prime}-q^{\prime \prime }\right) \delta \left( p^{\prime}-p^{\prime \prime}\right) =\Delta \left( q^{\prime},p^{\prime}\right) .\label{35} \end{align} which means \begin{equation} \iint \mathtt{d}q\mathtt{d}p\delta \left( p-P\right) \delta \left( q-Q\right) e^{2\mathtt{i}\left( p-p^{\prime}\right) \left( q-q^{\prime}\right) }=\Delta \left( q^{\prime},p^{\prime}\right) ,\label{36} \end{equation} or \begin{equation} \iint \mathtt{d}q\mathtt{d}p\delta \left( q-Q\right) \delta \left( p-P\right) e^{-2\mathtt{i}\left( p-p^{\prime}\right) \left( q-q^{\prime}\right) }=\Delta \left( q^{\prime},p^{\prime}\right) .\label{37} \end{equation} Eqs. 
(33)-(37) are new transformations of the Wigner operator in $q-p$ phase space. \section{The new transformation in phase space} Further, multiplying both sides of (35) from the left by $\iint \mathtt{d} q^{\prime}\mathtt{d}p^{\prime}h\left( p^{\prime},q^{\prime}\right) $ we obtain \begin{align} & \iint \mathtt{d}q^{\prime}\mathtt{d}p^{\prime}h\left( p^{\prime},q^{\prime }\right) \Delta \left( q^{\prime},p^{\prime}\right) \nonumber \\ & =\iint \mathtt{d}q^{\prime}\mathtt{d}p^{\prime}h\left( p^{\prime},q^{\prime }\right) \iint \frac{\mathtt{d}q\mathtt{d}p}{\pi} \genfrac{}{}{0pt}{}{:}{:} \exp[-2i\left( q-Q\right) \left( p-P\right) ] \genfrac{}{}{0pt}{}{:}{:} e^{2\mathtt{i}\left( p-p^{\prime}\right) \left( q-q^{\prime}\right) }\nonumber \\ & =\iint \frac{\mathtt{d}q\mathtt{d}p}{\pi} \genfrac{}{}{0pt}{}{:}{:} \exp[-2\mathtt{i}\left( q-Q\right) \left( p-P\right) ] \genfrac{}{}{0pt}{}{:}{:} G\left( p,q\right) ,\label{38} \end{align} where we have introduced \begin{equation} G\left( p,q\right) \equiv \frac{1}{\pi}\iint \mathtt{d}q^{\prime} \mathtt{d}p^{\prime}h\left( p^{\prime},q^{\prime}\right) e^{2\mathtt{i} \left( p-p^{\prime}\right) \left( q-q^{\prime}\right) },\label{39} \end{equation} this is a new interesting transformation, because when $h\left( p^{\prime },q^{\prime}\right) =1,$ \begin{equation} \frac{1}{\pi}\iint \mathtt{d}q^{\prime}\mathtt{d}p^{\prime}e^{2\mathtt{i} \left( p-p^{\prime}\right) \left( q-q^{\prime}\right) }=\int_{-\infty }^{\infty}\mathtt{d}q^{\prime}\delta \left( q-q^{\prime}\right) e^{2\mathtt{i}p\left( q-q^{\prime}\right) }=1.\label{40} \end{equation} The inverse of (39) is \begin{equation} \iint \frac{dqdp}{\pi}e^{-2i\left( p-p^{\prime}\right) \left( q-q^{\prime }\right) }G\left( p,q\right) =h\left( p^{\prime},q^{\prime}\right) .\label{41} \end{equation} In fact, substituting (39) into the the left-hand side of (41) yields \begin{align} & \iint_{-\infty}^{\infty}\frac{\mathtt{d}q\mathtt{d}p}{\pi}\iint \frac{\mathtt{d}q^{\prime \prime}\mathtt{d}p^{\prime \prime}}{\pi} h(p^{\prime \prime},q^{\prime \prime})e^{2\mathtt{i}\left[ \left( p-p^{\prime \prime}\right) \left( q-q^{\prime \prime}\right) -\left( p-p^{\prime}\right) \left( q-q^{\prime}\right) \right] }\nonumber \\ & =\iint_{-\infty}^{\infty}\mathtt{d}q^{\prime \prime}\mathtt{d}p^{\prime \prime}h(p^{\prime \prime},q^{\prime \prime})e^{2\mathtt{i}\left( p^{\prime \prime}q^{\prime \prime}-p^{\prime}q^{\prime}\right) }\delta \left( p^{\prime \prime}-p^{\prime}\right) \delta \left( q^{\prime \prime}-q^{\prime }\right) =h(p^{\prime},q^{\prime}).\label{42} \end{align} This transformation's Parsval-like theorem is \begin{align} & \iint_{-\infty}^{\infty}\frac{\mathtt{d}q\mathtt{d}p}{\pi}|h(p,q)|^{2} \nonumber \\ & =\iint \frac{\mathtt{d}q^{\prime}\mathtt{d}p^{\prime}}{\pi}|G\left( p^{\prime},q^{\prime}\right) |^{2}\iint \frac{\mathtt{d}p^{\prime \prime }\mathtt{d}q^{\prime \prime}}{\pi}e^{2i\left( p^{\prime \prime}q^{\prime \prime }-p^{\prime}q^{\prime}\right) }\iint_{-\infty}^{\infty}\frac{\mathtt{d} q\mathtt{d}p}{\pi}e^{2i\left[ \left( -p^{\prime \prime}p-q^{\prime \prime }q\right) +\left( pp^{\prime}+q^{\prime}q\right) \right] }\nonumber \\ & =\iint \frac{\mathtt{d}q^{\prime}\mathtt{d}p^{\prime}}{\pi}|G\left( p^{\prime},q^{\prime}\right) |^{2}\iint \mathtt{d}p^{\prime \prime} \mathtt{d}q^{\prime \prime}e^{2i\left( p^{\prime \prime}q^{\prime \prime }-p^{\prime}q^{\prime}\right) }\delta \left( q^{\prime}-q^{\prime \prime }\right) \delta \left( p^{\prime}-p^{\prime \prime}\right) =\iint 
\frac{\mathtt{d}q^{\prime}\mathtt{d}p^{\prime}}{\pi}|G\left( p^{\prime },q^{\prime}\right) |^{2}.\label{43} \end{align} \section{P-Q ordering and Q-P ordering to Weyl ordering} We now use the above transformation to discuss some operator ordering problems. For instance, from the integration formula \begin{equation} \iint \limits_{-\infty}^{\infty}\frac{\mathtt{d}x\mathtt{d}y}{\pi}x^{m} y^{r}\exp[2\mathtt{i}\left( y-s\right) \left( x-t\right) ]=\left( \frac{1}{\sqrt{2}}\right) ^{m+r}\left( -\mathtt{i}\right) ^{r} H_{m,r}\left( \sqrt{2}t,\mathtt{i}\sqrt{2}s\right) ,\label{44} \end{equation} where $H_{m,r\text{ }}$is the two-variable Hermite polynomials [12-13], \begin{equation} H_{m,r}(t,s)=\sum_{l=0}^{\min(m,r)}\frac{m!r!(-1)^{l}}{l!(m-l)!(r-l)!} t^{m-l}s^{r-l}.\label{45} \end{equation} Eq. (44) can be proved as follows: \begin{align} \text{L.H.S. of (44)} & =e^{2\mathtt{i}st}\left( \frac{\partial}{\partial t}\right) ^{r}\left( \frac{\partial}{\partial s}\right) ^{m}\iint \limits_{-\infty}^{\infty}\frac{\mathtt{d}x\mathtt{d}y}{\pi}e^{2\mathtt{i} xy}\exp[-2\mathtt{i}yt-2\mathtt{i}sx]\nonumber \\ & =e^{2\mathtt{i}st}\left( \frac{\partial}{\partial t}\right) ^{r}\left( \frac{\partial}{\partial s}\right) ^{m}\int_{-\infty}^{\infty}\mathtt{d} xe^{-2\mathtt{i}sx}\delta \left( x-t\right) \nonumber \\ & =e^{2\mathtt{i}st}\left( \frac{\partial}{\partial t}\right) ^{r}\left( \frac{\partial}{\partial s}\right) ^{m}e^{-2\mathtt{i}st}=\text{R.H.S. of (44).}\label{46} \end{align} Using (28) and (44) we know \begin{align} Q^{m}P^{r} & =\iint_{-\infty}^{\infty}\mathtt{d}p\mathtt{d}qq^{m}p^{r} \delta \left( q-Q\right) \delta \left( p-P\right) \nonumber \\ & =\iint_{-\infty}^{\infty}\frac{\mathtt{d}p\mathtt{d}q}{\pi}q^{m}p^{r} \genfrac{}{}{0pt}{}{:}{:} \exp[2\mathtt{i}\left( p-P\right) \left( q-Q\right) ] \genfrac{}{}{0pt}{}{:}{:} \nonumber \\ & =\left( \frac{1}{\sqrt{2}}\right) ^{m+r}\left( -\mathtt{i}\right) ^{r} \genfrac{}{}{0pt}{}{:}{:} H_{m,r}\left( \sqrt{2}Q,\mathtt{i}\sqrt{2}P\right) \genfrac{}{}{0pt}{}{:}{:} ,\label{47} \end{align} this is a simpler way to put $Q^{m}P^{r}$ into its Weyl ordering. Similarly, using (27) and the complex conjugate of (44) we see that the Weyl ordered form of $P^{r}Q^{m}$ is \begin{align} P^{r}Q^{m} & =\iint_{-\infty}^{\infty}\mathtt{d}p\mathtt{d}qp^{r}q^{m} \delta \left( p-P\right) \delta \left( q-Q\right) \nonumber \\ & =\iint_{-\infty}^{\infty}\frac{\mathtt{d}p\mathtt{d}q}{\pi} \genfrac{}{}{0pt}{}{:}{:} \exp[-2\mathtt{i}\left( q-Q\right) \left( p-P\right) ] \genfrac{}{}{0pt}{}{:}{:} q^{m}p^{r}\nonumber \\ & =\left( \frac{1}{\sqrt{2}}\right) ^{m+r}\left( \mathtt{i}\right) ^{r} \genfrac{}{}{0pt}{}{:}{:} H_{m,r}\left( \sqrt{2}Q,-\mathtt{i}\sqrt{2}P\right) \genfrac{}{}{0pt}{}{:}{:} .\label{48} \end{align} \section{Weyl ordering to P-Q ordering and Q-P ordering} According to (39) and (41) we know that the inverse transform of (44) is \begin{equation} \iint \frac{\mathtt{d}s\mathtt{d}t}{\pi}\left( \frac{1}{\sqrt{2}}\right) ^{m+r}\left( -\mathtt{i}\right) ^{r}H_{m,r}\left( \sqrt{2}t,\mathtt{i} \sqrt{2}s\right) e^{-2\mathtt{i}\left( y-s\right) \left( x-t\right) }=x^{m}y^{r},\label{49} \end{equation} which is a new integration formula. 
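As a quick consistency check (ours, not part of the original derivation), set $m=r=1$: Eq. (45) gives $H_{1,1}\left( t,s\right) =ts-1,$ so (47) reads \[ QP=\frac{1}{2}\left( -\mathtt{i}\right) \genfrac{}{}{0pt}{}{:}{:} 2\mathtt{i}QP-1 \genfrac{}{}{0pt}{}{:}{:} = \genfrac{}{}{0pt}{}{:}{:} QP \genfrac{}{}{0pt}{}{:}{:} +\frac{\mathtt{i}}{2}, \] and since the Weyl ordered product $ \genfrac{}{}{0pt}{}{:}{:} QP \genfrac{}{}{0pt}{}{:}{:} $ is the symmetrized product $\frac{1}{2}\left( QP+PQ\right) ,$ this is just the commutation relation $\left[ Q,P\right] =\mathtt{i}$ implied by (6).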
Then from (27) and (49) we have \begin{align} & \left( \frac{1}{\sqrt{2}}\right) ^{m+r}\left( -\mathtt{i}\right) ^{r}H_{m,r}\left( \sqrt{2}Q,\mathtt{i}\sqrt{2}P\right) |_{P\text{ before } Q}\nonumber \\ & =\left( \frac{1}{\sqrt{2}}\right) ^{m+r}\left( -\mathtt{i}\right) ^{r}\iint \mathtt{d}p\mathtt{d}q\delta \left( p-P\right) \delta \left( q-Q\right) H_{m,r}\left( \sqrt{2}q,\mathtt{i}\sqrt{2}p\right) \nonumber \\ & =\left( \frac{1}{\sqrt{2}}\right) ^{m+r}\left( -\mathtt{i}\right) ^{r}\iint \frac{\mathtt{d}p\mathtt{d}q}{\pi}H_{m,r}\left( \sqrt{2} q,\mathtt{i}\sqrt{2}p\right) \genfrac{}{}{0pt}{}{:}{:} e^{-2\mathtt{i}\left( q-Q\right) \left( p-P\right) } \genfrac{}{}{0pt}{}{:}{:} = \genfrac{}{}{0pt}{}{:}{:} Q^{m}P^{r} \genfrac{}{}{0pt}{}{:}{:} .\label{50} \end{align} Due to (45) we see \begin{equation} \left( \frac{1}{\sqrt{2}}\right) ^{m+r}\left( -\mathtt{i}\right) ^{r}H_{m,r}\left( \sqrt{2}Q,\mathtt{i}\sqrt{2}P\right) |_{P\text{ before } Q}=\sum_{l=0}\left( \frac{\mathtt{i}}{2}\right) ^{l}l!\binom{r}{l}\binom {m}{l}P^{r-l}Q^{m-l},\label{51} \end{equation} so (50)-(51) leads to \begin{equation} \genfrac{}{}{0pt}{}{:}{:} Q^{m}P^{r} \genfrac{}{}{0pt}{}{:}{:} =\sum_{l=0}\left( \frac{\mathtt{i}}{2}\right) ^{l}l!\binom{r}{l}\binom{m} {l}P^{r-l}Q^{m-l},\label{52} \end{equation} Eq. (50) or Eq. (52) is the fundamental formula of converting Weyl ordered operator to its $P-Q$ ordering. Similarly, from (28) and the hermite conjugate of (49) we have \begin{align} & \left( \frac{1}{\sqrt{2}}\right) ^{m+r}\left( \mathtt{i}\right) ^{r}H_{m,r}\left( \sqrt{2}Q,-\mathtt{i}\sqrt{2}P\right) |_{Q\text{ before }P\text{ }}\nonumber \\ & =\iint \mathtt{d}p\mathtt{d}q\delta \left( q-Q\right) \delta \left( p-P\right) \left( \frac{1}{\sqrt{2}}\right) ^{m+r}\left( \mathtt{i} \right) ^{r}H_{m,r}\left( \sqrt{2}q,-\mathtt{i}\sqrt{2}p\right) \nonumber \\ & =\iint \frac{\mathtt{d}p\mathtt{d}q}{\pi}\left( \frac{1}{\sqrt{2}}\right) ^{m+r}\left( \mathtt{i}\right) ^{r}H_{m,r}\left( \sqrt{2}q,-\mathtt{i} \sqrt{2}p\right) \genfrac{}{}{0pt}{}{:}{:} e^{2\mathtt{i}\left( q-Q\right) \left( p-P\right) } \genfrac{}{}{0pt}{}{:}{:} \nonumber \\ & = \genfrac{}{}{0pt}{}{:}{:} Q^{m}P^{r} \genfrac{}{}{0pt}{}{:}{:} = \genfrac{}{}{0pt}{}{:}{:} P^{r}Q^{m} \genfrac{}{}{0pt}{}{:}{:} ,\label{53} \end{align} so \begin{equation} \genfrac{}{}{0pt}{}{:}{:} Q^{m}P^{r} \genfrac{}{}{0pt}{}{:}{:} =\sum_{l=0}\left( \frac{-\mathtt{i}}{2}\right) ^{l}l!\binom{r}{l}\binom {m}{l}Q^{m-l}P^{r-l},\label{54} \end{equation} this is the fundamental formula of converting Weyl ordered operator to its $Q-P$ ordering, which is in contrast to (52). \section{Q-P ordering to P-Q ordering and vice versa} Combining (47) and (52) together we derive \begin{align} Q^{m}P^{r} & =\sum_{l=0}\frac{m!r!}{l!(m-l)!(r-l)!}(\frac{\mathtt{i}}{2})^{l} \genfrac{}{}{0pt}{}{:}{:} Q^{m-l}P^{r-l} \genfrac{}{}{0pt}{}{:}{:} \nonumber \\ & =\sum_{l=0}\frac{m!r!}{l!(m-l)!(r-l)!}(\frac{\mathtt{i}}{2})^{l}\sum _{k=0}\left( \frac{\mathtt{i}}{2}\right) ^{k}k!\binom{r-l}{k}\binom{m-l} {k}P^{r-l-k}Q^{m-l-k}\nonumber \\ & =\sum_{l=0}\sum_{k=0}\frac{m!r!}{l!(m-l-k)!(r-l-k)!k!}(\frac{\mathtt{i}} {2})^{l+k}P^{r-l-k}Q^{m-l-k}\nonumber \\ & =\sum_{k=0}\frac{m!r!}{(m-k)!(r-k)!k!}(\mathtt{i})^{k}P^{r-k}Q^{m-k} ,\label{55} \end{align} which puts $Q^{m}P^{r}$ to its $P-Q$ ordering. 
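The low-order cases of (55) are easily checked by brute force. The following short Python sketch (ours, not part of the paper; the helper pq\_order is a hypothetical routine) rewrites an operator word, given as a string of Q's and P's, into $P$-$Q$ order by repeatedly applying $QP=PQ+\mathtt{i},$ and reproduces $Q^{2}P^{2}=P^{2}Q^{2}+4\mathtt{i}PQ-2,$ in agreement with (55) for $m=r=2$:
\begin{verbatim}
# Brute-force check of Eq. (55) for m = r = 2, using [Q,P] = i.
from collections import defaultdict

def pq_order(word):
    """Rewrite a word of 'Q'/'P' factors into P-Q order via Q P -> P Q + i."""
    terms = {word: 1 + 0j}
    while any('QP' in w for w in terms):
        new = defaultdict(complex)
        for w, c in terms.items():
            k = w.find('QP')
            if k == -1:
                new[w] += c
            else:
                new[w[:k] + 'PQ' + w[k + 2:]] += c    # commute the adjacent pair
                new[w[:k] + w[k + 2:]] += 1j * c      # pick up the commutator i
        terms = dict(new)
    return terms

print(pq_order('QQPP'))
# {'PPQQ': (1+0j), 'PQ': 4j, '': (-2+0j)},  i.e.  Q^2 P^2 = P^2 Q^2 + 4i PQ - 2
\end{verbatim}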
It then follows the commutator \begin{equation} \left[ Q^{m},P^{r}\right] =\sum_{k=1}\frac{m!r!}{(m-k)!(r-k)!k!} (\mathtt{i})^{k}P^{r-k}Q^{m-k}.\label{56} \end{equation} On the other hand, from (48), (45) and (54) we have \begin{align} P^{r}Q^{m} & =\left( \frac{1}{\sqrt{2}}\right) ^{m+r}\left( \mathtt{i} \right) ^{r} \genfrac{}{}{0pt}{}{:}{:} H_{m,r}\left( \sqrt{2}Q,-\mathtt{i}\sqrt{2}P\right) \genfrac{}{}{0pt}{}{:}{:} \nonumber \\ & = \genfrac{}{}{0pt}{}{:}{:} \sum_{l=0}\frac{m!r!}{l!(m-l)!(r-l)!}(\frac{-\mathtt{i}}{2})^{l} \genfrac{}{}{0pt}{}{:}{:} Q^{m-l}P^{r-l} \genfrac{}{}{0pt}{}{:}{:} \nonumber \\ & =\sum_{l=0}\frac{m!r!}{l!(m-l)!(r-l)!}(\frac{-\mathtt{i}}{2})^{l}\sum _{k=0}\left( \frac{-\mathtt{i}}{2}\right) ^{k}k!\binom{r-l}{k}\binom{m-l} {k}Q^{m-l-k}P^{r-l-k}\nonumber \\ & =\sum_{k=0}\frac{m!r!}{(m-k)!(r-k)!k!}(-\mathtt{i})^{k}Q^{m-k} P^{r-k},\label{57} \end{align} which puts $P^{r}Q^{m}$ to its $Q-P$ ordering. Thus (56) is also equal to \begin{equation} \left[ Q^{m},P^{r}\right] =\sum_{k=1}\frac{m!r!}{(m-k)!(r-k)!k!} (-\mathtt{i})^{k}Q^{m-k}P^{r-k}.\label{58} \end{equation} \section{$P-Q$ ordering or $Q-P$ ordering expansion of $\left( P+Q\right) ^{n}$} Due to \begin{align} \left( P+Q\right) ^{n} & =\frac{\mathtt{d}^{n}}{\mathtt{d}\lambda^{n} }\left. e^{\lambda \left( P+Q\right) }\right \vert _{\lambda=0} =\frac{\mathtt{d}^{n}}{\mathtt{d}\lambda^{n}}\left. \genfrac{}{}{0pt}{}{:}{:} e^{\lambda \left( P+Q\right) } \genfrac{}{}{0pt}{}{:}{:} \right \vert _{\lambda=0}\nonumber \\ & = \genfrac{}{}{0pt}{}{:}{:} \left( P+Q\right) ^{n} \genfrac{}{}{0pt}{}{:}{:} =\sum_{l=0}^{n}\binom{n}{l} \genfrac{}{}{0pt}{}{:}{:} Q^{l}P^{n-l} \genfrac{}{}{0pt}{}{:}{:} ,\label{59} \end{align} substituting (52) into (59) we derive \begin{equation} \left( P+Q\right) ^{n}=\sum_{l=0}^{n}\binom{n}{l}\sum_{k=0}\left( \frac{\mathtt{i}}{2}\right) ^{k}k!\binom{l}{k}\binom{n-l}{k}P^{l-k} Q^{n-l-k},\label{60} \end{equation} or using (54) we have \begin{equation} \left( P+Q\right) ^{n}=\sum_{l=0}^{n}\binom{n}{l}\sum_{k=0}\left( \frac{-\mathtt{i}}{2}\right) ^{k}k!\binom{l}{k}\binom{n-l}{k}Q^{l-k} P^{n-l-k}.\label{61} \end{equation} In sum, by virtue of the formula of operators' Weyl ordering expansion and the technique of integration within Weyl ordered product of operators we have found new two-fold integration transformation about the Wigner operator $\Delta \left( q^{\prime},p^{\prime}\right) $ in phase space quantum mechanics, which provides us with a new approach for deriving mutual converting formulas among $Q-P$ ordering, $P-Q$ ordering and Weyl ordering of operators. A new $c$-number two-fold integration transformation in $p-q$ phase space (Eq. (39)-(41)) is also proposed, we expect that it may have other uses in theoretical physics. In this way, the contents of phase space quantum mechanics [14] can be enriched. \end{document}
\begin{document} \pacs{ 03.67.Lx, 03.65.Fd 03.65.Ud } \title{Recognizing Small-Circuit Structure in Two-Qubit Operators\\ and Timing Hamiltonians to Compute Controlled-Not Gates} \email{[email protected]} \author{Vivek V. Shende$^1$,Stephen S. Bullock$^2$ and Igor L. Markov$^3$} \email{[email protected]} \affiliation{$^1$The University of Michigan,Department of Mathematics,\\ $^2$National Institute of Standards and Technology, I.T.L.-M.C.S.D.\\ $^3$The University of Michigan, Department of Electrical Engineering and Computer Science} \email{[email protected]} \begin{abstract} This work proposes numerical tests which determine whether a two-qubit operator has an atypically simple quantum circuit. Specifically, we describe formulae, written in terms of matrix coefficients, characterizing operators implementable with exactly zero, one, or two controlled-not ({\tt CNOT}) gates and all other gates being one-qubit. We give an algorithm for synthesizing two-qubit circuits with optimal number of {\tt CNOT} gates, and illustrate it on operators appearing in quantum algorithms by Deutsch-Josza, Shor and Grover. In another application, our explicit numerical tests allow timing a given Hamiltonian to compute a {\tt CNOT} modulo one-qubit gates, when this is possible. \end{abstract} \maketitle \section{Introduction} Quantum circuits compactly represent unitary operators and find applications in quantum computing, communication and cryptography \cite{NielsenC:00}. Such a representation can often be interpreted as a program (e.g., a sequence of RF pulses for NMR) whose execution on a quantum system of choice performs a requested unitary evolution. Simple steps in the program correspond to gates in the circuit, and smaller circuits lead to faster programs. In this work we discuss exact implementations of two-qubit operators because (i) such operators suffice to implement arbitrary operators \cite{DiVincenzo:95}, and (ii) a number of controllable two-qubit systems were recently reported. The simulation of generic two-qubit operators via {\tt CNOT} gates and one-qubit operators has been thoroughly investigated, resulting in several three-{\tt CNOT} decompositions \cite{VidalDawson:03, VatanWilliams:03, ShendeEtAl:04}. It is known that the swap gate requires three {\tt CNOT}s \cite{VatanWilliams:03}, and also that an arbitrary $n$-qubit operator requires at least $\lceil \frac{1}{4}(4^n - 3n -1)\rceil$. The proof of this latter result \cite{ShendeEtAl:04} holds for any controlled-$u$ gate, where $u$ is a given fixed one-qubit operator. For $n=2$, it has been shown that an arbitrary controlled-$u$ gate is generically worse than the {\tt CNOT} \cite{Zhang+3:03}. The above-mentioned results motivate the focus on the {\em basic-gate} library \cite{BarencoEtAl:95}, which consists of the {\tt CNOT} gate and all one-qubit gates: it is powerful and well-understood. Yet, given the diversity of implementation technologies, it is not clear that the {\tt CNOT} gate will be directly available in a given implementation. Nonetheless, we believe results expressed in the {\em basic-gate} library will be relevant. An analogous situation occurs in the design of (classical) integrated circuits. In this context, first {\em technology-independent synthesis} is performed in terms of abstract gates (AND, OR, NOT). Later, during {\em technology mapping}, circuits are converted to use gates that are specific to a given implementation technology (e.g., NOR, NAND and AOI gates, which require very few {\tt CMOS} transistors). 
Work in the direction of quantum technology mapping includes techniques for expressing a {\tt CNOT} gate in terms of a given entangling two-qubit gate and arbitrary one-qubit gates \cite{BremnerEtAl:02a}. The simulation of {\tt CNOT} gates with implementation-specific resources is the basis of a major physical implementation technology \cite{WinelandEtAl:98}. The analogy with classical logic synthesis provides the following additional intuition: operators useful in practice will not be the worst-case operators studied in the aforementioned works. This belief is confirmed by published quantum algorithms and communication protocols. It is therefore important for quantum logic synthesis techniques to detect when a given operator can be implemented using fewer gates than are necessary in the worst case. For some classes of operators, this is easy; e.g., the algorithm in \cite{BullockM:03} implements tensor-product operators without {\tt CNOT}s. The matrix of a controlled-$U$ operator can be recognized by its pattern of zeros and ones (either directly, or after pre- and post-multiplication by wire swaps). Song and Klappenecker \cite{SongK:03} study optimal implementations of two-qubit controlled-unitary operators, known to require up to two {\tt CNOT} gates. They contribute a catalog of numerical tests that detect when zero, one or two {\tt CNOT} gates are required, and similar criteria for the number of basic one-qubit gates. We address a related question for arbitrary two-qubit operators and contribute simple numerical tests to determine the minimal achievable number of {\tt CNOT}s, including a novel one-{\tt CNOT} test. We also generalize a two-{\tt CNOT} test from \cite{VidalDawson:03} and make it easier to compute. Such explicit numerical tests facilitate a new application. A given two-qubit Hamiltonian $H$, if timed precisely, may allow one to implement a {\tt CNOT} using $\mbox{e}^{iHt}$ and one-qubit gates. We show how to compute correct durations. \section{Background and Notation} \label{sec:background} It is well known that an arbitrary one-qubit gate $u$ can be written as $u =\mbox{e}^{i\Phi} R_z(\theta) R_y(\phi) R_z(\psi)$ \cite{NielsenC:00}. Furthermore, the Bloch sphere isomorphism suggests that the choice of $y, z$ is arbitrary in the sense that any pair of orthogonal vectors will do: in particular, we may write \[u =\mbox{e}^{i\Phi} R_z(\theta) R_x(\phi) R_z(\psi) = \mbox{e}^{i\Phi} R_x(\alpha) R_z(\delta) R_x(\beta)\] These decompositions are more convenient when working with {\tt CNOT} gates because $R_z$ gates commute through the control of the {\tt CNOT} whereas $R_x$ gates commute through the target. We will denote by $C_j^k$ a {\tt CNOT} with control on the $j$-th wire and target on the $k$-th. For convenience, we consider the {\tt CNOT} gate to be normalized to have determinant $1$. Additional conventions are as follows. For $g$ any complex matrix, $g^t$ denotes the transpose and $g^*$ denotes the adjoint, i.e. the complex-conjugate transpose. Additionally, $\chi(g)=p(x)=\mbox{det }( x I - g)$ denotes the characteristic polynomial of $g$. We use axis-dependent phase operators \cite{ShendeEtAl:04} $S_* = R_*(\pi/2)$, $*=x,y,z$. Finally, $SU(4)$ denotes the group of all \emph{determinant one} unitary matrices, fixing the global phase of a two-qubit unitary operator up to $\pm 1, \pm i$. We now consider when two-qubit operators $u,v$ differ by pre- or post-composing with one-qubit operators and possibly by an irrelevant global phase. 
In this case, we write $u \equiv v$ and say that $u$ and $v$ are equivalent up to one-qubit gates. The following invariant characterizes when this occurs. \begin{proposition} \label{prop:invariants} Let $\gamma: U(4) \to U(4)$ be given by the formula $u \mapsto u (\sigma^y)^{\otimes 2} u^t (\sigma^y)^{\otimes 2}$. Then for $u, v \in SU(4)$, $u \equiv v \iff \chi[\gamma(u)] = \chi[\pm \gamma(v)]$. \end{proposition} We defer the proof to the Appendix. However, note that this proof provides an explicit procedure for computing the one-qubit operators $a,b,c,d \in SU(2)$ such that $(a \otimes b) u (c \otimes d) = e^{i \phi} v$ in the event that $\chi[\gamma(u)] = \chi[\pm\gamma(v)]$. We discuss $\gamma$ more fully in the context of minimal universal two-qubit circuits \cite{ShendeEtAl:04}. Related invariants are discussed in \cite{ZhangEtAl:02,Makhlin:00}, and generalizations in \cite{BullockBrennen:03}. \section{Optimizing {\tt CNOT}-count}\label{sec:CNOT} We now characterize which two-qubit operators admit a quantum circuit using only $m$ {\tt CNOT} gates. Since any two-qubit operator is implemented by some three {\tt CNOT} circuit, the relevant cases are $m=0,1,2$. We begin with case $m=0$. \begin{proposition} \label{prop:cnotcount:0} An operator $u \in SU(4)$ can be simulated using no {\tt CNOT} gates and arbitrary one-qubit gates from $SU(2)$ iff $\chi[\gamma(u)] = (x+1)^4$ or $(x-1)^4$. \end{proposition} \begin{proof} $u$ can be simulated using no {\tt CNOT} gates iff $u \equiv I$. Thus $\chi[\gamma(u)] = \chi[\pm\gamma(I)] = \chi[\pm I] = (x \pm 1)^4$. \end{proof} The case $m=1$ is similar. Note that this test \emph{requires} normalizing the global phase so that $\mbox{det}(v)=1$, implicit in $v \in SU(4)$. Had we not normalized the {\tt CNOT} gate, $\chi[\gamma(C_1^2)]$ would not be of the form described. \begin{proposition} \label{prop:cnotcount:1} An operator $u \in SU(4)$ can be simulated using one {\tt CNOT} gate and arbitrary one-qubit gates from $SU(2)$ iff $\chi[\gamma(u)] = (x+i)^2(x-i)^2$. \end{proposition} \begin{proof} $u$ is simulated using one {\tt CNOT} gate iff $u \equiv C_1^2$ or $u \equiv C_2^1$. Now $\gamma(C_2^1) = -i \sigma^z \otimes \sigma^x$; also $\gamma(C_1^2) = -i \sigma^x \otimes \sigma^z$. Each has $\chi$ given by $(x+i)^2(x-i)^2$. \end{proof} In particular, we see that $C_1^2 \equiv C_2^1$. This can also be seen from the well-known identity $(H \otimes H) C_1^2 (H \otimes H) = C_2^1$. We will use this fact for the final case, $m=2$. \begin{proposition} \label{prop:cnotcount:2} An operator $u \in SU(4)$ can be simulated using two {\tt CNOT} gates and arbitrary one-qubit gates from $SU(2)$ iff $\chi[\gamma(u)]$ has all real coefficients, which occurs iff $\mbox{tr}[\gamma(u)]$ is real. \end{proposition} \begin{proof} Since $C_1^2 \equiv C_2^1$, it is clear that $u$ can be simulated using two {\tt CNOT} gates iff $u \equiv C_1^2 (a \otimes b) C_1^2$. We decompose $a = R_x(\alpha) R_z(\delta) R_x(\beta)$ decomposition and $b = R_z(\theta) R_x(\phi) R_z(\psi)$, and pass $R_x$ gates and $R_z$ gates outward through the target and control of the {\tt CNOT} gates. Thus we are left with $u \equiv C_1^2 [R_z(\delta) \otimes R_x(\phi)] C_1^2$. Explicit computation yields $\chi[\gamma(C_1^2 [R_z(\delta) \otimes R_x(\phi)] C_1^2)] = (x + e^{i(\delta+\phi)})(x+e^{-i(\delta+\phi)})(x + e^{i(\delta -\phi)})(x + e^{-i(\delta - \phi)})$. 
On the other hand, if $\chi[\gamma(u)]$ has all real coefficients, then the eigenvalues come in conjugate pairs; it follows from this and Proposition \ref{prop:invariants} that $\chi[\gamma(u)]$ is as above for some $\delta, \phi$. Finally, we note that for $u \in SU(N)$, and $\chi(u) = \prod (x - \lambda_i)$, we have $\prod \lambda_i = 1$. Thus $\chi(u) = \left(\prod \overline{\lambda_i}\right) \prod (x-\lambda_i) = \prod (\overline{\lambda_i} x - 1)$. It follows that the coefficient of $x^k$ is the complex conjugate of the coefficient of $x^{N-k}$. In particular, for $N=4$, the coefficient of $x^2$ is real and the coefficients of $x^3, x$ are $\mathrm{tr}(u)$ and its conjugate. Since the constant term and the $x^4$ coefficient are $1$, we see $\chi(u)$ has all real coefficients iff $\mathrm{tr}(u)$ is real. \end{proof} \section{Synthesis Algorithm and Its Validation} \label{sec:examples} The results of Section \ref{sec:CNOT} can be combined with the techniques of Propositions \ref{prop:cnotcount:2} and \ref{prop:invariants} and the published literature to yield an explicit circuit synthesis algorithm: \begin{itemize} \item Given the matrix of a unitary operator $u \in U(4)$, divide it by $\sqrt[4]{\det(u)}$ to ensure $u \in SU(4)$. \item Compute $\chi[\gamma(u)]$ to determine whether $u$ requires zero, one, two, or three {\tt CNOT} gates. \item If $u$ requires zero or one {\tt CNOT} gates, use the techniques of the proof of Proposition \ref{prop:invariants1} to determine which one-qubit operators are required. \item If $u$ requires two {\tt CNOT} gates, find the roots of $\chi[\gamma(u)]$ and determine the $\delta,\phi$ of Proposition \ref{prop:cnotcount:2}. Then use the methods of Proposition \ref{prop:invariants1} to determine what one-qubit gates are required at the ends of the circuit. \item Finally, if $u$ requires three {\tt CNOT} gates, apply the methods of the literature \cite{ShendeEtAl:04}. \end{itemize} By construction, the algorithm produces {\tt CNOT}-optimal circuits in all cases. It also outperforms those in \cite{BullockM:03,ShendeEtAl:04,VidalDawson:03, VatanWilliams:03} in important special cases, as shown below. \begin{example}\label{ex:hth} Many quantum algorithms, notably Grover's quantum search \cite{Grover97} and Shor's number factoring \cite{Shor97}, use the operator $u = H \otimes H$ to create superpositions. Computing $\gamma(u)$ allows our synthesis algorithm to recognize that $u$ admits a quantum circuit containing no {\tt CNOT}s. \end{example} This example is less trivial than it seems: while writing $u = H \otimes H$ makes it obvious that $u$ requires no {\tt CNOT} gates, a synthesis procedure will not receive an input of $u = H \otimes H$ but rather of the $4\times 4$ matrix corresponding to $u$. It is not {\em a priori} clear that any worst-case {\tt CNOT}-optimal circuit decomposition will implement $u$ without {\tt CNOT} gates. However, several previously published algorithms do. For the next example, previous two-qubit synthesis techniques produce circuits with more {\tt CNOT}s than necessary. \begin{example} The operator $u$ that swaps $\ket{00} \leftrightarrow \ket{01}$ while fixing $\ket{10}$ and $\ket{11}$ plays a prominent role in the Deutsch-Josza algorithm \cite{DJ92,NielsenC:00}. Note that $C_2^1 (I\otimes \sigma_x)$ simulates $u$. Computing $\gamma(\mbox{e}^{i\pi /4}u)$ reveals that $u$ requires only one {\tt CNOT}. However, depending on certain algorithmic choices, anywhere from one to four one-qubit gates could appear. 
In any event, this compares favorably to previous work \cite{ShendeEtAl:04} which synthesizes a circuit with two {\tt CNOT} and five one-qubit gates. \end{example} The algorithmic choices mentioned above come in two flavors. First, as the two {\tt CNOT} gates $C_1^2$ and $C_2^1$ differ only by one-qubit gates, they are equivalent from the perspective of our methods. However, the number of one-qubit gates present in the resulting circuit depends on which of these is chosen. This is a finite problem: at most three {\tt CNOT} gates appear and thus there are at most $8$ possibilities, so we simply run through them all. Additional degrees of freedom arise in finding a circuit that computes a given $v$ using a given $u$ and one-qubit operators, when this is possible. The proof proof of Proposition \ref{prop:invariants1} describes an algorithm for this, and requires picking a basis of eigenvectors for a certain matrix. If the eigenvalues are distinct, the only degree of freedom is the ordering of the basis of eigenvectors ($4! = 24$ possibilities). However, repeated eigenvalues allow more flexibility in choosing basis vectors, and potentially non-trivial circuit optimizations. \begin{example} At the heart of Shor's factoring algorithm \cite{Shor97} is the Quantum Fourier Transform \cite{NielsenC:00}. On two qubits, it is given by the following matrix. \[ \mathcal{F} = \frac{1}{2} {\small \left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & i & -1 & -i \\ 1 & -1 & 1 & -1 \\ 1 & -i & -1 & i \\ \end{array} \right)} \] Explicit computation of $\chi[\gamma(\mathcal{F})]$ reveals that two {\tt CNOT} gates do not suffice to simulate $\mathcal{F}$. Thus, the following circuit to compute $\mathcal{F}$ is {\tt CNOT}-optimal: \begin{center} \begin{picture}(16,4) \put(0,0){\hWire} \put(0,2){\boxGate{$S_y$}} \put(2,0){\hWire} \put(2,2){\boxGate{$T_z^5$}} \put(4,0){\botCNOT} \put(6,0){\hWire} \put(6,2){\boxGate{$T_z^*$}} \put(8,0){\topCNOT} \put(10,0){\botCNOT} \put(12,0){\hWire} \put(12,2){\boxGate{$T_z^4$}} \put(14,0){\hWire} \put(14,2){\boxGate{$S_y^*$}} \end{picture} \end{center} Above, $T_z = e^{-i\sigma^z \pi/8}$ and $S_y = e^{-i\sigma^y \pi/4}$. Note that this circuit requires only three one-qubit gates, although two of these have been broken up for clarity. Finally, given that this circuit is {\tt CNOT}-optimal, it is not difficult to check by hand that its basic-gate count cannot be improved. \end{example} \section{Timing a Hamiltonian to Compute CNOT} Our numerical tests facilitate a new application. Given a Hamiltonian that can be timed to compute a {\tt CNOT} modulo one-qubit gates, we illustrate finding the correct duration. Our example is a perturbation of $\sigma^x \otimes \sigma^x$ by non-commutative one-qubit noise. \begin{equation*} H_{42}= (0.42) I \otimes \sigma^z + \sigma^x \otimes \sigma^x \end{equation*} Recall that a {\tt CNOT} can be constructed using one-qubit gates and some time-iterate of the Hamiltonian $\sigma^x \otimes \sigma^x$. However, to handle the noise term, existing techniques resort to Trotterization, which implements $exp(A+B)$ by separately turning on $A$ and $B$ for short periods of time. Below we find a simpler, direct implementation of {\tt CNOT} from $H_{42}$. It is especially interesting in light of concerns about the scalability of Trotterization \cite{ChildsHN:03}. We compute $\gamma(\mbox{e}^{i H_{42}t})$ for uniformly-spaced trial values of $t$ and seek out those values at which the characteristic polynomial nears $p(x)=(x^2+1)^2=x^4+2x^2+1$. 
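A minimal sketch of this scan (ours, in Python/NumPy rather than the authors' {\tt C++}; the scan range, step and the quoted expectation are illustrative) is:
\begin{verbatim}
# Scan t, form gamma(exp(i H42 t)), and measure how far its characteristic
# polynomial is from (x^2 + 1)^2 = x^4 + 2 x^2 + 1 (the one-CNOT criterion).
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

H42 = 0.42 * np.kron(I2, Z) + np.kron(X, X)
YY = np.kron(Y, Y)
w, v = np.linalg.eigh(H42)            # H42 is Hermitian

def gamma(u):
    return u @ YY @ u.T @ YY          # gamma(u) = u (sy x sy) u^t (sy x sy)

def cnot_distance(t):
    # exp(i H42 t); tr(H42) = 0, so the exponential is already in SU(4)
    u = v @ np.diag(np.exp(1j * w * t)) @ v.conj().T
    c = np.poly(gamma(u))             # characteristic polynomial coefficients
    return np.linalg.norm(c - np.array([1, 0, 2, 0, 1]))

ts = np.linspace(0.0, 1.0, 10001)
t_best = min(ts, key=cnot_distance)
# expect a near-zero minimum at the duration quoted in the text (t ~ 0.806)
print(t_best, cnot_distance(t_best))
\end{verbatim}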
Our implementation in {\tt C++} finds $t_{\tt CNOT}=0.80587$ in twenty seconds on a common workstation. Hence, we produce a {\tt CNOT} from $H_{42}$ and one-qubit gates without Trotterization. Specifically, since $\mbox{e}^{iH_{42}t_{\tt CNOT}}$ implements $C_2^1$ up to one-qubit operators, we use the technique of Proposition \ref{prop:invariants1} to compute the relevant one-qubit operators. We find that the matrices \begin{equation*} \begin{array}{rr} a_2=\frac{1}{2} \left( \begin{array}{rr} 1-i & -1+i \\ 1+i & 1+i \\ \end{array} \right) & c_2=0.707107 \left( \begin{array}{rr} -1 & -1 \\ 1 & -1 \\ \end{array} \right) \end{array} \end{equation*} \begin{equation*} b_2= \left( \begin{array}{rr} -0.21503-0.976607i & 0 \\ 0 & -0.21503+0.976607i \\ \end{array} \right) \end{equation*} \begin{equation*} d_2= \left( \begin{array}{rr} 0.152049+0.690566i & 0.690566-0.152049i \\ -0.690566-0.152049i & 0.152049-0.690566i \\ \end{array} \right) \\ \end{equation*} satisfy $C_2^1=(a_2 \otimes b_2) \mbox{e}^{iH_{42}t_{\tt CNOT}} (c_2 \otimes d_2)$ with numerical precision of $10^{-6}$. Further numerical experiments suggest that building a {\tt CNOT} is possible whenever $0.42$ is replaced by a weight $w$, $0 \leq w \leq 1$. However, we have no analytical proof of this. Numerical experiments also suggest the {\em impossibility} of timing the Hamiltonian $H_{XYZ}=\sigma^x \otimes \sigma^x + \sigma^y \otimes \sigma^y + \sigma^z \otimes \sigma^z$ so as to compute a {\tt CNOT}. In other words, trying values of $t$ in the range $-10 \leq t \leq 10$ as above produced no candidate durations. \section{Conclusions and Future Work} \label{sec:conclusions} Our work addresses small-circuit structure in two-qubit unitary operators. In particular, we contribute tests for such structure, and our techniques can be viewed as algorithms for finding small circuits when they exist. We detail such an algorithm that produces the minimal possible number of {\tt CNOT} gates (zero, one, two or three) {\em for each input}. It is illustrated on circuit examples derived from well-known applications. The one-{\tt CNOT} test has an additional use. It provides a numerical method for timing a given two-qubit Hamiltonian $H$ so that $\mbox{e}^{i t H}$ realizes a {\tt CNOT} gate up to local unitary operators (one-qubit gates,) given this is possible for $H$. {\bf Acknowledgments.} This work is supported by the DARPA QuIST program and an NSF grant. The views and conclusions contained herein are those of the authors and should not be interpreted as neces\-sarily representing official policies or endorsements of employers and funding agencies. \section*{Appendix} \label{sec:delta2} \begin{proposition} \label{prop:invariants1} Let $\gamma: SU(4) \to SU(4)$ be given by the formula $u \mapsto u (\sigma^y)^{\otimes 2} u^t (\sigma^y)^{\otimes 2}$. Then for $u, v \in SU(4)$, $u \equiv v \iff \chi[\gamma(u)] = \chi[\pm \gamma(v)]$. \end{proposition} \begin{proof} By definition, $u \equiv v \iff u = (a \otimes b)\lambda v (a' \otimes b')$ for some one-qubit operators $a,b,a',b'$ and some scalar $\lambda$. Requiring $u,v \in SU(4)$ implies $\lambda = \pm 1, \pm i$. We show below that $u = (a \otimes b)v(a' \otimes b') \iff \chi[\gamma(u)] = \chi[\gamma(v)]$; the proposition then follows from the fact that $\gamma(iu) = -\gamma(u)$. We recall that there exist $E \in SU(4)$ such that $E~SO(4)~E^* = SU(2)^{\otimes 2} = \{a \otimes b: a,b \in SU(2)\}$. Such matrices are characterized by the property that $EE^t = -\sigma^y \otimes \sigma^y$. 
This and related issues have been exhaustively dealt with in several papers \cite{BennettEtAl:96,HillWooters:97, KhanejaBG:01a, LewensteinEtAl:01}. The property $\chi[\gamma(u)] = \chi[\gamma(v)]$ is not changed by replacing $\gamma$ with $E^* \gamma E$. Using the fact $\sigma^y \otimes \sigma^y = EE^t = (EE^t)^*$, we compute: $E^* \gamma(u) E = E^* u E E^t u^t E^{t*} E^* E = (E^* u E)(E^* u E)^t$ By making the substitution $u \mapsto EuE^*$; it suffices to prove: for $u, v \in SU(4)$, there exists $x, y \in SO(4)$ such that $xuy = v$ iff $\chi[uu^t] = \chi[vv^t]$. Here, $SO(4)$ is the real matrices within $SU(4)$. Note that for $P$ symmetric unitary, $P^{-1} = \overline{P}$, hence $[P+\overline{P}, P-\overline{P}]=0$. It follows that the real and imaginary parts of $P$ share an orthonormal basis of eigenvectors. As they are moreover real symmetric matrices, we know from the spectral theorem that their eigenvectors can be taken to be real. Thus there exists $q \in SO(4)$ such that $quu^t q^*$ is diagonal. By re-ordering (and negating) the columns of $q$, we can re-order the diagonal elements of $quu^t q^*$ as desired. Thus if $\chi[uu^t]=\chi[vv^t]$, we can find $q, r \in SO(4)$ such that $quu^t q^t = r vv^t r^t$ by diagonalizing both; then $(v^* r^t q u)(v^* r^t q u)^t = I$. Let $s = v^* r^t q u \in SO(4)$. We have $ q^t r v s = u$, as desired. \end{proof} \end{document}
Jack Thorne (mathematician) Jack A. Thorne FRS (born 13 June 1987) is a British mathematician working in number theory and arithmetic aspects of the Langlands Program. He specialises in algebraic number theory. Jack Thorne FRS Born Jack A. Thorne (1987-06-13) 13 June 1987 Hereford, England NationalityBritish Alma materUniversity of Cambridge Harvard University Awards • Whitehead Prize (2017) Adams Prize (2022) Cole Prize (2023) Scientific career FieldsMathematics Institutions • University of Cambridge ThesisThe Arithmetic of Simple Singularities (2012) Doctoral advisorRichard Taylor, Benedict Gross Education Thorne read mathematics at Trinity Hall, Cambridge. He completed his PhD with Benedict Gross and Richard Taylor at Harvard University in 2012. Career and research Thorne was a Clay Research Fellow.[1] Currently, he is a Professor of Mathematics at the University of Cambridge,[2] where he has been since 2015, and is also a fellow at Trinity Hall, Cambridge. Thorne's paper on adequate representations[3] significantly extended the applicability of the Taylor-Wiles method. His paper on deformations of reducible representations[4] generalized previous results of Chris Skinner and Andrew Wiles from two-dimensional representations to n-dimensional representations. With Gebhard Böckle, Michael Harris, and Chandrashekhar Khare, he has applied techniques from modularity lifting to the Langlands conjectures over function fields. With Kai-Wen Lan, Harris, and Richard Taylor, Thorne constructed Galois representations associated to non-self dual regular algebraic cuspidal automorphic forms for GL(n) over CM fields.[5] Thorne's 2015 joint work with Khare on potential automorphy and Leopoldt's conjecture[6] has led to a proof of a potential version of the modularity conjecture for elliptic curves over imaginary quadratic fields.[7] In joint work with James Newton, Thorne has established symmetric power functoriality for all holomorphic modular forms.[8][9] Awards and honors Thorne was awarded the Whitehead Prize in 2017. In 2018, Thorne was an invited speaker at the International Congress of Mathematicians in Rio de Janeiro.[10][11] He was awarded the 2018 SASTRA Ramanujan Prize for his contributions to the field of mathematics. He shared the prize with Yifeng Liu.[12][13][14] In April 2020 he was elected a Fellow of the Royal Society.[15] In 2020 he received the EMS Prize of the European Mathematical Society,[16] in 2021 he was awarded a New Horizons in Mathematics Prize and in 2022 he was awarded the Adams Prize.[17] For 2023 he received the Cole Prize in Number Theory of the AMS.[18] References 1. "Jack Thorne | Clay Mathematics Institute". www.claymath.org. Retrieved 22 February 2019. 2. "Professor Jack Thorne". Trinity Hall. Retrieved 22 February 2019. 3. Thorne, Jack (October 2012). "On the automorphy of l-adic Galois representations with small residual image With an appendix by Robert Guralnick, Florian Herzig, Richard Taylor and Jack Thorne". Journal of the Institute of Mathematics of Jussieu. 11 (4): 855–920. arXiv:1107.5993. doi:10.1017/S1474748012000023. ISSN 1475-3030. S2CID 15994406. 4. Thorne, Jack (2015). "Automorphy lifting for residually reducible 𝑙-adic Galois representations". Journal of the American Mathematical Society. 28 (3): 785–870. doi:10.1090/S0894-0347-2014-00812-2. ISSN 0894-0347. S2CID 3945032. 5. Harris, Michael; Lan, Kai-Wen; Taylor, Richard; Thorne, Jack (26 October 2016). "On the rigid cohomology of certain Shimura varieties". Research in the Mathematical Sciences. 
3 (1). arXiv:1411.6717. doi:10.1186/s40687-016-0078-5. ISSN 2197-9847. S2CID 119142230. 6. Thorne, Jack A.; Khare, Chandrashekhar B. (13 September 2017). "Potential Automorphy and the Leopoldt conjecture". American Journal of Mathematics. 139 (5): 1205–1273. arXiv:1409.7007. doi:10.1353/ajm.2017.0030. ISSN 1080-6377. S2CID 117991797. 7. Liu and Thorne Awarded SASTRA Ramanujan Prize, Notices of the American Mathematical Society, January 2019, https://www.ams.org/journals/notices/201901/rnoti-p113.pdf 8. Newton, James; Thorne, Jack A. (2021). "Symmetric power functoriality for holomorphic modular forms". Publications Mathématiques de l'IHÉS. 134: 1–116. arXiv:1912.11261. doi:10.1007/s10240-021-00127-3. S2CID 209460741. 9. Newton, James; Thorne, Jack A. (2021). "Symmetric power functoriality for holomorphic modular forms, II". Publications Mathématiques de l'IHÉS. 134: 117–152. arXiv:2009.07180. doi:10.1007/s10240-021-00126-4. S2CID 221703327. 10. "Invited Section Lectures – Speakers | ICM 2018". www.icm2018.org. Archived from the original on 8 December 2018. Retrieved 30 November 2018. 11. plusmathsorg (9 August 2018), ICM 2018: Jack Thorne, retrieved 22 February 2019 12. "Srinivasa Ramanujan Centre (SRC)". sas.sastra.edu. Retrieved 22 February 2019. 13. Maeve Forti (25 October 2018). "Yifeng Liu wins prestigious award in mathematics". YaleNews. Yale University. Retrieved 3 February 2019. 14. "Yale, Cambridge profs. get SASTRA-Ramanujan Award". The Hindu. 22 December 2018. Retrieved 3 February 2019. 15. "Outstanding scientists elected as Fellows and Foreign Members of the Royal Society". royalsociety.org. Retrieved 30 April 2020. 16. EMS Prize 2020 17. "Adams Prize Winner 2021–22". maths.cam.ac.uk. Retrieved 25 March 2022. 18. Cole Prize in Number Theory 2023 External links • Jack Thorne's Professional Webpage
Microbial Cell Factories Biological and physicochemical properties of biosurfactants produced by Lactobacillus jensenii P6A and Lactobacillus gasseri P65 I. M. C. Morais1, A. L. Cordeiro1, G. S. Teixeira1, V. S. Domingues1, R. M. D. Nardi1, A. S. Monteiro2, R. J. Alves3, E. P. Siqueira4 & V. L. Santos1 Microbial Cell Factories volume 16, Article number: 155 (2017) Cite this article Lactobacillus species produce biosurfactants that can contribute to the bacteria's ability to prevent microbial infections associated with urogenital and gastrointestinal tracts and the skin. Here, we described the biological and physicochemical properties of biosurfactants produced by Lactobacillus jensenii P6A and Lactobacillus gasseri P65. The biosurfactants produced by L. jensenii P6A and L. gasseri P65 reduced the water surface tension from 72 to 43.2 mN m−1 and 42.5 mN m−1 as their concentration increased up to the critical micelle concentration (CMC) values of 7.1 and 8.58 mg mL−1, respectively. Maximum emulsifying activity was obtained at concentrations of 1 and 5 mg mL−1 for the P6A and P65 strains, respectively. The Fourier transform infrared spectroscopy data revealed that the biomolecules consist of a mixture of carbohydrates, lipids and proteins. The gas chromatography-mass spectrum analysis of L. jensenii P6A biosurfactant showed a major peak for 14-methypentadecanoic acid, which was the main fatty acid present in the biomolecule; conversely, eicosanoic acid dominated the biosurfactant produced by L. gasseri P65. Although both biosurfactants contain different percentages of the sugars galactose, glucose and ribose; rhamnose was only detected in the biomolecule produced by L. jensenii P6A. Emulsifying activities were stable after a 60-min incubation at 100 °C, at pH 2–10, and after the addition of potassium chloride and sodium bicarbonate, but not in the presence of sodium chloride. The biomolecules showed antimicrobial activity against clinical isolates of Escherichia coli and Candida albicans, with MIC values of 16 µg mL−1, and against Staphylococcus saprophyticus, Enterobacter aerogenes and Klebsiella pneumoniae at 128 µg mL−1. The biosurfactants also disrupted preformed biofilms of microorganisms at varying concentrations, being more efficient against E. aerogenes (64%) (P6A biosurfactant), and E. coli (46.4%) and S. saprophyticus (39%) (P65 biosurfactant). Both strains of lactobacilli could also co-aggregate pathogens. This report presents the first characterization of biosurfactants produced by L. jensenii P6A and L. gasseri P65. The antimicrobial properties and stability of these biomolecules indicate their potential use as alternative antimicrobial agents in the medical field for applications against pathogens that are responsible for infections in the gastrointestinal and urogenital tracts and the skin. Microorganisms are able to produce diverse surface-active compounds (SACs) containing both hydrophilic and hydrophobic moieties that can interact with surfaces, lower surface and interfacial tensions, form micelles, and emulsify immiscible substances [1]. Microbial SACs can be distinguished by their size, such as low-molecular-weight biosurfactants and high-molecular-weight surface-active polymers. High-molecular-weight surface-active polymers can be amphiphilic or polyphilic [2]. The former possesses one hydrophobic region at one end of the molecule; examples include lipopolysaccharides, lipoteichoic acids and lipoglycans of bacterial cell walls. 
In contrast, the latter have hydrophobic groups distributed across the entire molecule, analogous to hydrophobically modified, comb-type polymers; examples include emulsan and hydrophobic polysaccharides [2]. An additional criterion for categorizing microbial SACs is the chemical nature of the molecules. The major classes of molecules consist of various structures, such as glycolipids, lipopeptides, polysaccharides or protein complexes, phospholipids, fatty acids and neutral lipids [3]. These biomolecules can be transported to the extracellular medium or remain attached to the cell surface as particulate biosurfactants [4]. In recent years, interest in SACs has increased due to their possible applications in environmental protection, crude oil drilling and the food processing and pharmaceutical industries [5, 6]. Unlike chemical surfactants, which are primarily derived from petroleum, these molecules can be produced by a wide variety of microorganisms, including bacteria, yeasts and filamentous fungi [7,8,9,10,11]. Furthermore, biosurfactants have several advantages over chemical surfactants, including the following: low toxicity, a lower critical micelle concentration (CMC), higher intrinsic biodegradability, greater stability at temperature, pH, and salinity extremes, the possibility of being produced from renewable substrates, and greater ecological acceptability [12]. SAC-producing Lactobacillus species have been described and are predominantly found among the urogenital and gastrointestinal tract microbiota of humans. SACs derived from lactic acid bacteria (LAB) contribute to the bacteria's ability to prevent microbial infections associated with these ecosystems [13, 14]. Lactobacilli can prevent colonization of the urogenital tract by several pathogens, including yeasts of the genus Candida (C. albicans, C. tropicalis and C. krusei), which are responsible for vulvovaginal candidiasis; anaerobic bacteria responsible for bacterial vaginosis (BV), such as Gardnerella vaginalis, Mycoplasma hominis, Atopobium vaginae, Prevotella spp., Veillonella spp. and Mobiluncus spp.; the uropathogens Escherichia coli, Proteus spp., Klebsiella spp. and Serratia spp.; and sexually transmitted viruses [13,14,15,16,17,18]. Lactobacilli modulate the microbiota at these sites via different mechanisms, such as auto-aggregation, i.e., the ability to form multi-cellular aggregates that incorporate bacteria from the same species; lactic acid, hydrogen peroxide, bacteriocin, and SAC production; co-aggregation with pathogenic microorganisms (in which different bacterial species are incorporated); and adhesion to epithelial cells, thereby excluding pathogens [13, 17, 19]. This hypothesis of microbiota modulation has stimulated research on the isolation and characterization of novel SAC-producing lactobacilli, followed by investigations of the potential of these microorganisms to control pathogens. Many studies have previously reported the antibacterial, antifungal and antiviral activities of SACs produced by lactobacilli [11, 20,21,22]. However, another valuable application of SACs is their use as anti-adhesive agents to prevent pathogen adhesion to the host epithelium and to solid surfaces such as biomedical instruments [21,22,23,24,25,26]. Thus, these biomolecules might constitute a new and effective method to prevent host colonization by pathogenic microorganisms and the consequent development of clinical disturbances. The production yields of bacterial SACs are relatively high (2–10 g L−1).
Additionally, SACs reduce the surface tension of water to values lower than 30 mN m−1. In contrast, biosurfactants produced by lactobacilli are less effective, only reducing the surface tension of water to values of approximately 36–40 mN m−1, and are produced at lower levels (20–100 mg L−1) [7, 20,21,22, 27, 28]. Furthermore, the chemical compositions of these biomolecules have not been well studied, with only a few biomolecules being partially characterized [10, 20,21,22, 29], and these characteristics can influence the biological activities of the SACs. In general, human vaginal communities are dominated by one of the four most common Lactobacillus species, L. gasseri, L. jensenii, L. crispatus, and L. iners [30]. In this study, we characterized the antimicrobial activity of purified SACs from two Lactobacillus species (L. jensenii P6A and L. gasseri P65), isolated from vaginal samples obtained from healthy women, against clinical isolates of urogenital bacterial pathogens and reference strains of the yeasts C. albicans, C. krusei and C. tropicalis. These strains were previously shown to be capable of antagonizing sixteen reference bacterial strains in in vitro assays [31]. However, the authors did not characterize the chemical nature of the active biomolecules produced by the Lactobacillus strains, nor their effectiveness in controlling biofilms. In addition, the physicochemical characterization of the SACs was performed, including the determination of the minimum surface tension, the critical micelle concentration, stability at various pH values, temperatures and salt concentrations, as well as the evaluation of their chemical composition. It has been suggested that biofilm formation is an important virulence determinant in BV and other disorders of the genitourinary tract [32, 33]. Thus, the evaluation of the antibiofilm activities of the biomolecules of these strains is important to validate their use as probiotic products to prevent urogenital infections. Furthermore, the auto-aggregation activity of these strains and their co-aggregation with pathogens were also studied.

Methods
Strains and culture conditions
In this study, two strains of Lactobacillus (L. jensenii P6A and L. gasseri P65) isolated from the vaginal fluids of healthy women were employed [31]. The Lactobacillus strains do not produce H2O2 and show antagonistic activity against strains of Gardnerella vaginalis isolated from healthy women and from women with BV, as demonstrated by in vitro assays. In this study, the strains were evaluated in terms of their production of biosurfactants with antimicrobial and anti-adhesive activities against uropathogens. The strains were stored at −80 °C in conventional synthetic de Man, Rogosa and Sharpe (MRS) broth (Difco, Detroit, MI, USA) with 15% (v/v) glycerol until further use [34]. Bacteria from a frozen stock were streaked on MRS agar plates and incubated overnight at the optimum growth temperature (37 °C) for further culturing. The agar plates were stored at 4 °C for no longer than 2 weeks. The following strains were used for the antimicrobial and anti-adhesive assays: urogenital tract clinical isolates of E. coli, Klebsiella pneumoniae, Enterobacter aerogenes and Staphylococcus saprophyticus, and reference strains of C. albicans ATCC 18804, C. krusei ATCC 20298, and C. tropicalis ATCC 750.
All the bacterial strains were cultured in brain heart infusion (BHI) broth (Difco, Detroit, MI, USA) at 37 °C for 24 h, and the yeasts were cultured on Sabouraud Dextrose agar (SD) (Oxoid, Basingstoke, UK) at 30 °C for 48 h.

Biosurfactant production and isolation
For biosurfactant production, L. jensenii P6A and L. gasseri P65 were cultured at 37 °C in 1 L Erlenmeyer flasks containing 600 mL of MRS broth (Difco, Detroit, MI, USA) on a rotary shaker at 120 rpm. Six milliliters of an overnight culture were used for inoculation. After 72 h, the cells were harvested by centrifugation (10,000×g for 5 min at 10 °C), washed twice in demineralized water, and suspended in 100 mL of phosphate-buffered saline solution (PBS; 10 mM KH2PO4/K2HPO4 and 150 mM NaCl, pH adjusted to 7.0). Cell suspensions were incubated at room temperature for 2 h with gentle stirring to release the biosurfactant, as previously described [27, 28]. The cells were then removed by centrifugation, and the supernatant was dried in an oven at 70 °C. To confirm biosurfactant production, the emulsifying activity (E24), using toluene as the hydrophobic substrate, and the surfactant activity were routinely measured. The biomolecules were extracted by acid precipitation according to the protocol described by Van Hoogmoed et al. [35]. Briefly, the extracts were suspended in PBS (pH 7.0) at a concentration of 10 mg mL−1, and the pH was adjusted to 2.0 with 1 M HCl. The acidified samples were incubated at 4 °C for 2 h, and the precipitates were collected by centrifugation (10,000×g for 15 min at 4 °C) and washed twice with acidic water (pH 2.0). The precipitates were dissolved in distilled water and adjusted to pH 7.0 using 1 M NaOH.

Physicochemical properties
Surface-activity determination and critical micelle concentration (CMC)
The surface tension of the PBS extracts was measured using a KRUSS tensiometer (K10T model, Hamburg, Germany) with the plate method, and the relationship between the biosurfactant concentration and the surface tension was determined. To increase the accuracy of the surface tension measurements, the average value of triplicate determinations was calculated. All measurements were performed at room temperature (25 °C). The CMC was determined by plotting the surface tension as a function of the biosurfactant concentration and by locating the point of intersection between the two lines that best fit through the pre- and post-CMC data. Concentrations ranging from 0.1 to 50 mg mL−1 were used in the assays.

Effects of the biosurfactant concentrations and the organic phase on the emulsifying activity
The extract was diluted in deionized water to concentrations ranging from 0.1 to 20 mg mL−1 to determine the effect of the biosurfactant concentration on emulsifying activity. In these emulsification assays, 1 mL of the solution at each concentration was added to screw-cap tubes containing 1.5 mL of toluene, followed by homogenization using a vortex mixer at maximum speed for 2 min. After allowing the sample to stand for 24 h, emulsifying activity (E24) was determined using the method described by Cameron et al. [36]. The assays were performed in triplicate. The means were compared by Tukey's test at 5% probability. To evaluate the spectrum of the emulsifying activity, E24 assays were performed using 5 mg mL−1 biosurfactant extracts and the following hydrophobic substrates: hexadecane (Sigma, St. Louis, MO), hexane (Sigma, St. Louis, MO), diesel oil (Petrobras, Brazil), gasoline (Petrobras, Brazil), kerosene (Petrobras, Brazil), olive oil (Food-Bunge), cottonseed oil (Food-Bunge), sunflower oil (Food-Bunge) and toluene (Sigma, St. Louis, MO).
Characterization of biosurfactants
The protein concentrations of the biosurfactants were determined using Lowry's method [37]. The total carbohydrate concentrations were quantitatively determined using a colorimetric method with glucose as the standard [38]. The method developed by Piretti et al. [39] was used to quantify lipids.

Fatty acid analysis
Fatty acids were analyzed by gas chromatography–mass spectrometry after conversion to their methyl ester derivatives. To determine the fatty acid composition, crude biosurfactants (2–5 mg) were hydrolyzed with aqueous 2 mol L−1 HCl at 100 °C for 2 h in a sealed tube. The free lipids were extracted using n-hexane, dried, and then methylated with a 14% boron fluoride-methanol reagent (Sigma-Aldrich, St. Louis, MO, USA) at a ratio of 1 mL of the reagent per 10 mg of lipids. The resulting sample was stored in 2 mL microtubes and incubated in a 95 °C water bath for 15 min [40]. Fatty acid methyl esters (FAMEs) were extracted three times with n-hexane and analyzed by gas chromatography and mass spectrometry (GC–MS) using a Shimadzu GC–MS model QP 5050 A (Shimadzu, Kyoto, Japan) equipped with a PTE-5-Supelco column (30 m × 0.25 mm ID, 0.25 µm film) and employing He as the carrier gas at 0.8 mL min−1, with a split of 20 and a pressure of 50 kPa. The column temperature was programmed to increase from 80 °C (held for 1 min) to 180 °C at 20 °C min−1, then to 240 °C at 3 °C min−1, and finally to 300 °C at 20 °C min−1, where it was maintained for 2 min. Electron impact spectra in positive ionization mode were acquired between m/z 50 and 500. Compounds were identified by comparing their retention times, mass fragmentation profiles and molecular ions with those of a FAME standard (Supelco 37-component FAME mix, Bellefonte, PA, USA). The results were recorded and processed using Class 3.02 software (Shimadzu) and expressed as the relative percentages of each FAME. The mass spectrum of each fatty acid methyl ester was matched with the National Institute of Standards and Technology (NIST) database.

Monosaccharide analysis
The carbohydrate composition of the biosurfactants was determined by analysis of their alditol acetate derivatives using GC–MS. A lyophilized sample of biosurfactant (1 mg) was hydrolyzed with 150 µL of 2 M trifluoroacetic acid (CF3COOH) in a sealed tube at 120 °C for 4 h. After evaporation, the residue was washed twice using methanol. The sample was then reduced with 1 M aqueous sodium borohydride (NaBH4, 100 µL) and acetylated with a mixture of potassium acetate (100 µg) and acetic anhydride (100 µL) at 100 °C for 2 h. Excess reagent was removed by evaporation and the sample was washed several times with ethanol. Alditol acetates were extracted with ethyl acetate and water (1:1, v:v) and analyzed using a Shimadzu GC–MS model QP 5050 A (Shimadzu, Kyoto, Japan) equipped with a PTE-5-Supelco column (30 m × 0.25 mm ID, 0.25 µm film) employing He as the carrier gas at 0.7 mL min−1, with a split of 10 and a pressure of 40 kPa. The column temperature was programmed to increase from 100 °C (held for 1 min) to 200 °C at 4 °C min−1, followed by a 20 °C min−1 increase to 300 °C, where it was maintained for 5 min.
Electron impact spectra in positive ionization mode were acquired between m/z 40 and 400. The identity of the sugars was first confirmed by comparing their retention times with those of individual monosaccharide standards, by spiking the sample with the standard mixture (Supelco, Bellefonte, PA, USA), and the sugars were further identified by GC–MS matching against the NIST database.

Fourier transform infrared spectroscopy (FTIR)
The functional groups of the biosurfactants were further analyzed using FTIR [41]. Pellets for the infrared analysis were obtained by grinding a mixture of 1 mg of biosurfactant with 100 mg of potassium bromide. FTIR spectra were recorded in the region of 4000–650 cm−1 at a resolution of 4 cm−1 on a Spectrum-One FTIR spectrometer (Perkin Elmer, Shelton, CT, USA) using an attenuated total reflectance (ATR) system.

Stability assays
The stability of the biosurfactants under different physicochemical conditions was evaluated using a solution of 5 mg mL−1 crude biosurfactant. To examine the influence of pH on the emulsification index, the biomolecule was dissolved in buffer solutions of different pH values. The sample was then subjected to the emulsification assay (E24) using toluene as the organic layer. A 200 mM potassium chloride/hydrochloric acid buffer was used for measurements at pH 1 and 2; a 200 mM sodium acetate/acetic acid buffer for measurements at pH 3, 4 and 5; a 100 mM sodium phosphate buffer for measurements at pH 6, 7 and 8; and a 100 mM glycine/sodium hydroxide buffer for measurements at pH 9 and 10. Furthermore, the heat stability of the crude biosurfactants was determined by incubating the biosurfactant solution (50 mg mL−1) in a 100 °C water bath for 60 min and then cooling it to room temperature. The emulsifying activity (E24) of each sample was determined as described above. The effects of different sodium chloride (NaCl), potassium chloride (KCl) and sodium bicarbonate (NaHCO3) concentrations on the biosurfactant activity were evaluated by adding the salts at different concentrations (800, 1200 and 2000 µg mL−1) to the formed emulsions. The solutions were allowed to stand for 20 min, and then the emulsification indexes (E24) of the biosurfactants were measured. All assays were performed in triplicate.

Antimicrobial activity assays
Culture media and inocula
Mueller–Hinton broth (Himedia, Maharashtra, India) was prepared in accordance with the CLSI document M7-A10 for minimal inhibitory concentration (MIC) bacterial assays [42]. The inocula of all bacteria were prepared at a final concentration of 10 × 10^5 CFU mL−1 using the spectrophotometric method. Candida cultures were freshly grown at 35 °C. For the susceptibility tests, inoculum suspensions were prepared at final concentrations of 1–5 × 10^3 cells mL−1 using the spectrophotometric method, in accordance with CLSI document M27-A3 [43].

Susceptibility tests
The broth microdilution method was performed in accordance with the guidelines of the CLSI document M7-A6 for bacteria and M27-A3 for yeast using flat-bottom 96-well microplates (Corning, NY, USA). Stock solutions of biosurfactants were prepared in water at a concentration of 1024 µg mL−1. The compounds were diluted 1:2 in Mueller–Hinton broth to obtain a concentration two-fold greater than the maximum concentration in the analysis. Serial dilutions were prepared from this solution using the medium as a diluent. The compounds were tested at concentrations ranging from 256 to 4 µg mL−1.
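To make the dilution arithmetic explicit, the short sketch below works through one plausible reading of this scheme; the number of dilution steps and the assumption that adding an equal volume of inoculum halves each concentration once more are inferences for illustration, not statements taken from the protocol:

```python
# Two-fold dilution arithmetic for the susceptibility assay (illustrative only).
stock = 1024.0                                  # µg mL-1, biosurfactant stock in water
working = stock / 2                             # 1:2 dilution in Mueller-Hinton broth -> 512 µg mL-1
wells = [working / 2 ** i for i in range(7)]    # serial 1:2 dilutions: 512, 256, ..., 8 µg mL-1
final = [c / 2 for c in wells]                  # assumed halving after adding an equal volume of inoculum
print(final)                                    # [256.0, 128.0, 64.0, 32.0, 16.0, 8.0, 4.0]
```

Under these assumptions, the in-well test range matches the reported 256 to 4 µg mL−1.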
For tests using yeast, the stock solutions were prepared in RPMI 1640 medium (Sigma-Aldrich, St. Louis, MO, USA). Media without the extract and the solvent were used as growth and sterility controls. Chloramphenicol (Sigma-Aldrich; 0.78–100 µg mL−1) was used as a positive antibacterial control, and amphotericin B (Sigma-Aldrich; 0.03–15 µg mL−1) was used as a positive antifungal control. After plate assembly, 100 µL of each bacterial and yeast strain was inoculated per well in order to obtain 5 × 10^5 CFU mL−1 (or 5 × 10^4 CFU per well) and 0.5–2.5 × 10^3 CFU mL−1 (or 0.5–2.5 × 10^2 CFU per well), respectively. Then, the plates were incubated at 37 °C for 24 h for bacteria and 48 h for Candida species. All tests were performed in triplicate in at least two independent experiments. The MIC was defined as the lowest concentration of biosurfactant that completely inhibited the visible growth of the test microorganisms.

Biosurfactant-mediated disruption of pre-formed biofilms
The antibiofilm activity of the biosurfactants against several microbial strains was determined using the procedure described by Heinemann et al. [44]. Bacterial isolates were grown in BHI for 24 h at 37 °C and yeasts were grown in RPMI for 48 h at 30 °C. After incubation, the cells were centrifuged at 7200×g for 15 min, washed twice with PBS and used to prepare an inoculum at a density equivalent to 0.5 on the McFarland scale. 180 µL of BHI broth containing 1% glucose (bacteria) or RPMI (yeast) and 20 µL of the standardized inocula were added to each well of untreated 96-well polystyrene plates (Corning, NY, USA). The plates were then incubated for 24 h at 37 °C for bacteria and at 30 °C for yeasts. After incubation, unattached cells were removed by washing the wells, and biosurfactants at concentrations ranging from 180 to 22.5 mL L−1 were then added. The plates were further incubated under the same conditions. The assay was performed with four replicates of the control (medium without extract/biosurfactant) and four replicates of each concentration of the extract/biosurfactant studied. Non-adherent cells were removed using a multichannel pipette and the wells were washed three times with PBS. Cells that adhered to the bottoms of the wells (biofilm) were fixed with 300 µL of 99% methanol and stained for 5 min with a solution of 1% crystal violet. The excess stain was removed by placing the plate under running tap water. The plates were then air-dried and the dye that bound to the adherent cells was solubilized with 200 µL of 95% ethanol. The solutions were transferred to another polystyrene plate and the absorbance was measured at 450 nm using a Multiskan MMC/340 microplate reader (Thermo Scientific GO, Waltham, MA, USA). The percentage of biofilm disruption was assessed by comparing the absorbance readings of the wells treated with the extract/biosurfactant and the control wells (not treated with the extract/biosurfactant).
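The comparison of treated and control wells is not given as an explicit formula in the text; a minimal sketch of that comparison, assuming the usual normalization to the untreated control, is:

$$\text{Biofilm disruption}\,(\%) = \frac{A_{450,\,\text{control}} - A_{450,\,\text{treated}}}{A_{450,\,\text{control}}} \times 100$$

where A450 denotes the crystal violet absorbance of the control and biosurfactant-treated wells; the exact normalization used by the authors is an assumption here.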
Auto-aggregation and co-aggregation assays and cell surface hydrophobicity
Auto-aggregation assays were performed using the method reported by Vandevoorde et al. [45]. L. jensenii P6A and L. gasseri P65 were grown in flasks containing 100 mL of MRS broth for 48 h at 37 °C and 120 rpm. The cells were harvested by centrifugation (10,000×g for 10 min at 10 °C), washed twice in demineralized water, and re-suspended in PBS (pH 7) at an OD600 of 0.6 ± 0.5 (approximately 10^8 CFU mL−1). The OD was measured using a spectrophotometer (Shimadzu CPS 240A) at regular intervals over a 4 h period, without disturbing the microbial suspension, and the sedimentation kinetics were obtained. The auto-aggregation coefficient (AC) was calculated at different times using the method reported by Kos et al. [46] as follows:
$$AC_t = \frac{OD_i - OD_t}{OD_t} \times 100$$
where OD_t is the optical density at 600 nm of the microbial suspension at time t (0.5, 1, 2, 3 or 4 h), and OD_i is the initial optical density. The co-aggregation assay was performed using the same method as the auto-aggregation assay, the same pathogen isolates and the two LAB isolates, L. jensenii P6A and L. gasseri P65. Equal volumes (2 mL) of each Lactobacillus suspension were added to suspensions of the following pathogens: E. coli, S. saprophyticus, E. aerogenes, K. pneumoniae, C. albicans, C. krusei, and C. tropicalis. Then, the samples were vortexed for 15 s. Control tubes containing 4 mL of each bacterial suspension were prepared simultaneously. The OD of the suspensions was measured after the initial preparation and after 4 h of incubation at 25 °C. The co-aggregation percentage was calculated using the equation reported by Handley et al. [47]:
$$\text{Co-aggregation}\,(\%) = \frac{\frac{OD_x + OD_y}{2} - OD_{(x+y)}}{\frac{OD_x + OD_y}{2}} \times 100$$
where x and y represent OD measurements of tubes containing either the lactobacilli or the pathogen suspensions, respectively, and (x + y) represents OD measurements of tubes containing a mixture of the pathogen and Lactobacillus suspensions. The two strains were treated with LiCl (5 M) and incubated for 30 min at room temperature to remove the S layer (cell surface proteins) and to evaluate the influence of the S layer on auto-aggregation and co-aggregation capacities. Auto- and co-aggregation assays were then performed and the results were compared to those of assays conducted with cells retaining the S layer.

Results and discussion
Production and tensoactive properties of the SACs
The production of biosurfactants by L. jensenii P6A and L. gasseri P65 during growth in MRS broth was monitored by measuring the emulsifying activity, using toluene as the organic phase, and the surfactant activity of the supernatant. The emulsifying and surfactant activities at 72 h of incubation corresponded to 63.75% and 56 mN m−1 for L. jensenii P6A and 70% and 46 mN m−1 for L. gasseri P65. The production corresponded to 0.27 g L−1 for L. jensenii P6A and 0.42 g L−1 for L. gasseri P65. This low production pattern has already been described for other LAB, with values ranging from 0.02 to 0.1 g L−1, whereas for genera such as Pseudomonas and Bacillus, the yield varies from 2 to 15 g L−1 [20, 22, 27, 28, 48]. In the assays performed to establish the CMC of the crude biosurfactants isolated from L. jensenii P6A and L. gasseri P65, a progressive decrease in surface tension was observed as the biosurfactant concentration increased. The CMC values of the biosurfactants from P6A and P65 were calculated as 7.1 and 8.58 mg mL−1, respectively, as shown in Fig. 1. Different mathematical methods, based on parametric and nonparametric estimation of the regression function, have been used with good results to determine the CMC and other features in various application fields [49,50,51]. In this study, we used the method of simple linear regression. It was assumed that the regression function corresponds to a straight line, hence the term regression line, and the estimation of this straight line reduces to the estimation of two of its parameters, the slope and the intercept. The advantage of these parametric methods is that, when the functional model assumed for X (here, the CMC concentration) is adequate, the estimation is reduced to a few parameters and is therefore extremely efficient. For CMC determination, two lines were estimated, and it was assumed that the intersection point of these lines indicates the precise CMC and thus the biosurfactant concentration with the highest capacity for surface tension reduction.
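As a minimal illustration of this two-line approach (a sketch under stated assumptions, not the authors' actual analysis code), the snippet below fits one straight line to the surface-tension readings below a trial breakpoint and another above it, keeps the split with the lowest total squared error, and reports the intersection of the two fitted lines as the CMC estimate:

```python
import numpy as np

def estimate_cmc(conc, tension):
    """Estimate the CMC by fitting pre- and post-CMC regression lines to
    surface tension vs. concentration data and intersecting them."""
    conc, tension = np.asarray(conc, float), np.asarray(tension, float)
    best = None
    # Try every split that leaves at least two points on each side.
    for k in range(2, len(conc) - 1):
        a1, b1 = np.polyfit(conc[:k], tension[:k], 1)   # pre-CMC line
        a2, b2 = np.polyfit(conc[k:], tension[k:], 1)   # post-CMC line
        sse = (np.sum((np.polyval([a1, b1], conc[:k]) - tension[:k]) ** 2)
               + np.sum((np.polyval([a2, b2], conc[k:]) - tension[k:]) ** 2))
        if best is None or sse < best[0]:
            best = (sse, a1, b1, a2, b2)
    _, a1, b1, a2, b2 = best
    cmc = (b2 - b1) / (a1 - a2)            # x-coordinate of the intersection
    return cmc, np.polyval([a1, b1], cmc)  # CMC and the tension at that point

# Hypothetical data that only mimic the reported trend (not measured values):
conc = [0.1, 0.5, 1, 2, 4, 6, 8, 10, 20, 50]                 # mg mL-1
tension = [70, 66, 62, 57, 51, 46, 43.5, 43.2, 43.0, 42.9]   # mN m-1
print(estimate_cmc(conc, tension))
```

This is only one way to place the breakpoint; the authors state only that two regression lines were fitted below and above the CMC and intersected.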
At the points corresponding to the CMC, the biomolecules from L. jensenii P6A and L. gasseri P65 reduced the water surface tension from 72 mN m−1 to approximately 43.2 and 42.5 mN m−1, respectively. The ability of a biosurfactant to reduce surface and interfacial tensions determines its functionality and effectiveness. For example, a good surfactant reduces the surface tension of water from 73.20 to 35.0 mN m−1 [52]. The values observed for the biosurfactants from P6A and P65 are in the range of those observed for sodium dodecyl sulfate (SDS) and for biosurfactants isolated from different lactobacilli strains and other lactic acid bacteria (LAB) [20, 22, 27, 28, 52].

Fig. 1 Effects of different concentrations of biosurfactants on the surface tension of water at room temperature (25 °C). a Surface tension (mN m−1) of the biosurfactants produced by L. jensenii P6A and b L. gasseri P65. The CMC was determined from the intersection between the regression lines that best described the two parts of the curve, below and above the CMC (arrow). The results represent the average of two independent measurements

The relationships between the concentrations of the biosurfactants produced by L. gasseri P65 and L. jensenii P6A and the emulsifying activity, expressed as the emulsification index (E24), were evaluated using toluene as the organic phase (Fig. 2). In general, E24 values increased as the concentration of the biosurfactant increased to 20 mg mL−1. For the biosurfactant produced by P6A, E24 values ranged from 21% in tests containing 0.5 mg mL−1 to 88.7% in tests with 20 mg mL−1 biosurfactant. No emulsifying activity was observed in tests using concentrations lower than 0.5 mg mL−1, and the values obtained between 1 and 17.5 mg mL−1 did not differ significantly (p > 0.05). For the biosurfactant produced by L. gasseri P65, the emulsification index ranged from 10% in tests with 0.75 mg mL−1 biosurfactant to 77% in tests with 12.5 mg mL−1 biosurfactant. At concentrations higher than 5 mg mL−1, there was no significant difference in emulsifying activity (p > 0.05) according to Tukey's test.

Fig. 2 Effects of the concentration of biosurfactants produced by L. jensenii P6A and L. gasseri P65 on emulsifying activity, expressed as the emulsification index (E24), using toluene as the organic phase

The biosurfactants produced by L. jensenii P6A and L. gasseri P65 presented different emulsification activities depending on the hydrophobic substrate evaluated. The biomolecule produced by L. jensenii P6A emulsified the organic solvents kerosene, toluene, hexane and xylene with indexes greater than 62% and diesel oil at 28.3%, but showed low levels of emulsification of gasoline and hexadecane (Fig. 3a). E24 values ranged from 61 to 70% for vegetable oils (cotton, olive, and sunflower oils).
For the biosurfactant produced by L. gasseri P65 (Fig. 3b), high E24 values were observed in assays with vegetable oils (cotton, olive and sunflower oils), whereas low values were observed for gasoline, diesel oil, hexane, xylene and hexadecane. The values for kerosene and toluene were 28.0 and 64.6%, respectively.

Fig. 3 Emulsifying activities of 5 mg mL−1 biosurfactants produced by L. jensenii P6A (a) and L. gasseri P65 (b) on the aqueous phase using different hydrophobic substrates, expressed as the emulsification index (E24)

The chemical structures of both the biosurfactants and the emulsions' organic phases may explain the variations observed in the emulsifying activity saturation point and in the profile of the emulsified compounds. The different results observed for the L. jensenii P6A biosurfactant with diesel oil and gasoline can be explained by the fact that these substrates consist of complex mixtures of hydrocarbons, with a predominance of shorter-chain hydrocarbons in gasoline and longer-chain hydrocarbons in diesel. Vegetable oils have a different composition, mainly consisting of triglycerides. In general, the indexes found in our study were greater than the values reported for biosurfactants produced by other LAB. Emulsifying indexes between 40 and 49% with gasoline, kerosene and octane were observed for the biosurfactant produced by L. pentosus CECT4023 [53, 54]. In addition, the biosurfactant produced by L. plantarum CFR2194 exhibited emulsifying indexes between 13.6 and 38.2% for a variety of water-immiscible substrates [55]. The biomolecules from LAB strains (L26, L35 and L61) emulsified kerosene, sunflower oil, and olive oil with indexes varying from 8.22 to 26.5%, and the emulsions formed with the edible oils were more stable than those formed with kerosene [56]. The lipopeptide from Bacillus subtilis K1, isolated from aerial roots of banyan, also showed good emulsification rates for olive oil [57].

Characterization of the biosurfactants
The carbohydrate, protein and lipid concentrations of the L. jensenii P6A and L. gasseri P65 biosurfactant extracts were determined. The carbohydrate concentrations ranged from 38.61 to 51.49%, and the protein and lipid concentrations ranged from 9.81 to 15.17% and from 29.45 to 49.53%, respectively. These results indicate diverse biosurfactant structures, which is confirmed by the FTIR analysis (Fig. 4). FTIR is widely used to characterize the functional groups of organic compounds based on the characteristic infrared absorption bands of specific chemical groups [58]. The presence of a broad band at 3500–3200 cm−1 in the spectra of the biosurfactants produced by L. gasseri P65 and L. jensenii P6A indicates the presence of OH groups (and, possibly, NH groups) of glycoproteins (Fig. 4; Table 1). Another band observed at approximately 1650 cm−1 corresponds to the C=O stretching of peptide bonds. The absorption band observed in the region near 1720 cm−1, which was partly superimposed on the band at 1650 cm−1, may be attributed to the C=O stretching of lipid esters. The absorption bands around 1230 and ~1100 cm−1 can be attributed to asymmetric and symmetric ester C–O–C stretching, respectively. Intense bands were also observed at 1100–1000 cm−1, indicating the presence of C–O sugar linkages.

Fig. 4 FTIR spectra of the biosurfactants produced by L. jensenii P6A (a) and L. gasseri P65 (b)

Table 1 Correlation between FTIR spectra and functional groups detected in biosurfactants produced by L. jensenii P6A and L. gasseri P65
Although little is known about the chemical structures of the biosurfactants produced by lactobacilli, some researchers have reported initial characterizations [53]. Indeed, it was found that L. pentosus biosurfactants are composed of 44.7 ± 1.5% soluble protein and 13.4 ± 2.9% total sugars; those obtained from L. fermentum B54 are rich in proteins, with few polysaccharides and phosphate groups [21]. These results are similar to those described for high-molecular-weight biosurfactants produced by bacteria and yeast, which are characterized by the presence of primarily 16- and 18-carbon fatty acids. The biosurfactants produced by L. gasseri P65 and L. jensenii P6A showed different fatty acid profiles in the lipid portion, with only one fatty acid in common, 14-methylpentadecanoic acid (a 16-carbon fatty acid) (Table 2). This fatty acid predominated in the biosurfactant produced by L. jensenii P6A, constituting 69% of the lipid fraction, while eicosanoic acid (a 20-carbon fatty acid) predominated in the biomolecule produced by L. gasseri P65, corresponding to 47.43%. Although the two biosurfactants contained the same sugars, the percentages varied. In addition, rhamnose was not detected in the biosurfactant produced by L. gasseri P65. These differences may explain the profiles of emulsified compounds, the E24 and CMC values, and the stability profiles at different temperatures and pH values and in the presence of different salt ions. Several studies have reported similar structures for LAB. Lactobacillus helveticus produces a glycolipid-type biosurfactant closely resembling xylolipids [59]. The biosurfactants produced by Lactococcus lactis 53 are composed of glycoproteins with glucose, rhamnose, fucose and mannose [28]. The biosurfactant from L. fermentum B54 is also composed of a large amount of protein, with fewer polysaccharides and phosphate groups [21]. The spectra of the biosurfactants produced by L. lactis, L. paracasei and L. pentosus showed bands at approximately 3200–3500 cm−1, characteristic of glycoprotein stretching [7, 29, 53]. Bands at approximately 1675 and 1725 cm−1, corresponding to C=O (carbonyl groups) and NH (peptides), respectively, have been observed in the L. pentosus biosurfactant, whereas bands at 2900 and 1000–1200 cm−1 [53] indicate the presence of glycoproteins. The biosurfactants produced by L. rhamnosus CCM 1825 and L. fermenti 126 have also been evaluated, with bands at approximately 3285, 1635 and 1549 cm−1, typical of NH and CO–N bonds in proteins, found for the latter. Similar spectra were observed for L. rhamnosus. The bands at approximately 2964, 2929 and 1458 cm−1 and at 2961, 2936 and 1453 cm−1 for the biosurfactants produced by L. fermenti 126 and L. rhamnosus CCM 1825, respectively, correspond to the CH bonds of aliphatic groups, while peaks at 1200–1000 cm−1 confirm the presence of polysaccharide fractions [60]. When comparing the infrared spectroscopic data for the compounds produced by L. gasseri P65 and L. jensenii P6A with data reported for other biosurfactants produced by LAB, it can be concluded that the compounds reported in the present study are related to those produced by L. lactis and L. paracasei. These observations suggest that these substances have a complex structure composed of glycolipoproteins.

Table 2 Fatty acid and monosaccharide compositions of biosurfactants produced by L. jensenii P6A and L. gasseri P65
Stability of the biosurfactants
The applicability of biosurfactants in several fields depends on their stability at different temperatures, pH values and salt concentrations. The biosurfactants produced by L. jensenii P6A and L. gasseri P65 remained stable after a 60-min incubation at 100 °C, with no loss of activity (data not shown). This profile is similar to previously reported results. Desai and Banat [4] found that heat treatment (autoclaving at 120 °C for 15 min) of Bacillus sp. biosurfactants did not cause appreciable changes in their surface and emulsifying activities, and a biosurfactant isolated from L. paracasei showed unaltered surfactant activity after 120 h of incubation at 60 °C [29]. The emulsions formed also remained relatively stable after standing for 24 h at pH values ranging from 2 to 10, maintaining E24 values of approximately 66.34 ± 1.12% (data not shown). The biomolecules are therefore relatively more stable to pH variations than other biomolecules described for LAB. Gudiña et al. [29] observed precipitation of some of the biosurfactant components produced by L. paracasei at pH values lower than 6, which may have contributed to the alterations in the surface activities. The addition of potassium chloride and sodium bicarbonate to the emulsions formed from toluene and the biosurfactants produced by L. jensenii P6A and L. gasseri P65 did not affect the emulsions. Indeed, the emulsifying indexes of L. jensenii P6A and L. gasseri P65 were maintained between 62 and 65.35% and between 58.43 and 65.54%, respectively, even when the salt concentration exceeded the limits of saturation (data not shown). However, when NaCl was added to the emulsions, there was an approximately 30% decrease in the height of the emulsified layer for the L. jensenii P6A biosurfactant and an approximately 31% decrease for the L. gasseri P65 biosurfactant. This behavior is already known; the emulsifying activity of commercial surfactants tends to decrease with increasing NaCl, falling at concentrations of about 10–12% [61].

The evaluation of the antimicrobial effects of the biosurfactants of both L. jensenii P6A and L. gasseri P65 showed similar MIC values, corresponding to 16 µg mL−1 for E. coli and 128 µg mL−1 for K. pneumoniae, E. aerogenes and S. saprophyticus (Table 3). Furthermore, the extracts at a concentration of 16 µg mL−1 completely inhibited C. albicans growth, but were not active against the other species of Candida (C. krusei and C. tropicalis) at the highest concentration tested (256 µg mL−1). Although there are few reports regarding the antimicrobial activity of biosurfactants isolated from lactobacilli, some have been reported to exhibit activity against various microorganisms. For example, biosurfactants isolated from L. jensenii and L. rhamnosus completely inhibited the growth of Acinetobacter baumannii, E. coli and S. aureus at 25–50 mg mL−1 [62]. However, the authors did not chemically characterize the biosurfactants. In another study, biosurfactants produced by Lactobacillus strains inhibited the growth of Pseudomonas spp. isolated from fresh beef, with minimal inhibitory concentrations (MIC) ranging from 25 to 50 mg mL−1 [63]. The biosurfactants studied here exhibited MIC values that support further studies aimed at their use in antibacterial therapies or probiotic products.

Table 3 Antimicrobial activity of biosurfactants produced by L. jensenii P6A and L. gasseri P65 on uropathogens
Disruption of pre-formed biofilms
The Lactobacillus biosurfactants disrupted the biofilms of all tested microorganisms to different degrees. The greatest percentage of biofilm disruption was obtained in tests with 180 µL mL−1 of the L. jensenii P6A biosurfactant and E. aerogenes (64%) (Fig. 5a). For the other microorganisms, the disruption did not exceed 36% at the same concentration. The L. gasseri P65 biosurfactant disrupted E. coli (46.4%) and S. saprophyticus (39%) biofilms to a greater degree, but the values did not exceed 33% for the other microorganisms (Fig. 5b). Biosurfactants can adsorb to surfaces, forming a film at the interface by orienting their polar and nonpolar groups according to the hydrophilicity/hydrophobicity of the surface. This interaction between biosurfactants and a substratum surface alters the surface hydrophobicity, thereby interfering with microbial adhesion and desorption processes [2, 28].

Fig. 5 Percentage of disruption of biofilms produced by pathogenic microorganisms on the surface of polystyrene plates in the presence of different concentrations of biosurfactants produced by L. gasseri P65 (a) and L. jensenii P6A (b). The test was conducted with four replicates after 24 h of incubation

Diverse studies have highlighted the importance of biofilm formation as a virulence factor for uropathogens such as G. vaginalis and E. coli [32, 33]. The in vitro model of adherence and biofilm formation used in this study is limited by the fact that the use of polystyrene plates does not mimic the in vivo conditions on epithelial cells. However, despite their limitations, in vitro models can be very informative, and they are crucial for obtaining a better understanding of the activities of anti-biofilm compounds. Results demonstrating the anti-biofilm activities of biomolecules produced by lactobacilli help to validate and/or explain their activities in vivo, for example, following intravaginal administration in a pharmaceutical form for the prevention and treatment of recurrent infections of the genital tract.

Auto-aggregation and co-aggregation of L. jensenii P6A and L. gasseri P65
The auto-aggregation of L. jensenii P6A and L. gasseri P65 and their co-aggregation with pathogens were evaluated. The sedimentation rate of the Lactobacillus strains was measured over a 4-h period using washed cells resuspended in PBS (pH 7.0). Low sedimentation levels after 4 h were obtained for both strains, with values corresponding to 7.38 and 10.74% for L. jensenii P6A and L. gasseri P65, respectively (data not shown). In co-aggregation tests with pathogenic microorganisms, L. jensenii P6A exhibited greater aggregation with E. coli (27.4%) and C. albicans (25.9%); for L. gasseri P65, the highest co-aggregation score was found for C. tropicalis (11.9%) (Table 4). In general, the activities were higher in the assays with Gram-negative bacteria (E. coli, E. aerogenes and K. pneumoniae) than in those with the Gram-positive bacterium (S. saprophyticus).

Table 4 Co-aggregation activities of L. jensenii P6A and L. gasseri P65 after 4 h of incubation in PBS

Auto-aggregation of lactobacilli appears to be necessary for adhesion to epithelial cells and mucosal surfaces, while co-aggregation with pathogens has been considered a strategy to exclude pathogenic bacteria from their hosts [46, 64]. The very close proximity of the bacteria within an aggregate allows antimicrobial substances released by the lactobacilli to directly inhibit the pathogens.
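As a purely illustrative numerical example of the co-aggregation equation given in the Methods (the OD values below are hypothetical, not measured data), suppose the separate suspensions read OD_x = 0.65 and OD_y = 0.55, and the mixed suspension reads OD_(x+y) = 0.45 after 4 h of incubation:

$$\text{Co-aggregation}\,(\%) = \frac{\frac{0.65 + 0.55}{2} - 0.45}{\frac{0.65 + 0.55}{2}} \times 100 = \frac{0.60 - 0.45}{0.60} \times 100 = 25\%$$

The larger the drop in the OD of the mixed suspension relative to the average of the individual suspensions, the higher the co-aggregation percentage.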
The lactobacilli in our study did not show substantial auto-aggregation ability, which differs from a previous study reporting auto-aggregation indexes of 51% for Lactobacillus paracasei strains, 45% for L. acidophilus M92 and 58% for Lactobacillus kefir 2345 [29, 65]. The co-aggregation scores with pathogens were also lower than those reported in other studies. As an example, a strain of L. plantarum showed a co-aggregation score of 41.5% with enterohemorrhagic E. coli (EHEC), 40.5% with Salmonella enterica serotype Typhimurium, and 37.4% with Listeria monocytogenes [66]. The co-aggregation activity of lactobacilli can be variable and appears to be dependent on the strain used in the tests, as observed for L. delbrueckii L10. This strain could co-aggregate with S. aureus 1351 and C. albicans ATCC 70014 at percentages of 48.88 and 59.37%, respectively [67]. In contrast, Pan et al. [68] reported co-aggregation capacities of L. acidophilus and L. delbrueckii with Clostridium butyricum of only 5.76 ± 6.32% and 1.2 ± 1.6%, respectively. The auto-aggregation rates of the Lactobacillus sp. strains decreased after removal of the S layer, falling to 5.0 and 6.8% for L. jensenii P6A and L. gasseri P65, respectively (data not shown). Similarly, a decrease in co-aggregation capacity with the pathogens was found. The greatest reduction after removal of the S layer was between L. jensenii P6A and E. coli, with values decreasing from 27.4 to 1.4% (Table 4). Some authors have reported the importance of this protein layer for the adhesion of Lactobacillus spp. For example, Kos et al. [46] showed that the removal of the protein layer decreased the auto-aggregation capacity of L. acidophilus M92 isolates. In another study, Golowczyc et al. [69] demonstrated a decrease in adhesion to Saccharomyces lipolytica cells for an L. kefir isolate after removal of the S layer. In addition, several studies have shown that pH, temperature and intergeneric interactions can influence these reactions [64].

Determination of the cell surface hydrophobicity of pathogenic microorganisms after incubation with the biosurfactants produced by L. jensenii P6A and L. gasseri P65
To examine changes in the cell surface that occurred after the pathogens were exposed to the biosurfactants, their cell surface hydrophobicity was quantified. The percentages of C. krusei, S. saprophyticus and E. aerogenes cells that adhered to hexadecane were 47, 11 and 2%, respectively, after 24 h of incubation with the biosurfactant produced by L. jensenii P6A (data not shown). For the biosurfactant produced by L. gasseri P65, the percentages for E. coli and E. aerogenes were 47 and 1.5%, respectively. These data indicate that the biosurfactants altered the cell surface of these pathogens, changing their adhesion to the hydrophobic substrate. For the other microorganisms, the percentage of adherence to hexadecane was comparable to the controls. It has previously been suggested that the hydrophobicity of microbial cells, including viral particles, can drive the adhesion of many pathogens [70].

Conclusions
In this study, biosurfactants produced by two Lactobacillus strains, L. jensenii P6A and L. gasseri P65, were characterized. The emulsifying activities of the biosurfactants from these strains were stable at different pH values (2 to 10) and at high temperature (100 °C). However, the emulsifying activity was destabilized in the presence of NaCl, which was not observed with various concentrations of KCl and NaHCO3.
Although the auto-aggregation and co-aggregation capacities observed for these Lactobacillus strains with pathogens may not, by themselves, be very effective in protecting the vaginal mucosa, L. jensenii P6A and L. gasseri P65 produced biosurfactants with considerable antimicrobial activities against E. coli and C. albicans and anti-adhesive activity against E. coli, S. saprophyticus and E. aerogenes. These results indicate the potential use of these biosurfactants as alternative antimicrobial agents in medicine for applications against pathogenic microorganisms that are responsible for infections and diseases in the gastrointestinal and urogenital tracts and the skin.

References
Franzetti A, Tamburini E, Banat IM. Applications of biological surface active compounds in remediation technologies. Adv Exp Med Biol. 2010;672:121–34. doi:10.1007/978-1-4419-5979-9_9. Neu TR. Significance of bacterial surface-active compounds in interaction of bacteria with interfaces. Microbiol Rev. 1996;60:151–66. Cooper DG, Zajic JE. Surface active compounds from microorganisms. Adv Appl Microbiol. 1980;26:229–53. doi:10.1016/S0065-2164(08)70335-6. Desai JD, Banat IM. Microbial production of surfactants and their commercial potential. Microbiol Mol Biol Rev. 1997;61:47–64. Banat IM, Franzetti A, Gandolfi I, Bestetti G, Martinotti MG, Fracchia L, Smyth TJ, Marchant R. Microbial biosurfactants production, applications and future potential. Appl Microbiol Biotechnol. 2010;87:427–44. doi:10.1007/s00253-010-2589-0. Liu JF, Mbadinga SM, Yang SZ, Gu JD, Mu BZ. Chemical structure, property and potential applications of biosurfactants produced by Bacillus subtilis in petroleum recovery and spill mitigation. Int J Mol Sci. 2015;16:4814–37. doi:10.3390/ijms16034814. Rodrigues L, Moldes A, Teixeira J, Oliveira R. Kinetic study of fermentative biosurfactant production by Lactobacillus strains. Biochem Eng J. 2006;28:109–16. doi:10.1016/j.bej.2005.06.001. Alejandro CS, Humberto HS, María JF. Production of glycolipids with antimicrobial activity by Ustilago maydis FBD12 in submerged culture. Afr J Microbiol Res. 2011;5:2512–23. doi:10.5897/AJMR10.814. Monteiro AS, Miranda TT, Lula I, Denadai ÂM, Sinisterra RD, Santoro MM, Santos VL. Inhibition of Candida albicans CC biofilms formation in polystyrene plate surfaces by biosurfactant produced by Trichosporon montevideense CLOA72. Colloids Surf B Biointerfaces. 2011;84:467–76. doi:10.1016/j.colsurfb.2011.02.001. Shokouhfard M, Kermanshahi RK, Shahandashti RV, Feizabadi MM, Teimourian S. The inhibitory effect of a Lactobacillus acidophilus derived biosurfactant on biofilm producer Serratia marcescens. Iran J Basic Med Sci. 2015;18:1001–7. Satpute SK, Kulkarni GR, Banpurkar AG, Banat IM, Mone NS, Patil RH, Cameotra SS. Biosurfactants from Lactobacilli species: properties, challenges and potential biomedical applications. J Basic Microbiol. 2016;56:1–19. doi:10.1002/jobm.201600143. Makkar RS, Cameotra SS. An update on the use of unconventional substrates for biosurfactant production and their new applications. Appl Microbiol Biotechnol. 2002;58:428–34. doi:10.1007/s00253-001-0924-1. Reid G, Bruce AW, Smeianov V. The role of Lactobacilli in preventing urogenital and intestinal infections. Int Dairy J. 1998;8:555–62. doi:10.1016/S0958-6946(98)00075-2. Borges S, Silva J, Teixeira P. The role of lactobacilli and probiotics in maintaining vaginal health. Arch Gynecol Obstet. 2014;289:479–89. doi:10.1007/s00404-013-3064-9. Pendharkar S, Brandsborg E, Hammarström L, Marcotte H, Larsson PG.
Vaginal colonisation by probiotic lactobacilli and clinical outcome in women conventionally treated for bacterial vaginosis and yeast infection. BMC Infect Dis. 2015;15:255. doi:10.1186/s12879-015-0971-3. Hoesl CE, Altwein JE. The probiotic approach: an alternative treatment option in urology. Eur Urol. 2005;47:288–96. doi:10.1016/j.eururo.2004.09.011. Lepargneur JP, Rousseau V. Protective role of the Doderleïn flora. J Gynecol Obstet Biol Reprod. 2002;31:485–94. Barrons R, Tassone D. Use of Lactobacillus probiotics for bacterial genitourinary infections in women: a review. Clin Ther. 2008;30:453–68. doi:10.1016/j.clinthera.2008.03.013. Valore EV, Park CH, Igreti SL, Ganz T. Antimicrobial components of vaginal fluid. Am J Obstet Gynecol. 2002;187:561. doi:10.1067/mob.2002.125280. Velraeds MMC, Van Der Mei HC, Reid G, Busscher HJ. Physicochemical and biochemical characterization of biosurfactats released by Lactobacillus strains. Colloids Surf B Biointerfaces. 1996;8:51–61. doi:10.1016/S0927-7765(96)01297-0. Velraeds MMC, Van Der Mei HC, Reid G, Busscher HJ. Inhibition of initial adhesion of uropathogenic Enterococcus faecalis by biosurfactants from Lactobacillus isolates. Appl Environ Microbiol. 1996;62:1958–63. doi:10.1016/S0090-4295(97)00065-4. Busscher VJ, Van Hoogmoed CG, Geertsma-Doornbusch GI, Van Der Kuijl-Boolj M, Van Der Mei HC. Streptococcus thermophilus and its biosurfactants inhibit adhesion by Candida spp. on silicone rubber. Appl Environ Microbiol. 1997;63:3810–7. Velraeds MM, van de Belt-Gritter B, Busscher HJ, Reid G, van der Mei HC. Inhibition of uropathogenic biofilm growth on silicone rubber in human urine by lactobacilli—a teleologic approach. World J Urol. 2000;18:422–6. doi:10.1007/PL00007084. Fracchia L, Cavallo M, Allegrone G, Martinotti MG. A Lactobacillus-derived biosurfactant inhibits biofilm formation of human pathogenic Candida albicans biofilm producers. Appl Microbiol Biotechnol. 2010;2:827–37. Rodrigues LR, Van der Mei HC, Teixeira JA, Oliveira R. Influence of biosurfactants from probiotic bacteria on formation of biofilms on voice prosthesis. Appl Environ Microbiol. 2004;70:4408–10. doi:10.1128/AEM.70.7.4408-4410.2004. Van der Mei HC, Free RH, Elving GJ, van Weissenbruch R, Albers F, Busscher HJ. Effect of probiotic bacteria on probiotic organisms for infection control and prevalence of yeasts in oropharyngeal biofilms on silicone rubber voice prostheses in vitro. J Med Microbiol. 2000;49:713–8. doi:10.1099/0022-1317-49-8-713. Rodrigues LR, Teixeira JA, Van der Mei HC, Oliveira R. Isolation and partial characterization of a biosurfactant produced by Streptococcus thermophilus A. Colloids Surf B Biointerfaces. 2006;53:105. doi:10.1016/j.colsurfb.2006.08.009. Rodrigues LR, Teixeira JA, Van der Mei HC, Oliveira R. Phycochemical and functional characterization of a biosurfactant produced by Lactobacillus lactis 53. Colloids Surf B Biointerfaces. 2006;49:79–86. doi:10.1016/j.colsurfb.2006.03.003. Gudiña EJ, Teixeira JÁ, Rodrigues LR. Isolation and functional characterization of a biosurfactant produced by Lactobacillus paracasei. Colloids Surf B. 2010;76:298. doi:10.1016/j.colsurfb.2009.11.008. Mendes-Soares H, Suzuki H, Hickey RJ, Forney LJ. Comparative functional genomics of Lactobacillus spp. reveals possible mechanisms for specialization of vaginal lactobacilli to their environment. J Bacteriol. 2014;196(7):1458–70. doi:10.1128/JB.01439-13. 
Teixeira GS, Carvalho FP, Arantes RM, Nunes AC, Moreira JLS, Mendonça M, Almeida RB, Farias LM, Carvalho MAR, Nicoli JR. Characteristics of Lactobacillus and Gardnerella vaginalis from women with or without bacterial vaginosis and their relationships in gnotobiotic mice. J Med Microbiol. 2012;61:1074–81. doi:10.1099/jmm.0.041962-0. Fattahi S, Kafil HS, Nahai MR, Asgharzadeh M, Nori R, Aghazadeh M. Relationship of biofilm formation and different virulence genes in uropathogenic Escherichia coli isolates from Northwest Iran. GMS Hyg Infect Control. 2015;10:Doc11. doi:10.3205/dgkh000254. Machado A, Cerca N. Influence of biofilm formation by Gardnerella vaginalis and other anaerobes on bacterial vaginosis. J Infect Dis. 2015;212:1856–61. doi:10.1093/infdis/jiv338. De Man JC, Rogosa M, Sharpe ME. A medium for the cultivation of lactobacilli. J Appl Bacteriol. 1960;23:130. doi:10.1111/j.1365-2672.1960.tb00188.x. Van Hoogmoed CG, Van der Kuijl-Booij M, Van der Mei HC, Busscher HJ. Inhibition of Streptococcus mutans NS adhesion to glass with and without a salivary conditioning film by biosurfactant-releasing Streptococcus mitis strain. Appl Environ Microbiol. 2000;66:659. doi:10.1128/AEM.66.2.659-663.2000. Cameron D, Cooper DG, Neufeld RJ. The mannoprotein of Saccharomyces cerevisiae is an effective bioemulsifier. Appl Environ Microbiol. 1988;54:1420–25. Lowry OH, Rosebrough NJ, Farr AL, Randall RJ. Protein measurement with the Folin phenol reagent. J Biol Chem. 1951;193:265–75. Dubois M, Gilles KA, Hamilton JK, Rebers PA, Smith F. Colorimetric method for determination of sugars and related substances. Anal Chem. 1956;28:350–6. doi:10.1021/ac60111a017. Piretti MV, Zuppa F, Pagliuca G, Taioli F. Variations of fatty acid constituents in selected tissues of the bivalve mollusc Scapharca inaequivalvis. Comp Biochem Physiol. 1988;89:183–7. Bligh EG, Dyer WJ. A rapid method of total lipid extraction and purification. Can J Biochem Physiol. 1959;37:911–7. Abu GO, Weiner RM, Rice J, Colwell RR. Properties of an extracellular adhesive polymer from the marine bacterium Shewanella colwelliana. Biofouling. 1991;3:69–84. doi:10.1080/08927019109378163. CLSI – Clinical and Laboratory Standards Institute. Methods for dilution antimicrobial susceptibility tests for bacteria that grow aerobically. Approved standard, 10th ed. CLSI document M07-A10. Wayne: Clinical and Laboratory Standards Institute; 2015. CLSI – Clinical and Laboratory Standards Institute. Reference method for broth dilution antifungal susceptibility testing of yeast. Approved Standard M27-A3. Wayne: CLSI; 2008. Heinemann C, Van Hylckama Vlieg JE, Janssen DB, Busscher HJ, Van der Mei HC, Reid G. Purification and characterization of a surface-binding protein from Lactobacillus fermentum RC-14 that inhibits adhesion of Enterococcus faecalis 1131. FEMS Microbiol Lett. 2000;190:177–80. doi:10.1111/j.1574-6968.2000.tb09282.x. Vandevoorde L, Christiaens H, Verstraete W. Prevalence of coaggregation among chicken lactobacilli. J Appl Bacteriol. 1992;72:214–9. doi:10.1111/j.1365-2672.1992.tb01826.x. Kos B, Suskovic J, Vukovic S, Simpraga M, Frece J, Matosic S. Adhesion and aggregation ability of probiotic strain Lactobacillus acidophilus M92. J Appl Microbiol. 2003;94:981–7. doi:10.1046/j.1365-2672.2003.01915.x. Handley PS, Harty DWS, Wyatt JE, Brown CR, Doran JP, Gibbs ACC. A comparison of the adhesion, coaggregation and cell-surface hydrophobicity properties of fibrillar and fimbriate strains of Streptococcus salivarius. J Gen Microbiol.
1987;133:3207–17. doi:10.1099/00221287-133-11-3207. Marius H, Mareen G, Fabiola W, Rudolf H. Production of microbial biosurfactants: status quo of rhamnolipid and surfactin towards large-scale production. Biotechnol J. 2017;12:1600561. doi:10.1002/biot.201600561. Fontán JLL, Costa J, Ruso JM, Prieto G, Sarmiento F. A nonparametric approach to calculate critical micelle concentrations: the local polynomial regression method. Eur Phys J E Soft Matter. 2004;13:133–40. doi:10.1140/epje/e2004-00050-3. Mohammad V, Banihabib ME, Behbahani SMR. Comparison of the ARMA, ARIMA, and the autoregressive artificial neural network models in forecasting the monthly inflow of Dez dam reservoir. J Hydrol. 2013;476:433–41. doi:10.1016/j.jhydrol.2012.11.017. Valipour M, Sefidkouhi MA, Raeini M. Selecting the best model to estimate potential evapotranspiration with respect to climate change and magnitudes of extreme events. Agric Water Manag. 2017;180(Part A):50–60. doi:10.1016/j.agwat.2016.08.025. Mulligan CN. Environmental applications for biosurfactants. Environ Pollut. 2005;133:183–9. doi:10.1016/j.envpol.2004.06.009. Moldes AB, Paradelo R, Vecino X, Cruz JM, Gudina EJ, Rodrigues LR, Teixeira JA, Domínguez JM, Barral MT. Partial characterization of biosurfactant from Lactobacillus pentosus and comparison with sodium dodecyl sulphate for the bioremediation of hydrocarbon contaminated soil. Biomed Res Int. 2013;2013:961842. doi:10.1155/2013/961842. Portilla-Rivera O, Torrado A, Dominguez JM, Moldes AB. Stability and emulsifying capacity of biosurfactants obtained from lignocellulosic sources using Lactobacillus pentosus. J Agric Food Chem. 2008;56:8074–80. doi:10.1021/jf801428x. Madhu N, Prapulla SG. Evaluation and functional characterization of a biosurfactant produced by Lactobacillus plantarum CFR 2194. Appl Biochem Biotechnol. 2014;172:1777–89. doi:10.1007/s12010-013-0649-5. Cornea CP, Roming FI, Sicuia OA, Voaideș C, Zamfir M, Grosu-Tudor SS. Biosurfactant production by Lactobacillus spp. strains isolated from Romanian traditional fermented food products. Rom Biotechnol Lett. 2016;21:2. Pathak KV, Keharia H. Application of extracellular lipopeptide biosurfactant produced by endophytic Bacillus subtilis K1 isolated from aerial roots of banyan (Ficus benghalensis) in microbially enhanced oil recovery (MEOR). 3 Biotech. 2014;4:41–8. doi:10.1007/s13205-013-0119-3. Silverstein RM, Webster FX, Kiemle DJ, Bryce DL. Spectrometric identification of organic compounds. 8th ed. New York: Wiley; 2014. Sharma D, Saharan BS, Chauhan N, Bansal A, Procha S. Production and structural characterization of Lactobacillus helveticus derived biosurfactant. Sci World J. 2014;2014:493548. doi:10.1155/2014/493548. Brzozowski B, Bednarski W, Golek P. Physicochemical properties of Lactobacillus biosurfactants. Food Technol Biotechnol. 2011;49:177–86. Phetrong K, Aran H, Maneerat S. Production and characterization of bioemulsifier from a marine bacterium, Acinetobacter calcoaceticus subsp. anitratus SM7. Songkl J Sci Technol. 2008;30:297–305. Sambanthamoorthy K, Feng X, Patel R, Patel S, Paranavitana C. Antimicrobial and antibiofilm potential of biosurfactants isolated from lactobacilli against multi-drug-resistant pathogens. BMC Microbiol. 2014;14:197. doi:10.1186/1471-2180-14-197. Augustin M, Hippolyte MT, Raïssa KR. Antibacterial activity of Lactobacillus' biosurfactants against Pseudomonas spp. isolated from fresh beef. Novus Int J Biotechnol Biosci. 2013;2:7–22. Abdulla AA, Abed TA, Saeed AM. 
Adhesion, autoaggregation and hydrophobicity of six Lactobacillus strains. Br Microbiol Res J. 2014;4:381–91. doi:10.9734/BMRJ/2014/6462. Golowczyc MA, Mobili P, Abraham AG, Garrote GL, De Antoni GL. Protective action of Lactobacillus kefir carrying S-layer against Salmonella enterica serovar Enteritidis. Int J Food Microbiol. 2007;118:264–73. doi:10.1016/j.ijfoodmicro.2007.07.042. Jankovic T, Frece J, Abram M, Gobin I. Aggregation ability of potential probiotic Lactobacillus plantarum strains. Int J Sanit Eng Res. 2012;6:19–24. Gomaa Z. Antimicrobial and anti-adhesive properties of biosurfactant produced by lactobacilli isolates, biofilm formation and aggregation ability. J Gen Appl Microbiol. 2013;59:425–36. doi:10.2323/jgam.59.425. Pan X, Wu T, Zhang L, Song Z, Tang H, Zhao Z. In vitro evaluation on adherence and antimicrobial properties of a candidate probiotic Clostridium butyricum CB2 for farmed fish. J Appl Microbiol. 2008;105:1623–9. doi:10.1111/j.1365-2672.2008.03885. Golowczyc MA, Mobili P, Garrote GL, Serrabell MA, Abraham AG, De Antoni GL. Interaction between Lactobacillus kefir and Saccharomyces lipolytica isolated from kefir grains. Evidence for lectin-like activity of bacterial surface proteins. J Dairy Res. 2009;76:111–6. doi:10.1017/S0022029908003749. Duncan-Hewitt WC. Nature of the hydrophobic effect. In: Doyle RJ, Rosenberg M, editors. Microbial cell surface hydrophobicity. Washington, D.C.: ASM Publications; 1990. p. 39–73. IMC carried out the experimental work, analyzed and interpreted the data and wrote the manuscript. GST isolated the lactobacilli strains; ALC and VSD collaborated on biosurfactant production and antimicrobial assays; RJA and EPS performed the chemical characterization by FTIR and CG-MS and contributed to data interpretation; RMDN and ASM contributed to data interpretation and critically revised the manuscript; VLS contributed to the conception and coordination of study and writing of the manuscript. All authors read and approved the final manuscript. The material and data supporting their findings can be found in the main paper. All the co-authors approved the publication of this work in Microbial Cell Factories. Ethical aspects The local Research Ethics Committee on Human Experimentation (COEP/UFMG) approved the research project that led to the initial collection of the Lactobacillus strains (Protocol ETIC 062/03). The authors are thankful for the financial support provided by Conselho Nacional de Desenvolvimento Cientifico e Tecnológico (CNPq), Fundação do Amparo à Pesquisa do Estado de Minas Gerais (FAPEMIG) and Comissão de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). Laboratório de Microbiologia Aplicada, Departamento de Microbiologia, Instituto de Ciências Biológicas, Universidade Federal de Minas Gerais, C.P. 486, Belo Horizonte, MG, 31270-901, Brazil I. M. C. Morais , A. L. Cordeiro , G. S. Teixeira , V. S. Domingues , R. M. D. Nardi & V. L. Santos Laboratório de Microbiologia Aplicada, Universidade CEUMA, R. Josué Montello, 01, São Luís, MA, 65075120, Brazil A. S. Monteiro Departamento de Ciências Farmacêuticas, Faculdade de Farmácia, Universidade Federal de Minas Gerais, C.P. 486, Belo Horizonte, MG, 31270-901, Brazil R. J. Alves Laboratório de Química de Produtos Naturais, Centro de Pesquisas René Rachou, Fundação Oswaldo Cruz, Av. Augusto de Lima, 1715, Belo Horizonte, MG, 30190-002, Brazil E. P. Siqueira Search for I. M. C. Morais in: Search for A. L. Cordeiro in: Search for G. S. Teixeira in: Search for V. S. 
Domingues in: Search for R. M. D. Nardi in: Search for A. S. Monteiro in: Search for R. J. Alves in: Search for E. P. Siqueira in: Search for V. L. Santos in: Correspondence to V. L. Santos. Morais, I.M.C., Cordeiro, A.L., Teixeira, G.S. et al. Biological and physicochemical properties of biosurfactants produced by Lactobacillus jensenii P6A and Lactobacillus gasseri P65 . Microb Cell Fact 16, 155 (2017) doi:10.1186/s12934-017-0769-7 Biosurfactants Antibiofilm Lactobacillus jensenii
CommonCrawl
\begin{document} \RUNTITLE{Optimizing for Diversity} \TITLE{Optimizing for Strategy Diversity in the Design of Video Games} \ARTICLEAUTHORS{ \AUTHOR{Oussama Hanguir} \AFF{Industrial Engineering and Operations Research, Columbia University, New York, NY 10027, \EMAIL{[email protected]}} \AUTHOR{Will Ma} \AFF{Graduate School of Business, Columbia University, New York, NY 10027, \EMAIL{[email protected]}} \AUTHOR{Christopher Thomas Ryan} \AFF{UBC Sauder School of Business, University of British Columbia, Vancouver, BC, Canada, V6T 1Z2, \EMAIL{[email protected]}} } \ABSTRACT{ We consider the problem of designing video games (modeled here by choosing the structure of a linear program solved by players) so that players with different resources play diverse strategies. In particular, game designers hope to avoid scenarios where players use the same ``weapons'' or ``tactics'' even as they progress through the game. We model this design question as a choice over the constraint matrix $A$ and cost vector $c$ that seeks to maximize the number of possible \emph{supports} of unique optimal solutions (what we call \emph{loadouts}) of Linear Programs $\max\{c^\top x \mid Ax \le b, x \ge 0\}$ with nonnegative data considered over all resource vectors $b$. We provide an upper bound on the optimal number of loadouts and present a family of constructions that have an asymptotically optimal number of loadouts. The upper bound is based on a connection between our problem and the study of triangulations of point sets arising from polyhedral combinatorics, and specifically the combinatorics of the cyclic polytope. Our asymptotically optimal construction also draws inspiration from the properties of the cyclic polytope. Our construction provides practical guidance to game designers seeking to offer a diversity of play for their players. } \KEYWORDS{video game design, linear programming, triangulations, cyclic polytope} \maketitle \section{Introduction}\label{s:introduction} In this paper, we formulate the problem of \textit{designing} linear programs that allow for \textit{diversity} in their optimal solutions. This setting is motivated by video games, in particular, the design of competitive games where players optimize their strategies to improve their in-game status. For such games, a desideratum for game designers is for optimizing players to play different strategies at different stages of the game. Let us first informally define the problem that we study. A more careful definition is given in the following subsections. We interpret the player's problem as solving a Linear Program of the form $\max\{ c^\top x \mid Ax \leq b, x \ge0\}$. Players at different stages of the game have different resource vectors $b$. The columns of $A$ correspond to the tools that the player can use in the game.
We call a subset of these tools (represented by subsets of the columns of $A$) a \emph{loadout}\footnote{Literally, a loadout means the equipment carried into battle by a soldier.} if it corresponds to the support\footnote{Recall that the support of a vector $x = (x_1, x_2, \cdots, x_n)$ is the indices of its nonzero components.} of an optimal solution $x^*$ to the linear program $\max\{ c^\top x \mid Ax \leq b, x \ge0\}$ for some resource vector $b$.\footnote{In fact, we require $x^*$ to be the unique optimal solution of this linear program, for reasons that will become clear later.} The support of a vector corresponds to a selection of the available tools, forming a strategy for how the player approaches the game given available resources. We assume that the game designer is able to choose $A$ and $c$. We refer to this choice as the \emph{design} of the game. We measure the \emph{diversity} of a design as the number of possible loadouts that arise as the resource vector $b$ changes. The game designer's problem is to find a design that maximizes diversity. A solution to this problem is then able to meet the game designer's goal of finding a design where optimizing players employ as many different loadouts as possible as the game evolves and player resources change. In the next subsection, we also provide a concrete game example to ground some of these concepts. \subsection{Video game context and formal problem definition} Video games are both the largest and fastest-growing segment of the entertainment industry, to the extent that in 2020, the video game industry surpassed movies and North American sports combined in revenue \citep{witkowski2021videogames}. In gaming, as in related domains, key design aspects that affect player enjoyment include story, pacing, challenge level, and game mechanics. In this paper, we focus on game mechanics, carefully designing the structure of the set of tools available to the player as the key intervention to drive enjoyment. For many video games, engaged players aim to pick the best strategy available to conquer the challenges they face. Often the key strategic decision is to select the best set of tools (often, weapons) to use to meet challenges. Players have limited in-game resources and face constraints (size of weapons, a limit on the number of weapons of some type, etc.) when selecting their strategy. These decisions, therefore, can be modeled as constrained optimization problems. To make this discussion concrete, consider the following fictional game. In RobotWar, we control a sci-fi robot to drive into battle against other robots. As we play the game, we accrue and manage experience points (XP). Before each battle, we pick the combination of weapons and equipment our robot takes into battle. There are different types of weapons including short-range (sabers, shotguns, etc.) and long-range (sniper cannon, explosive missiles, etc.). Weapons can be bought with experience points, and there are capacity constraints on the sizes of the weapons carried (including ammunition) that are proportional to the amount of XP invested in them. Every weapon has an initial damage (per usage) that it can deal to the opponent, but we can increase the damage of a weapon by investing additional XP. By winning more battles, we get more XP and increase the robot's capacity to hold more weapons. Before each battle, a player picks the weapons that will be used, as well as how much XP is invested into each weapon.
Assume that we are at a given stage of the game with a fixed amount of XP and fixed capacities. We want our robot to have the highest possible total damage value. We can compute the combination of weapons that maximizes the total damage by solving a linear program with decision variables representing how much XP to invest in each weapon. The set of weapons that the player invests a positive amount of XP into is called the player's loadout. In the context of RobotWar, the key strategic choice is selecting a loadout. Note that a loadout refers to the combination of weapons and not the amount of XP invested in each weapon. If the same combination of weapons is adopted but with different allocations of XP, then in the two situations we have used the same loadout. The previous example shows how a video game can be designed so that the determination of an optimal loadout can be done through optimization. In the next subsection, we present real-life video game settings where players, in fact, use optimization to form their strategies. In light of the loadout decisions of players, game designers may ponder the following question: how to set the constraints of the game and the attributes of the tools so that the number of optimal loadouts across all possible resource states is maximized? In other words, the game designer may want to set the game up in such a way that as the resources of the players evolve, the optimal loadouts change. A desire to maximize the number of optimal loadouts is motivated by the fact that video games can get boring when they are too repetitive with little to no variation \citep{schoenau2011player}. It is considered poor game design if a player can simply ``spam'' (i.e., repeatedly use) one strategy to progress easily through a game without needing to adjust their approach.\footnote{\url{https://wikivisually.com/wiki/Spam_\%28video_games\%29}} In our study, we assume that our game design question can be captured by a linear program. This is justified as follows. Consider a game that has $n$ available tools. The player has a decision variable $x_i$ for each tool $i$, which represents ``how much'' of the tool the player employs. We put ``how much'' in quotations because there are multiple interpretations of what this might mean. In the RobotWar example, $x_i$ denotes the amount of XP invested in weapon $i$. The more XP invested, the more the weapon can be used. This can be interpreted as a measure of ``ammunition'' or ``durability''. Let $c \in \mathbb{R}_{\geq 0}^n$ denote the vector of benefits that accrue from using the various tools. That is, one unit of tool $i$ yields a per-unit benefit of $c_i$. In the RobotWar example, $c_i$ represents the damage dealt to the opponent by weapon $i$ per unit of XP invested in weapon $i$. The player must obey a set of $m$ linear constraints when selecting tools. These constraints are captured by a matrix $A \in \mathbb{R}_{\geq 0}^{m \times n}.$ These constraints include considerations like a limit on the number of coins or experience points that the player has, a limit on the capacity (weight, energy, etc.) of the tools that can be carried, etc. A vector $b \in \mathbb{R}_{\geq 0}^m$ represents the available resources at the disposal of the player and forms the right-hand sides of the set of constraints. In the RobotWar example, there are natural constraints corresponding to total available XP and capacities for the various weapons.
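To make the player's problem concrete before stating it formally, the following short script solves a toy RobotWar-style instance and reads off the resulting loadout as the support of the optimal solution. It is a minimal sketch rather than part of the formal development: the damage values, resource requirements, and resource levels are illustrative placeholders (not data from any actual game), and we assume the SciPy library is available. The sketch also does not check uniqueness of the optimizer, which the formal definition of a loadout below requires.
\begin{verbatim}
# Sketch of the player's problem: maximize total damage c^T x subject to
# resource constraints A x <= b and x >= 0, then report the support.
# All numbers below are illustrative placeholders.
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 2.0, 4.0, 1.0])    # damage per unit of XP in each weapon
A = np.array([[1.0, 1.0, 1.0, 1.0],   # XP budget used per unit invested
              [2.0, 1.0, 3.0, 0.5]])  # carrying capacity used per unit invested
b = np.array([10.0, 12.0])            # the player's current resources

# linprog minimizes, so we negate c to maximize.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * len(c), method="highs")
x = res.x
loadout = {j for j in range(len(c)) if x[j] > 1e-9}  # support of the optimum
print("optimal XP allocation:", np.round(x, 3))
print("loadout:", loadout)
\end{verbatim}
As the resource vector $b$ changes over the course of the game, the support of the optimal solution may change; counting how many distinct supports can arise is exactly the design question formalized next.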
For a fixed game design $(A,c)$ and resource vector $b$, players solve the linear program \begin{equation}\label{eqn:define-LP} LP(A,c,b): \qquad \max\{ c^\top x \mid Ax \leq b, x \ge 0\}, \end{equation} where $c$, $A$, and $b$ all have nonnegative data. If $x$ is an optimal solution of $LP(A,c,b)$, we define the support of $x$ as $\supp(x) \triangleq \{i \in \{1,\ldots,n\} \mid x_i > 0 \}.$ If $x$ is the \emph{unique} optimal solution of $LP(A,c,b)$, then we call $\supp(x)$ an \emph{optimal loadout} (or simply, a loadout) of design $(A,c)$. For fixed $n$ and $m$, the \textit{loadout maximization problem} is to choose the vector $c$ and matrix $A$ that maximize the total number of loadouts of the design $(A,c)$. That is, the goal is to design benefits for each tool (the vector $c$) and limitations on investing in tools (the matrix $A$) so that the linear programs $LP(A,c,b)$ have as many possible supports of unique optimal solutions as possible, as $b$ varies in $\mathbb{R}^m_{\geq 0}$. Note that the loadout maximization problem is a natural question to ask about Linear Programs in general. Indeed, much of our analysis will treat the problem in general without carrying too much of the video game interpretation. However, it is important to stress the interpretation of this problem grounded in the video game context. We highlight three key elements of this interpretation: \begin{itemize} \item We are interested in loadouts, corresponding to supports of optimal solutions, but not in the values of the solutions themselves. This is motivated by our desire to maximize diversity; two solutions that have the same support and use the same set of tools but in different proportions are considered variations of the same strategy and not a different strategy. \item We restrict attention to \emph{unique} optimal loadouts. If we considered maximizing the number of supports of all (even non-unique) optimal solutions, this would lead to degenerate cases that should be excluded. As a trivial example, in any design with $c = 0$, every feasible solution is optimal as all solutions give rise to the same objective value of $0$. This is not a desirable design in practice because there is no strategy involved for the player in designing their loadout. The fact that we consider unique optimal solutions can also be motivated by the observation that it is helpful for a game designer to be able to predict the best strategy of a player given their resources. This can be especially useful in online role-playing video games and survival games where players fight computer-controlled enemies and so knowing a player's optimal strategy helps balance the difficulty of the environment. \item We only count the supports of unique \emph{optimal} solutions and ignore the supports of \emph{non-optimal} solutions. A reason for this has already been mentioned: optimal behavior is easy to predict, whereas ``non-optimal behavior'' is difficult to anticipate. Furthermore, in many modern video games that have a ``freemium'' revenue model, most of the revenue comes from dedicated ``hardcore'' players, who are more likely to optimize in order to continue their in-game progress. By restricting attention to optimal player behavior in the design problem, the game designer makes decisions for its highest tier of revenue-generating players. \end{itemize} \subsection{Some illustrative examples} We now turn to a consideration of some real games where the loadout maximization problem has relevance in practice.
Although these examples may not perfectly fit the linear model that we study here in every aspect, they nonetheless share the same essential design and speak to the tradeoffs of interest. Consider, for example, the MOBA (Multiplayer Online Battle Arena) genre that has become increasingly popular in past years. In the MOBA game \emph{SMITE}, for example, players take control of deities from numerous pantheons across the world. Teams of deities work together to destroy objects in enemy territory, while defending their own territory against an enemy team trying to do the same. The main source of advantage is to purchase tools that enhance the attributes of the deities. These attributes include power, attack speed, lifesteal (percentage of damage dealt to enemy deities that is returned back to the player as health), and critical strike chance (the chance of an attack dealing double the damage that it would normally do). In this game, one can (and some players actually do \citep{smite}) use linear programming to model the tools to buy and decide which ones will give the best advantage, while at the same time keeping costs down. Game designers can, accordingly, anticipate the decision-making procedures of players and select the various attributes of the tools and their prices to promote diversity of gameplay. Another example of using linear programming to compute an optimal strategy is in the game \emph{Clash of Clans}. In this game, players fortify their base with buildings to obtain resources, create troops, and defend against attacks. Players put together raiding parties to attack other bases. Linear programming can be used to determine the best combination of characters in a raiding party, with constraints on the training cost of warriors (measured in elixir, a resource that the player mines and/or plunders) as well as space in the army camps to house units \citep{clashofclans}. While these examples show how maximizing behavior among players can be effectively modeled as linear programs, there is also evidence that game designers are interested in maximizing diversity using optimization tools. For example, veteran game designer Paul Tozour presents the problem of diversity maximization in a series of articles on optimization and game design on Gamasutra, a video game development website \citep{supertank}. In an article in that series, Tozour describes the fictional game of ``SuperTank'' (similar to our fictional RobotWar) to show how optimization models can be used to design the attributes of available weapons that lead to varied styles of play. Tozour makes a strong case throughout his series of articles for using optimization tools, stating that game designers \begin{quote} \dots might be able to use automated optimization tools to search through the possible answers to find the one that best meets their criteria, without having to play through the game thousands of times. \end{quote} \subsection{Our contributions} We initiate the study of the loadout maximization problem. Our first contribution in the paper is to establish a link between the loadout maximization problem and the theory of polyhedral subdivisions and triangulations. In particular, for a fixed design $(A,c)$, the theory of triangulations offers a nice decomposition (or triangulation) of the cone generated by the columns of the constraint matrix $A$. This decomposition depends on the objective vector $c$. We show that for a fixed design $(A,c)$, the loadouts can be seen as elements of this decomposition. 
This allows us to use a set of powerful tools from the theory of triangulations to prove structural results on the loadouts of a design. Our second contribution is to show a non-trivial upper bound on the number of loadouts of any design. The upper bound involves an interesting connection to the faces of the so-called cyclic polytope, a compelling object central to the theory of polyhedral combinatorics. We also show that this upper bound holds when the constraints of the linear program are equality constraints. The third contribution of this paper is to present a construction of a design $(A,c)$ with a number of loadouts that asymptotically matches the above upper bound. Furthermore, for cases with few constraints, we present optimal constructions that \textit{exactly} match the upper bound. Our constructions provide practical insights that game designers can use to balance the tools available in the game, with the hope of increasing strategy diversity. \vskip 10pt \noindent\textbf{Outline of sections.} In \Cref{s:formulation}, we cleanly state all of our results, sketch their proofs, and illustrate their intuition on small examples, without formally defining all terminology and definitions related to linear programming and triangulations. These formal definitions can be later found in \Cref{sec:preliminaries}. Our upper bound results are derived in \Cref{section:theorem1}, while our asymptotically optimal constructions are presented in \Cref{sec:thm2}. \subsection{Related work}\label{sec:relatedwork} \noindent \textbf{Video game research in the management sciences.} The increasing popularity of video games and the growth of the global video game market has led to a considerable surge in the study of video game related problems in operations management, information systems, and marketing. There have been several studies on advertising in video games. \cite{turner2011or} study the in-game ad-scheduling problem. \cite{guo2019economic} and \cite{sheng2020incentivized} study the structure of ``rewarded'' advertising where players are incentivized to watch ads for in-game rewards. Generally, these rewards come in the form of virtual currencies whose value is fixed by the game designer. \cite{guo2019selling} study the impact of selling virtual currency on players’ gameplay behavior, game provider’s strategies, and social welfare. Another significant research direction concentrates on studying ``loot boxes'' in video games, where a loot box is a random allocation of virtual items whose contents are not revealed until after purchase, and that is sold for real or in-game money. \cite{chen2020loot} study the design and pricing of loot boxes, while \cite{ryan2020selling} study the pricing and deployment of enhancements that increase the player's chance of completing the game. \cite{chen2017eomm} and \cite{huang2019level} study the problem of in-game matchmaking to maximize a player's engagement in a video game. \cite{jiao2020opaque} investigate whether the seller should disclose an opponent’s skill level when selling in-game items that can increase the win rate. Other streams of works focused on how video game data can be used to study player behavior. \cite{nevskaya2019should} empirically explore the impact of different in-game policies that can limit excessive engagement of players in games, while \cite{kwon2016excessive} uses individual-level behavioral data to study the evolution of player engagement post-purchase. \vskip 10pt \noindent \textbf{Optimization Theory and Parametric Programming}. 
Our work is closely related to parametric linear programming, which is the study of how properties of optimal solutions depend on parameterizations of the data. The study of parametric linear programming dates back to the work of \cite{saaty1954parametric}, \cite{mills1956marginal}, \cite{williams1963marginal}, and \cite{walkup1969lifting} in the 1950s and 1960s. In parametric programming, the objective is to understand the dependence of optimal solutions on one or more parameters; that is, on the entries of $A$, $b$, and $c$. Our work is novel in the sense that the objective is to understand the structure of the supports of optimal solutions by fixing $A$ and $c$ and having $b$ vary in $\mathbb{R}^m_{\geq 0}$. To the best of our knowledge, this question has not previously been studied in the literature. \section{Statement of the main results }\label{s:formulation} In this section, we state our main results. To make these statements precise, we require some preliminary definitions. Let $[k]$ denote the set $\{1,\ldots,k\}$ for any positive integer $k$. Using this notation, we can define the support of $x \in \mathbb{R}^n_{\geq 0}$ as $\supp(x) = \{j \in [n] \mid x_j > 0 \}$. For a matrix $A\in\mathbb{R}^{m\times n}_{\ge0}$, the $(i,j)$th entry is denoted $a_{ij}$ for $i \in [m]$ and $j \in [n]$, the $j$th column is denoted $A_j$ for $j \in [n]$, and the $i$th row is denoted $a_i$ (where $a_i$ is a column vector) for $i \in [m]$. For a column vector $y \in \mathbb{R}^m$, $y^\top A_j$ denotes the scalar product of $y$ and column $A_j$, i.e., $y^\top A_j = \sum_{i=1}^m y_i a_{i,j}$. Recall the definition of the linear program $LP(A,c,b)$ in \cref{eqn:define-LP}. As mentioned in the introduction, we are interested in the supports of unique optimal solutions of the design $(A,c)$. For simplicity, we simply call these the \emph{loadouts} of design $(A,c)$; that is, $L \subseteq [n]$ is a loadout of design $(A,c)$ if there exists a nonnegative resource vector $b \in \mathbb{R}^m_{\geq 0}$ such that $LP(A,c,b)$ has a unique optimal solution $x^*$ with $\supp(x^*) = L$. We say that loadout $L$ is \emph{supported by} resource vector $b$. If $|L| = k$ then we say $L$ is a $k$-loadout. Given a design $(A,c)$ and an integer $k \in [m]$, let $\mathcal{L}^k(A,c)$ denote the set of all $k$-loadouts of design $(A,c)$. The set of all loadouts of any size is $\mathcal{L}(A,c) \triangleq \cup_{k=1}^n \mathcal{L}^k(A,c)$. Using this notation, we can restate the loadout optimization problem. Given dimensions $n$ and $m$ and integer $k \le n$, the \emph{$k$-loadout optimization problem} is \begin{equation}\label{eq:k-loadout-problem} \max \{ |\mathcal L^k(A,c)| \mid A \in \mathbb{R}^{m \times n}, c \in \mathbb{R}^n, A \text{ and } c \text{ are nonnegative} \}. \tag{$\text{L}_k$} \end{equation} We can assume without loss of generality that the linear programs $LP(A,c,b)$ are bounded and thus possess an optimal solution, since an unbounded program has no optimal solution and, therefore, contributes no loadouts. Given that a loadout corresponds to the support of a unique solution of a linear program, any optimal solution with support size greater than $m$ cannot be unique. Therefore, the number of $k$-loadouts when $k > m$ is always equal to zero. This leads us to consider the optimization problems \cref{eq:k-loadout-problem} only for $k \in \{1, \ldots, \min(m,n)\}.$ For convenience, we will avoid the trivial case of $k=1$ where the optimal number of loadouts is $\min(m,n)$. A final case we eliminate immediately is when $\min(m,n) =n$, i.e. $m\ge n$.
In this case, a trivial design is optimal. By setting $A = I_n$ to be the identity matrix of size $n$, and $c = (1,\ldots,1)$, we ensure that for $k \in [1,n]$, every one of the $\binom{n}{k}$ subsets is a loadout (see \cref{lemma:n_leq_m} in the appendix). In summary, we proceed without loss under the assumption that $n>m\ge k\ge2$. \subsection{The Cyclic Polytope}\label{sec:cyclic-polytope-intro} All of our bounds are intimately related to the number of faces on the \textit{cyclic polytope}, which is formally defined in \Cref{sec:preliminaries}. A remarkable aspect of the cyclic polytope is that for $n>m\ge2$, the cyclic polytope $\mathcal{C}(n,m)$ \textit{simultaneously} maximizes the number of $k$-dimensional faces for all $k=0,\ldots,m-1$ among $m$-dimensional polytopes over $n$ vertices, a property known as McMullen's Upper Bound Theorem \citep{mcmullen1970maximum}. The number of $k$-dimensional faces on $\mathcal{C}(n,m)$ is given by the formula \begin{equation*} f_k(\mathcal{C}(n,m)) = \sum\limits_{\ell = 0}^{\lfloor m/2 \rfloor} \binom{\ell}{m-k -1} \binom{n - m + \ell -1}{\ell} + \sum\limits_{\ell = \lfloor m/2 \rfloor + 1}^m \binom{\ell}{m-k -1} \binom{n - \ell -1}{m - \ell}. \end{equation*} When $k=m-1$, this simplifies\footnote{This is easily seen through the ``hockey stick'' identity on Pascal's triangle.} to \begin{align*} f_{m-1}(\mathcal{C}(n,m)) &=\binom{n - \lceil m/2\rceil}{\lfloor m/2\rfloor} + \binom{n - \lfloor m/2\rfloor -1}{\lceil m/2\rceil -1}. \end{align*} As an illustration of these formulas, suppose $m=3$. The formulas evaluate to \begin{align} f_2(\mathcal{C}(n,3)) &=\binom{n-2}{1}+\binom{n-2}{1} && =&2n-4 \label{eqn:f2} \\ f_1(\mathcal{C}(n,3)) &=1\binom{n-3}{1}+2\binom{n-3}{1}+3\binom{n-4}{0} &&=&3n-6 \label{eqn:f1} \\ f_0(\mathcal{C}(n,3)) &=\binom{2}{2}\binom{n-3}{1}+\binom{3}{2}\binom{n-4}{0} &&=&n. \nonumber \end{align} To check that this is correct, note that $f_0(\mathcal{C}(n,3))$ should be $n$ by definition. Meanwhile, we remark that the cyclic polytope is a \textit{simplicial} polytope, i.e.\ all of its $(m-1)$-dimensional faces are the convex hull of exactly $m$ points. When $m=3$, this translates to all of its facets being triangles. Therefore, $3f_2(\mathcal{C}(n,3))=2f_1(\mathcal{C}(n,3))$, since every edge is contained in exactly 2 triangles and every triangle contains exactly 3 edges. In conjunction with Euler's immortal formula $ f_2(\mathcal{C}(n,3)) + f_0 (\mathcal{C}(n,3)) = f_1(\mathcal{C}(n,3)) + 2, $ one can \textit{uniquely} express $f_2(\mathcal{C}(n,3)),f_1(\mathcal{C}(n,3))$ as a function of $n$ for simplicial polytopes in 3 dimensions, which can indeed be checked to equal the respective expressions in~\cref{eqn:f2,eqn:f1} above. In higher dimensions, simplicial polytopes can have different numbers of faces for each dimension, but they can never surpass the number on the cyclic polytope for that dimension. \subsection{Statements of Main Results} \begin{theorem}\label{thm:upperbound} Fix positive integers $n,m,k$ with $n>m\ge k\ge2$. Then the number of $k$-loadouts for any design $(A,c)$ with $A\in\mathbb{R}^{m\times n}$ and $c\in\mathbb{R}^n$ satisfies \begin{align} \label{eqn:introUB} |\mathcal{L}^k(A,c)| &\le f_{k-1}(\mathcal{C}(n+1,m))-\binom{m}{k-1}. \end{align} \end{theorem} We note that the trivial upper bound on the number of $k$-loadouts in a design with $n$ tools is $\binom{n}{k}$.
When $m<n$, the RHS of~\eqref{eqn:introUB} will always be smaller than this trivial upper bound, which shows that having a limited number of resource types in the game does indeed prevent all subsets of tools from being viable. \begin{theorem}\label{thm:lowerbound} Fix positive integers $n,m,k$ with $n>m\ge k\ge2$. Then we can provide a family of explicit designs $(A,c)$ with $A\in\mathbb{R}_{\ge 0}^{m\times n}$ and $c\in\mathbb{R}_{\ge 0}^n$ that satisfy \begin{align*} |\mathcal{L}^k(A,c)|&\ge\begin{cases} f_{k-1}(\mathcal{C}(n,m)) & \text{if $k<m/2$}\\ f_{k-1}(\mathcal{C}(n,m))/2 & \text{if $k\ge m/2$ and $m$ is odd, or $k=m/2$ and $m$ is even}\\ f_{k-1}(\mathcal{C}(n,m))/4 & \text{if $k> m/2$ and $m$ is even}. \end{cases} \end{align*} \end{theorem} The constructions from \Cref{thm:lowerbound} are always within a 1/4-factor of being optimal asymptotically as $n\to\infty$ because it is known (see \cref{lemma:asymptotic_fk} in Appendix~\ref{appx:asymptotic_fk} for a formal proof) that \begin{align*} \lim_{n\to\infty}\frac{f_{k-1}(\mathcal{C}(n,m))}{f_{k-1}(\mathcal{C}(n+1,m))}=1. \end{align*} \begin{restatable}{theorem}{lowerboundSmallM}\label{thm:lowerboundSmallM} For $n>m=3$, we can provide a family of explicit designs $(A,c)$ with $A\in\mathbb{R}_{\ge 0}^{m\times n}$ and $c\in\mathbb{R}_{\ge 0}^n$ that satisfy $|\mathcal{L}^3(A,c)| \ge 2n-5$ and $|\mathcal{L}^2(A,c)| \ge3n-6$. \end{restatable} \begin{restatable}{theorem}{lowerboundSmallMtwo}\label{thm:lowerboundSmallMtwo} For $n>m=2$, we can provide a family of explicit designs $(A,c)$ with $A\in\mathbb{R}^{m\times n}$ and $c\in\mathbb{R}^n$ that satisfy $|\mathcal{L}^2(A,c)| \ge n-1.$ \end{restatable} The constructions from \Cref{thm:lowerboundSmallM} and \Cref{thm:lowerboundSmallMtwo} are \textit{exactly tight}; it can be checked that they match the upper bound expression from \Cref{thm:upperbound} when evaluated at $m=3$ and $m=2$. The proofs of both theorems are deferred to Appendix \ref{appx:exact_construction}. \vskip 10pt \textbf{Example of construction from \Cref{thm:lowerbound} and intuition.} \cref{table:construction} shows an example of the asymptotically optimal construction for $m = 4$ and $n = 6$.\footnote{The fact that the cost vector is $(1,1,\dots, 1)$ is simply a normalization and can be assumed without loss.} Our construction provides a pattern that game designers can follow to diversify loadouts on a set of tools $1,\ldots,n$, by having two types of constraints. The first type of constraints (rows 1 and 3) gives an advantage to tools with small indices (because these tools have lower costs in rows 1 and 3), while the second type of constraints (rows 2 and 4) gives an advantage to tools with large indices (because these tools have lower costs in rows 2 and 4). This ``tension'' between the two types of constraints ensures that a given combination of tools cannot be optimal for too many resource vectors. This captures the rough intuition that a game with an overpowered tool (meaning one that is more useful than the others but not ``cumbersome'' enough for its use to be limited) leads to uniform strategies among players. In other words, for diversity, all tools should have strengths and weaknesses. Furthermore, among the constraints of the same type, the resource requirements of tools either monotonically increase or monotonically decrease in the tool index along each row. The implication of this is as follows. Consider the tool corresponding to the first column in \cref{table:construction}.
This tool is cheapest with respect to the first and third resources and the most expensive with respect to the second and fourth. Thus, any time the player has an excess of resources 2 and 4, she will certainly use the first tool. However, as soon as one of those resources is constrained, it is tempting to jettison the first tool. This monotone structure heightens the sensitivity of the structure of optimal solutions to changes in the resource vector. Practically speaking, this means that tools that are very powerful in some dimensions must also have significant weaknesses to ensure variety of play. A concrete example of this is the ``rocket launcher'' in first-person shooters, which is typically the most powerful weapon but suffers from having the most expensive ammunition. This can be captured in our \cref{table:construction} construction by scaling any of the columns to have higher reward but also higher cost. \begin{table}[] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline $c$ & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline \multirow{4}{*}{$A$} & 1 & 2 & 3 & 4 & 5 & 6 \\ \cline{2-7} & $M - 1^2$ & $M - 2^2$ & $M - 3^2$ & $M - 4^2$ & $M - 5^2$ & $M - 6^2$ \\ \cline{2-7} & $1^3$ & $2^3$ & $3^3$ & $4^3$ & $5^3$ & $6^3$ \\ \cline{2-7} & $M - 1^4$ & $M - 2^4$ & $M - 3^4$ & $M - 4^4$ & $M - 5^4$ & $M - 6^4$ \\ \hline \end{tabular} \caption{Example of our construction with $m = 4$, $n = 6$, and $M = 6^4 + 1$.} \label{table:construction} \end{center} \end{table} \subsection{Roadmap for proving \Cref{thm:upperbound,thm:lowerbound}}\label{ss:roadmap} This subsection provides a high-level overview of our approach for establishing our upper and lower bounds. All the undefined terminology used here will be defined in more detail in later sections. We prove our upper bound \cref{thm:upperbound} using a sequence of transformations. \begin{enumerate} \item We first introduce the intermediate concept of an \textit{equality loadout} problem that replaces the inequality constraint $Ax \leq b$ with an equality $Ax = b$. A \emph{$k$-equality loadout} is a $k$-loadout in this revised problem. We show that for a fixed design $(A,c)$ and for every dimension $k$, the number of $k$-loadouts is less than the number of $k$-equality loadouts (\Cref{prop:inclusion}). \item This allows us to focus on proving an upper bound on the number of equality loadouts. Here, we can exploit the dual structure of the equality LP and prove that equality loadouts belong to a cell complex $\Delta_c(A)$ that is characterized by $A$ and $c$. Importantly, we show that loadouts correspond to \textit{simplicial} cells in this cell complex (\Cref{lemma:loadout-simplicial}). \item In turn, this allows us to, without loss of generality, assume that $\Delta_c(A)$ is a \textit{triangulation} (as opposed to an arbitrary subdivision), of a cone in the positive orthant of $\mathbb{R}^m$ (\Cref{lemma:refinement}). \item We show that triangulations of cones in the positive orthant of $\mathbb{R}^m$ correspond to triangulations of points in the lower dimension $\mathbb{R}^{m-1}$ (\Cref{lem:pointedcone}). \item Finally, we show that the simplices in this triangulation can be embedded into faces of a simplicial polytope in $\mathbb{R}^m$. Therefore, any upper bound on the number of faces of polytopes in $\mathbb{R}^m$ implies an upper bound on the number of loadouts. This allows us to invoke the ``maximality'' of the cyclic polytope with respect to its number of faces mentioned in \cref{sec:cyclic-polytope-intro}. 
Therefore, the number of faces of the cyclic polytope of dimension $m$ bounds the number of faces of any polytope of dimension $m$, which in turn implies a bound on the number of equality loadouts. We also carefully count the number of extraneous faces added through our transformations, by invoking a bound on the minimal number of faces a polytope can have, which allows us to derive tight bounds for small values of $m$ (\Cref{lemma:embedding}). \end{enumerate} We remark that in the above proof, we needed to first map to triangulations in the lower dimension $\mathbb{R}^{m-1}$ and then later return to polytopes in the original dimension $\mathbb{R}^m$, in order to invoke McMullen's Upper Bound Theorem. However, this required the introduction of a point at the south pole, which means that it is difficult for our upper bound to be tight for small values of $n$. Nonetheless, the introduction of this additional point is insignificant as $n\to\infty$. This is why we can prove asymptotic optimality. To prove our complementing lower bound \Cref{thm:lowerbound}, we first explicitly provide our design $(A,c)$ in \Cref{sec:construction}, which is also inspired by the cyclic polytope. Compared to the cyclic polytope, every even row of the matrix $A$ has been ``flipped'' (as previously observed in \Cref{table:construction}), for reasons that will become apparent in our proof, which we now outline. \begin{enumerate} \item First, we focus on the dual program of $LP(A,c,b)$ and present a sufficient condition (\Cref{def:inequalitycell}) for loadouts in terms of dual variables (\Cref{lem:inequalitycell}). \item We show that by taking hyperplanes corresponding to the facets of the cyclic polytope in dimension $m$, one can attempt to construct dual variables that satisfy the sufficient condition (\Cref{lem:hyperplaneequation}). Our aforementioned ``flipping'' of the even rows in $A$ is crucial to this construction of the dual variables. We show that as long as the facet of the cyclic polytope is of the ``odd''\footnote{To be more precise, we require odd parity when $m$ is even, and even parity when $m$ is odd. What we mean by the parity of a facet will be made clear later. For brevity, we will focus the exposition here on the case where $m$ is even.} parity, the constructed dual variables will indeed be sufficient (\Cref{lemma:remaining_conditions}), and hence such a facet and all of the faces contained within it correspond to loadouts. \item Therefore, to count the number of $k$-loadouts, we need to count the number of $(k-1)$-dimensional faces on a cyclic polytope in dimension $m$ that are contained within at least one odd facet. To the best of our knowledge, this is an unsolved problem in the literature. Nonetheless, using Gale's evenness criterion we can map this to a purely combinatorial problem on binary strings (\cref{lemma:face_array} and \cref{lemma:facets_intersection}). Through some combinatorial bijections, we show that at worst a factor of 4 is lost when one adds the requirement that the $(k-1)$-dimensional face must be contained within at least one odd facet, with the factor improving to 2 if $m$ is odd (\cref{cor:m_odd}), and improving to 1 if $k$ is small (\cref{lemma:small_k}).\footnote{We should note that generally, a cyclic polytope does not have an equal number of odd and even facets. Therefore, one should not expect this factor to always be 2.} These arguments form the cases in Theorem~\ref{thm:lowerbound}.
\end{enumerate} To summarize, both our upper and lower bounds employ the cyclic polytope, but through different transformations---projecting down to $\mathbb{R}^{m-1}$ and then lifting back up for the upper bound, and ``flipping'' even rows for the lower bound. \section{Preliminaries}\label{sec:preliminaries} We present terminology we use in the proofs of both \cref{thm:upperbound,thm:lowerbound}. Additional terminology needed in the proof of only one of these results is found in the relevant sections. A \emph{$d$-simplex} is a $d$-dimensional polytope that is the convex hull of $d +1$ affinely independent points. For instance, a 0-simplex is a point, a 1-simplex is a line segment, and a 2-simplex is a triangle. For a matrix $A = (A_1,\ldots, A_n)$ of rank $m$, let $\cone(A) = \cone(\{A_1, \ldots, A_n\})$ represent the closed convex polyhedral cone $\{Ax \mid x \in \mathbb{R}^{n}_+\}$. We use the notation $\cone(C)$ to denote the cone generated by the columns indexed by $C \subseteq [n]$. If $C \subseteq [n]$ is a subset of indices, the \textit{relative interior} of $C$ is the relatively open (i.e., open in its affine hull) convex set \[ \relint_A(C) \triangleq \big\{ \sum\limits_{j\in C} \lambda_j A_j \mid \lambda_j > 0 \mbox{ for all } j \in C, \mbox{ and } \sum\limits_{j\in C} \lambda_j = 1 \big\}. \] A subset $F$ of a polytope $P$ is a \textit{face} if there exists $\alpha \in \mathbb{R}^n$ and $\beta \in \mathbb{R}$ such that $\alpha^\top x + \beta \leq 0$ for all $x \in P$ and $F = \{x \in P \mid \alpha^\top x + \beta = 0\}$. If $\dim(F) = k$ then $F$ is called a \textit{$k$-dimensional face} or \textit{$k$-face}. The faces of dimensions 0, 1, and $\dim(P) - 1$ are called vertices, edges, and facets, respectively. Furthermore, we say that $F$ is a face of $C$, where $F, C \subseteq [n]$, when $\cone(F)$ is a face of $\cone(C)$. We define a polyhedral subdivision of $\cone(A)$ as follows. \begin{definition}\label{def:cone_subdivision} Let $A = (A_1,\ldots,A_n)$ be a matrix of rank $m$. A collection $\mathscr{S}$ of subsets of $[n]$ is a polyhedral subdivision of $\cone(A)$ if it satisfies the following conditions: \begin{itemize} \item[(CP)] If $C \in \mathscr{S}$ and $F$ is a face of $C$, then $F \in \mathscr{S}$. (Closure Property) \item[(UP)] $\cone(\{1,\ldots,n\}) \subset \bigcup\limits_{C \in \mathscr{S}} \cone(C)$. (Union Property) \item[(IP)] If $C, C' \in \mathscr{S}$ with $C \neq C'$, then $\relint_A(C) \cap \relint_A(C') = \emptyset$. (Intersection Property) \end{itemize} \end{definition} If $\{j_1, \ldots,j_k\}$ belongs to a subdivision of $\cone(A)$, then the set of indices $\{j_1, \ldots,j_k\}$ is called a \textit{cell} of the subdivision, and if the cone is of dimension $k$, it is called a $k$-cell. We note that a polyhedral cone subdivision is completely specified by listing its maximal cells. Next, we define a special subdivision of $\cone(A)$ as a function of the cost vector $c$. The cells of this subdivision map to the loadouts of the design $(A,c)$. For $A\in\mathbb{R}^{m\times n}_{\ge0}$ and $c\in\mathbb{R}^n_{\ge0}$, we define the polyhedral subdivision $\Delta_c(A)$ of $\cone(A)$ as a family of subsets of $\{1,\ldots, n\}$ such that $C \in \Delta_c(A)$ if and only if there exists a column vector $y\in \mathbb{R}^m$ such that $y^\top A_j = c_j$ if $j \in C$ and $y^\top A_j > c_j$ if $j \in \left\{1,\dots, n\right\} \setminus C$. In such a case, we say $C$ is a cell of $\Delta_c(A)$ and that $\Delta_c(A)$ is a \textit{cell complex}.
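The definition of $\Delta_c(A)$ can be checked computationally: deciding whether a given subset $C \subseteq [n]$ is a cell amounts to a small linear program in the dual variable $y$. The sketch below is not part of the formal development; it assumes the SciPy library is available and uses a numerical tolerance. It maximizes a slack $t$ subject to $y^\top A_j = c_j$ for $j \in C$ and $y^\top A_j \ge c_j + t$ for $j \notin C$; the subset $C$ is a cell exactly when the optimal slack is strictly positive. It can be used, for instance, to verify the cell listings in \cref{ex:lifting-example} below.
\begin{verbatim}
# Sketch: test whether C (a set of 0-based column indices) is a cell of
# Delta_c(A), directly from the definition.  We look for y with
# y^T A_j = c_j on C and y^T A_j > c_j off C by maximizing a slack t.
import numpy as np
from scipy.optimize import linprog

def is_cell(A, c, C, tol=1e-8):
    m, n = A.shape
    inside = sorted(C)
    outside = [j for j in range(n) if j not in set(C)]
    obj = np.zeros(m + 1)          # variables (y_1, ..., y_m, t)
    obj[-1] = -1.0                 # maximize t  <=>  minimize -t
    A_eq = np.array([np.append(A[:, j], 0.0) for j in inside]) if inside else None
    b_eq = np.array([c[j] for j in inside]) if inside else None
    A_ub = np.array([np.append(-A[:, j], 1.0) for j in outside]) if outside else None
    b_ub = np.array([-c[j] for j in outside]) if outside else None
    bounds = [(None, None)] * m + [(None, 1.0)]   # y free, slack capped at 1
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.status == 0 and res.x[-1] > tol
\end{verbatim}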
A cell $C \in \Delta_c(A)$ is simplicial if the column vectors $(A_j)_{j \in C}$ are linearly independent. If all the cells of $\Delta_c(A)$ are simplicial, then we say $\Delta_c(A)$ is a \emph{triangulation}. The maximum size of a simplicial cell is $m$. The next result shows that $\Delta_c(A)$ is indeed a polyhedral subdivision of $\cone(A)$. The proof is deferred to Appendix~\ref{appendixA}. \begin{prop}\label{lemma:3properties} $\Delta_c(A)$ is a polyhedral subdivision of $\cone(A)$. \end{prop} Intuitively, we can think of the subdivision $\Delta_c(A)$ as follows: take the cost vector $c$, and use it to lift the columns of $A$ to $\mathbb{R}^{m+1}$ and then look at the projection of the upper faces (those faces you would see if you ``look from above''). This is illustrated in \cref{ex:lifting-example}. \begin{example}\label{ex:lifting-example} Consider the following matrix and cost vectors \begin{equation}\label{eqn:example-data} A = \begin{pmatrix} 1/4 & 1/2 & 3/4\\ 1 & 1 & 1 \end{pmatrix}, \quad c_1 = (2,2.125 + \epsilon,2.25) \quad \mbox{ and } \quad c_2 = (2,2.125,2.25), \end{equation} where $\epsilon > 0$ is a small constant. The corresponding subdivisions of $\cone(A)$ are \[\Delta_{c_1}(A) = \big\{ \{1,2\}, \{2,3\}, \{1\}, \{2\}, \{3\}, \emptyset\big\} \ \ \mbox{ and } \ \ \Delta_{c_2}(A) = \big\{ \{1,2,3\}, \{1\}, \{3\}, \emptyset\big\}.\] \noindent For example, to see that $\{1,2\}$ is a cell of $\Delta_{c_1}(A)$, we consider $y = (0.5 + 4\epsilon, 1.875-\epsilon)$. One can verify that $y^\top A_1 = 2$ and $y^\top A_2 = 2.125+\epsilon$, matching the first two entries of $c_1$, while $y^\top A_3 = 2.25 + 2\epsilon > 2.25$. We observe that for the cost vector $c_1$, the cell $\{1,2\}$ is simplicial, while for $c_2$, the cell $\{1,2,3\}$ is not simplicial. See \cref{fig:lifting} for a visualization of $\Delta_{c_1}(A)$ and $\Delta_{c_2}(A)$. $\triangleleft$ \end{example} \begin{figure} \caption{An illustration of the subdivisions $\Delta_{c_1}(A)$ (left side of the figure) and $\Delta_{c_2}(A)$ (right side of the figure). In the figure, the third dimension (corresponding to the row of $1$'s in the matrix $A$ in \cref{eqn:example-data}) is suppressed since all objects are at the same height of $1$.} \label{fig:lifting} \end{figure} In our definition of simplicial cell, we mentioned that if all the cells in the subdivision $\Delta_c(A)$ are simplicial, then $\Delta_c(A)$ is called a triangulation. More generally, a triangulation of cones is a cone subdivision where all the cells are simplicial (the columns of every cell are linearly independent). We also need the notion of a triangulation of a point configuration, which we define below. \begin{definition} Let $B = \{x_1,\ldots,x_n\}$ be a point configuration (i.e., a finite set $B$ of points) in $\mathbb{R}^d$. A triangulation of $B$ is a collection $\mathcal{T}$ of simplices whose vertices are points in $B$, and whose dimension is the same dimension as the affine hull of $B$, with the following properties: \begin{itemize} \item[(CP)] If $C \in \mathcal{T}$ and $F \subseteq C$, then $F \in \mathcal{T}$. (Closure Property) \item[(UP)] $\conv(B) \subset \bigcup\limits_{C \in \mathcal{T}} \conv(C)$. (Union Property) \item[(IP)] If $C, C' \in \mathcal{T}$ with $C \neq C'$, then $\relint_B(C) \cap \relint_B(C') = \emptyset$.
(Intersection Property) \end{itemize} \end{definition} As described in the roadmap for \cref{thm:upperbound} in \cref{ss:roadmap}, to prove the upper bound on the number of loadouts, we will show that the cells of a triangulation and, therefore, the loadouts can be seen as faces of a higher dimensional polytope and that any upper bound on the number of faces of that polytope implies an upper bound on the number of loadouts. A crucial part of our analysis invokes the ``maximality'' of the cyclic polytope with respect to its number of faces, as described already in \cref{sec:cyclic-polytope-intro}. Now, we present a formal definition of the cyclic polytope as well as the $f$-vector of a polytope, which contains all the information about the number of faces. \begin{definition}[Cyclic Polytope]\label{def:cyclicpolytope} The cyclic polytope $\mathcal{C}(n,d)$ may be defined as the convex hull of $n$ distinct vertices on the moment curve $t \mapsto (t, t^2,\ldots, t^d)$. The precise choice of which $n$ points on this curve are selected is irrelevant for the combinatorial structure of this polytope. See an illustration of a cyclic polytope in \cref{fig:cyclic_polytope}. \end{definition} \begin{definition}[$f$-vector]\label{def:fvector} The $f$-vector of a $d$-dimensional polytope $P$ is given by $(f_0(P), \ldots , f_{d-1}(P))$, where $f_i(P)$ enumerates the number of $i$-dimensional faces in the $d$-dimensional polytope for all $i=0,\ldots,d-1$. For instance, a 3-dimensional cube has eight vertices, twelve edges, and six facets, so its $f$-vector is $(f_0(P),f_1(P),f_2(P))=(8,12,6)$. \end{definition} As stated earlier, \cite{mcmullen1970maximum} shows that the cyclic polytope $\mathcal{C}(n,d)$ maximizes the number of faces of every dimension among convex polytopes of dimension $d$ with $n$ vertices. In other words, for any $d$-dimensional polytope $P$ on $n$ vertices, we have $f_i(P) \leq f_i(\mathcal{C}(n,d))$ for $0 \leq i \leq d-1.$ This result is known as McMullen's Upper Bound Theorem. \begin{figure} \caption{Representation of the cyclic polytope $\mathcal{C}(7,3)$.} \label{fig:cyclic_polytope} \end{figure} \section{Upper Bound (Proof of \Cref{thm:upperbound})}\label{section:theorem1} Throughout this section we fix positive integers $n > m\ge 2$ and $A\in\mathbb{R}^{m\times n}_{\ge0},c\in\mathbb{R}^n_{\ge0}$. We start by formally introducing the \textit{equality} loadout problem. We consider the parametric family of linear programming problems with equality constraints \begin{equation*} LP_=(A,c,b): \qquad \max\{ c^\top x \mid Ax = b, x \ge 0\}. \end{equation*} By analogy to the definition of loadouts in \cref{s:formulation}, an \textit{equality loadout} is defined as a subset of indices $L \subseteq \left\{1,\dots, n\right\}$ such that there exists a resource vector $b$ for which $LP_{=}(A,c,b)$ has a unique optimal solution $x^*$ such that $\supp(x^*) = L$. If $|L| = k$ then we say that $L$ is a $k$-equality loadout. Given $A$ and $c$ and an integer $k \in [m]$, let $\mathcal{L}^k_{=}(A,c)$ denote the family of all equality loadouts $L$ of dimension $k$. Finally, $\mathcal{L}_{=}(A,c)$ denotes the family of equality loadouts of all dimensions given $A$ and $c$. Namely, $\mathcal{L}_{=}(A,c) \triangleq \cup_{k=1}^m \mathcal{L}^k_{=}(A,c)$. The following lemma bounds the number of loadouts by the number of equality loadouts, for fixed $A$ and $c$.
\begin{lemma}\label{prop:inclusion} For every $A\in\mathbb{R}^{m\times n}_{\ge0}, \ c\in\mathbb{R}^n_{\ge0}$ and $k \in [m]$ we have $\mathcal{L}^k(A,c)\subseteq\mathcal{L}^k_{=}(A,c)$. \end{lemma} \noindent \proof{\emph{Proof}.} Consider $L \in \mathcal{L}^k(A,c)$. There exists $b \in \mathbb{R}^m_{\geq 0}$ and $x \in \mathbb{R}^n_{\geq 0}$ with $\supp(x) = L$ such that $x$ is the unique optimal solution of $LP(A,c,b)$. We can see that $x$ is also the unique optimal solution of $LP_=(A,c,b')$ where $b' = Ax$: any feasible solution of $LP_=(A,c,b')$ is feasible for $LP(A,c,b)$ (since $b' \le b$), so its objective value is at most $c^\top x$, which shows that $x$ is optimal; moreover, any other optimal solution to $LP_=(A,c,b')$ would also be optimal for $LP(A,c,b)$, contradicting the uniqueness of $x$. \Halmos \endproof In the rest of this section, assume without loss of generality that $A$ is a matrix of full row rank. To see that this assumption is not restrictive, let $A$ be an arbitrary $m\times n$ non-negative matrix and let $A^f$ be a submatrix of $A$ containing a maximal set of linearly independent rows of $A$. One can see that any equality loadout of $A$ is an equality loadout of $A^f$. Therefore, $\mathcal{L}^k_{=}(A,c)\subseteq\mathcal{L}^k_{=}(A^f,c)$, and since our objective in this section is to provide an upper bound on the number of loadouts, we may assume that $A$ is of full row rank. We present, for all $k\in[m]$, an upper bound for the number $|\mathcal{L}^k_{=}(A,c)|$ of equality loadouts of size $k$ with respect to the design $(A,c)$. To do so, we divide the cone corresponding to the columns of $A$ into a collection of cells of $\Delta_c(A)$. We show that equality loadouts correspond to simplicial cells in $\Delta_c(A)$ and that we can restrict ourselves, without loss of generality, to designs $(A,c)$ where all the cells of $\Delta_c(A)$ are simplicial. Finally, we present an upper bound on the number of cells of any dimension $k$ in a triangulation, which yields an upper bound on $|\mathcal{L}^k_{=}(A,c)|$. Some of the results of this section are known in the literature (an excellent reference is the textbook \cite{de2010triangulations}), but we present them using our notation and adapted to the loadout terminology. We provide proofs for clarity and out of a desire to be as self-contained as possible. The proofs are also suggestive of some aspects of our later constructions in \cref{sec:thm2}. \subsection{From equality loadouts to triangulations} The following result links the optimal solutions of $LP_=(A,c,b)$ to the cells of the subdivision $\Delta_c(A)$. \begin{prop}\label{prop:sturfmels-thomas}(\cite{sturmfels1997variation}, Lemma 1.4) The optimal solutions $x$ to $LP_=(A,c,b)$ are the solutions to the problem \begin{equation}\label{eq:support-subset-of-cell} \text{Find } x \in \mathbb{R}^n \text{ s.t. } Ax =b, x \ge 0, \text{ and } \text{supp}(x) \text{ is a subset of a cell of } \Delta_c(A). \end{equation} \end{prop} \noindent \proof{\emph{Proof}.} Consider the dual of $LP_=(A,c,b)$: \begin{alignat*}{3} D_{=}(A,c,b):\quad\quad & \text{minimize} & b^\top y& \\ & \text{s.t.} \quad& y^\top A \geq c \end{alignat*} \noindent We start by recalling the complementary slackness conditions. If $x$ and $y$ are feasible solutions to the primal and dual problem, respectively, then complementary slackness states that $x$ and $y$ are optimal solutions to their respective problems if and only if \begin{alignat*}{4}\tag{CS} y_i(a_i^\top x-b_i) & = & \ 0, \ & \quad \forall \ i \in [m],\\ (c_j - y^\top A_j) x_j & = & \ 0, \ & \quad \forall \ j \in [n]. \end{alignat*} Let $x$ be an optimal solution of $LP_=(A,c,b)$ and $y$ be an optimal solution of $D_{=}(A,c,b)$.
By complementary slackness, $x_j >0$ implies $y^\top A_j = c_j$, which means that the support of $x$ lies in a cell of $\Delta_c$. Conversely, let $x$ be a solution to \cref{eq:support-subset-of-cell}. Then there exists $y \in \mathbb{R}^m$ such that $\supp(x)\subset \{j \mid y^\top A_j = c_j\}$ and $y^\top A_j \geq c_j$ for all $j \in [n]$, so $y$ is dual feasible. This implies that $c^\top x = y^\top Ax = y^\top b$, and hence $x$ is an optimal solution to $LP_=(A,c,b)$ by weak duality.\Halmos \endproof \begin{lemma}\label{lemma:loadout-simplicial} A subset $L \subseteq [n]$ is an equality loadout of $(A,c)$ if and only if it is a simplicial cell in the subdivision $\Delta_c(A)$. \end{lemma} \noindent \proof{\emph{Proof}.} Suppose $L$ is a simplicial cell of $\Delta_c(A)$, and let $y$ be the vector corresponding to $L$ in the definition of $\Delta_c(A)$. Set the right-hand side $b = \sum_{j \in L} \alpha_j A_j$ for some $\alpha_j >0$, $\forall j \in L$. We show that $L$ is an equality loadout by showing that $\Bar{x} = (\Bar{x}_L, \Bar{x}_{\Bar{L}}) = (\alpha,0)$ (where $\bar L = [n] \setminus L$) is the unique optimal solution of $LP_=(A,c,b)$. We first show that $\bar{x}$ is optimal. Note that $\Bar{x}$ and $y$ are respectively primal and dual feasible, and they satisfy the complementary slackness conditions. In fact, since $A\bar{x} =b$ by definition, we have $y_i(a_i^\top \bar{x}-b_i)=0$ for $i \in [m]$. Furthermore, by definition of $y$, we have $y^\top A_j = c_j$ for $j \in L$, and since $\supp(\bar{x}) = L$, we have $\bar{x}_j = 0$ for all $j \not\in L$, which implies $(y^\top A_j - c_j)\bar{x}_j = 0.$ This shows that $\Bar{x}$ and $y$ satisfy the complementary slackness conditions. Therefore, $\Bar{x}$ (resp. $y$) is primal (resp. dual) optimal. We now show that $\bar{x}$ is unique. Suppose now that there is another optimal solution $x'$ to $LP_=(A,c,b)$. Then $x'$ and $y$ verify the complementary slackness conditions. This implies that $x'_j=0$ for $j \not\in L$, and $\bar{x}$ and $x'$ have support in $L$. But since $L$ is simplicial, the columns $(A_j)_{j\in L}$ are linearly independent, and the only solution to $Ax = b$ with support in $L$ is $\bar{x}$. Therefore, $\bar{x}=x'$. Assume now that $L$ is an equality loadout supported by a right-hand side $b$. By \cref{prop:sturfmels-thomas} there exists a cell $C \in \Delta_c$ such that $L \subseteq C$. Suppose that $L$ is not a cell of $\Delta_c$. By \cref{lemma:3properties}, $\Delta_c(A)$ is a subdivision of $\cone(A)$. Therefore, by property (CP), $L$ is not a face of any cell. Furthermore, since $L$ is an equality loadout, there exists a solution $x$ such that $\supp(x) = L$ and $Ax = b$. This implies that $b \in \relint(\cone(L)) \subset \cone(C)$. All faces of $C$ are cells, and by Corollary 11.11(a) in \cite{soltan2015lectures}, $\cone(C) = \bigcup \{ \relint(\cone(F)) \mid F \text{ is a face of } \cone(C) \}$. Therefore, $b$ lies in the relative interior of $\cone(F)$ for some face $F$ of $C$, and by \cref{prop:sturfmels-thomas}, there is an optimal solution of $LP_=(A,c,b)$ whose support is exactly $F$. Because we assumed that $L$ is not a cell while $F$ is, we have $F \neq L$. This contradicts the uniqueness of the support $L$ that is required for $L$ to be an equality loadout. Therefore, $L$ is a cell of $\Delta_c(A)$. Assume now that $L = \{j_1,\ldots,j_k\}$ is a cell that is not simplicial. This means that there exist $\gamma_2, \ldots, \gamma_{k}$ such that, without loss of generality, \[ \sum\limits_{i=2}^{k} \gamma_i A_{j_i} = A_{j_1} \ \ \mbox{ and } \ \ \sum\limits_{i=2}^{k} \gamma_i c_{j_i} = c_{j_1},\] where the second equation follows from the first by applying $y^\top$ (with $y$ as in the definition of $\Delta_c(A)$ for the cell $L$) and using $y^\top A_j = c_j$ for $j \in L$. Note that the $\gamma_i$ need not all be positive.
Since $L$ is an equality loadout, there exist a right-hand side $b$ and $\alpha > 0$ (componentwise) such that $b = \alpha_1 A_{j_1} + \ldots + \alpha_{k} A_{j_{k}}$ and $x= (\alpha,0)$ is the unique optimal solution of $LP_=(A,c,b)$. Let $\alpha_{\min} = \min\limits_{i \in \{1,\ldots,k\}} \alpha_i$, $\gamma_{\max} = \max\limits_{i \in \{2,\ldots,k\}} |\gamma_i|$, and $\epsilon = \frac{\alpha_{\min}}{\gamma_{\max}}$. It is clear that $\epsilon > 0$ and $\alpha_i \geq \epsilon \gamma_i$ for all $i \in \{2,\ldots,k\}$. We can rewrite the right-hand side $b$ as follows: \begin{align*} b = (\alpha_1+\epsilon) A_{j_1} +\sum\limits_{i=2}^{k} (\alpha_i - \epsilon \gamma_i) A_{j_i}. \end{align*} We can therefore define a new solution $x'$ such that $x'_{j_i} = \alpha_i - \epsilon \gamma_i$ for $i \in \{2,\ldots, k\}$, $x'_{j_{1}} = \alpha_1 + \epsilon$, and $x'_j = 0$ otherwise. We claim that $x$ and $x'$ have the same cost. In fact, \begin{align*} c^\top x & = \sum\limits_{i=1}^{k} \alpha_i c_{j_i}\\ c^\top x' & = (\alpha_1+\epsilon) c_{j_{1}} + \sum\limits_{i=2}^{k} (\alpha_i-\epsilon\gamma_i) c_{j_i} = \epsilon\Big(c_{j_1} - \sum\limits_{i=2}^{k} \gamma_i c_{j_i}\Big) + \sum\limits_{i=1}^{k} \alpha_i c_{j_i} = c^\top x. \end{align*} Since $x' \neq x$ is feasible and has the same cost, this contradicts the uniqueness of the optimal solution with support $L$. Thus $L$ is a simplicial cell of $\Delta_c(A).$\Halmos
\endproof
The lemma above implies that we can focus on the simplicial cells of the subdivision $\Delta_c(A)$. We next show that we can consider without loss of generality choices of $c$ where all the cells of $\Delta_c(A)$ are simplicial. The idea is that if $\Delta_c(A)$ has some non-simplicial cells, then we can ``perturb'' the cost vector $c$ to some $c'$ and transform at least one non-simplicial cell into one or more simplicial cells. This perturbation preserves all the simplicial cells of $\Delta_c(A)$, and thus the number of equality loadouts for the design $(A,c')$ cannot be less than the number of equality loadouts for the design $(A,c)$. Hence, when maximizing the number of loadouts, we can ignore cost vectors $c$ that give rise to non-simplicial cells. We first define the notion of refinement that formalizes the ``perturbation'' of $c$.
\begin{definition} Given two cell complexes $\mathcal{C}_1$ and $\mathcal{C}_2$, we say that $\mathcal{C}_1$ refines $\mathcal{C}_2$ if every cell of $\mathcal{C}_1$ is contained in a cell of $\mathcal{C}_2$. \end{definition}
\cite[Lemma 2.3.15]{de2010triangulations} shows that if $c' = c + \epsilon \cdot e$ is a perturbation of $c$ with $\epsilon > 0$ sufficiently small and $e = (1,\ldots,1)$, then the new subdivision $\Delta_{c'}(A)$ refines $\Delta_c(A)$. Since $\Delta_{c'}(A)$ refines $\Delta_{c}(A)$, it has at least as many cells. However, it is not clear a priori whether $\Delta_{c'}(A)$ has at least as many simplicial cells as $\Delta_{c}(A)$. We show in the following lemma that this is the case: such a refinement preserves all the simplicial cells of $\Delta_c(A)$, and can only increase the number of simplicial cells.
\begin{lemma}\label{lemma:refinement} A refinement of $\Delta_c$ can only add to the number of simplicial cells in $\Delta_c$. \end{lemma}
\proof{\emph{Proof}.}
We fix the matrix $A$ and let $\Delta_c$ denote $\Delta_c(A)$. Assume $\Delta_c$ is not a triangulation. There exists $\epsilon > 0$ such that for every cost vector $c'$ that satisfies $|c_i-c'_i| \leq \epsilon$, $\Delta_{c'}$ is a refinement of $\Delta_c$, i.e., for every cell $C' \in \Delta_{c'}$, there exists a cell $C \in \Delta_c$ such that $C' \subset C$. We will argue that all simplicial cells of $\Delta_c$ are simplicial cells of every refinement $\Delta_{c'}$.
Let $F$ be a simplicial cell of $\Delta_c$, and let $x$ be a point in the relative interior of $F$. There exists a cell $C' \in \Delta_{c'}$ such that $x \in \relint(C')$; choosing $x$ generically within $\relint(F)$ (avoiding the finitely many cells of $\Delta_{c'}$ of dimension smaller than $\dim F$), we may assume $\dim C' = \dim F$. By definition of a refinement, there exists $C \in \Delta_c$ such that $C' \subseteq C$ and $x \in \relint(C') \subset \relint(C)$. Therefore, $C$ and $F$ are both cells of the subdivision $\Delta_c$ and $\relint(C) \cap \relint(F) \neq \emptyset$. This implies that $C = F$ by the intersection property. We have established that, for every simplicial cell $F$ in $\Delta_c$, there exists a cell $C'$ in $\Delta_{c'}$ with $\dim C' = \dim F$ and $C' \subseteq F$. Since $F$ is simplicial, $C'$ is a face of $F$, and the closure property says $C'$ is a cell of $\Delta_c$. Furthermore, since $\dim C' = \dim F$ and $C' \subseteq F$, we get $C' = F$, so $F$ is a simplicial cell of the refinement $\Delta_{c'}$.\Halmos
\endproof
In \cite[Corollary 2.3.18]{de2010triangulations}, it is shown that $\Delta_c(A)$ can be refined to a triangulation within a finite number of refinements (it suffices for $c'$ to be generic). Therefore, the lemma above implies that, in order to maximize the number of loadouts for any dimension $k \leq m$, we can restrict attention without loss of generality to designs $(A,c)$ such that $\Delta_c(A)$ is a triangulation.
We observe that since the matrix $A \in \mathbb{R}_{\geq 0}^{m\times n}$ has all nonnegative entries, $\cone(A)$ is contained entirely in the nonnegative orthant and therefore cannot contain a line. Cones that do not contain lines are called \emph{pointed}. The following lemma shows that triangulations of pointed cones in dimension $m$ are equivalent to triangulations of an (unrestricted) point configuration in dimension $m-1$. This implies that equality loadouts (which we showed correspond to cells in a cone triangulation) can be seen as cells of a triangulation of a point configuration. The proof of the lemma is deferred to Appendix~\ref{appx:lemma_pointed_cone}.
\begin{lemma}[\cite{beckcomputing}, Theorem 3.2]\label{lem:pointedcone} Every triangulation $\mathcal{T}$ of a pointed cone of dimension $m$ can be considered as a triangulation $\mathcal{T}'$ of a point configuration of dimension $m-1$ such that for $1 \leq k \leq m$, the $k$-simplices of $\mathcal{T}$ can be considered as $(k-1)$-simplices of $\mathcal{T}'$. \end{lemma}
\cref{lem:pointedcone} implies that equality loadouts of dimension $k$ correspond to $(k-1)$-simplices in a triangulation of a point configuration in dimension $m-1$.
\subsection{From cells of a triangulation to faces of a polytope}
Recall that $n>m\ge k\ge2$. We have just shown that the number of equality $k$-loadouts is upper-bounded by the maximum possible number of $(k-1)$-simplices in a triangulation of $n$ points in $\mathbb{R}^{m-1}$. We now show that any $n$-point triangulation in $\mathbb{R}^{m-1}$ can be embedded onto the boundary of an $(n+1)$-vertex polytope in $\mathbb{R}^m$, in such a way that $(k-1)$-simplices in the triangulation correspond to $(k-1)$-faces of the polytope. We then apply the cyclic polytope upper bound on the number of $(k-1)$-faces of any $(n+1)$-vertex polytope in $\mathbb{R}^m$ to establish our result. To get a tighter bound, we carefully subtract the ``extraneous'' faces added by the embedding that do not correspond to $(k-1)$-simplices in the original triangulation. We lower bound the number of such extraneous faces using the lower bound theorem of \cite{kalai1987rigidity}.
Let $\mathcal{T}$ denote the original $n$-point triangulation in $\mathbb{R}^{m-1}$. We will use $\conv \mathcal{T}$ to refer to the polytope obtained by taking the convex hull of all the faces in $\mathcal{T}$. Let $g_{k-1}(\mathcal{T})$ denote the number of $(k-1)$-simplices in the triangulation $\mathcal{T}$. We embed $\conv \mathcal{T}$ into a polytope $P$ in $\mathbb{R}^m$ as follows. Let $z^1,\ldots,z^n\in\mathbb{R}^{m-1}$ denote the vertices in triangulation $\mathcal{T}$, and fix $\epsilon>0$. For all $i=1,\ldots,n$, let $\underline{z}^i$ denote the point $(z^i_1,\ldots,z^i_{m-1},0)\in\mathbb{R}^m$ and let $\Bar{z}^i$ denote the ``lifted'' point $(z^i_1,\ldots,z^i_{m-1},\epsilon)\in\mathbb{R}^m$. We replace each point $\underline{z}^i$ that is in the interior of $\conv(\{\underline{z}^1,\ldots,\underline{z}^n\})$ by the lifted point $\Bar{z}^i$; the points on the boundary of $\conv(\{\underline{z}^1,\ldots,\underline{z}^n\})$ are not lifted. Let $S$ be the set of the $n$ points in $\mathbb{R}^m$ after lifting. Let $\mathcal{S}_m$ be the unit sphere of $\mathbb{R}^m$ centered at the origin, and let $S'$ be the projection of $S$ onto $\mathcal{S}_m$, where every point is projected along the line connecting the point to the center of the sphere. The set $S'$ has the property that the points on the ``equator'' hyperplane $x_{m} = 0$ are exactly the projections of the points of $S$ on the boundary of $\conv(S)$ (the points that were not lifted). The other points of $S'$ are in the ``northern hemisphere'' (the half space $x_{m}>0$). The final step is to adjoin the ``south pole'' $(0 ,\ldots ,0, -1) \in \mathbb{R}^{m}$ as an additional vertex. Let $P$ be the resulting polytope, i.e., $P = \conv(S'\cup\{(0,\ldots,0,-1)\})$. The next lemma shows that for $2 \leq k \leq m$, the $(k-1)$-dimensional faces of $P$ are either $(k-1)$-simplices of $\mathcal{T}$, or $(k-2)$-faces of $\conv\mathcal{T}$ that were adjoined to the south pole.
\begin{lemma}\label{lemma:embedding} For $2 \leq k \leq m$, we have $f_{k-1}(P) =g_{k-1}(\mathcal{T})+f_{k-2}(\mathcal{T}).$ \end{lemma}
\proof{\emph{Proof}.}
Fix $k \in \{2,\ldots,m\}$. The projection of every $(k-1)$-simplex of $\mathcal{T}$ (after lifting the non-boundary points) is a simplicial face of $P$. Let $F$ be a $(k-2)$-dimensional face of $\conv \mathcal{T}$. The points of $F$ lie on the boundary of $\conv\mathcal{T}$, and by adjoining them to the south pole, we obtain a $(k-1)$-face of the new polytope $P$.\Halmos
\endproof
The previous lemma implies that $g_{k-1}(\mathcal{T}) = f_{k-1}(P)- f_{k-2}(\mathcal{T})$. Since $P$ has at most $n+1$ vertices, we know from the upper bound theorem that $f_{k-1}(P) \leq f_{k-1}(\mathcal{C}(n+1,m))$. Therefore, $g_{k-1}(\mathcal{T}) \leq f_{k-1}(\mathcal{C}(n+1,m))- f_{k-2}(\mathcal{T})$, and all we need is a lower bound on $f_{k-2}(\mathcal{T})$. The following lemma uses the lower bound theorem (Theorem 1.1, \cite{kalai1987rigidity}) to establish a lower bound on $f_{k-2}(\mathcal{T})$. The lower bound theorem presents a lower bound on the number of faces in every dimension among all polytopes of dimension $d$ with $p$ vertices, for $d \geq 2$ and $p \geq 2$.
\begin{lemma}\label{lemma:upper_bound_triangulation} For $2 \leq k \leq m$, we have $g_{k-1}(\mathcal{T}) \leq f_{k-1}(\mathcal{C}(n+1,m))-\binom{m}{k-1}.$ \end{lemma}
\proof{\emph{Proof}.}
Let $p$ denote the number of vertices (boundary points) of the polytope $\conv\mathcal{T}$.
By the lower bound theorem of \cite{kalai1987rigidity}, we obtain \[ f_{k-2}(\mathcal{T})\ge\left\{ \begin{array}{ll} \binom{m-1}{k-2}p - \binom{m}{k-1}(k-2) & \mbox{if } k=2,\ldots,m-1, \\ \\ (m-2)p - m(m-3) & \mbox{if } k = m. \end{array} \right . \] The right-hand side is increasing in $p$. But the minimum possible value of $p$ is $m$ (since $\conv \mathcal{T}$ is a full-dimensional polytope in $\mathbb{R}^{m-1}$). Hence \[ f_{k-2}(\mathcal{T})\ge\left\{ \begin{array}{ll} \binom{m-1}{k-2}m - \binom{m}{k-1}(k-2) & \mbox{if } k=2,\ldots,m-1, \\ \\ m & \mbox{if } k = m. \end{array} \right . \] We observe that $\binom{m-1}{k-2}m - \binom{m}{k-1}(k-2)$ also evaluates to $m$ if $k=m$. Therefore, we can combine the two cases and derive using \cref{lemma:embedding} that \begin{align*} g_{k-1}(\mathcal{T}) &\le f_{k-1}(P)-\left(\binom{m-1}{k-2}m - \binom{m}{k-1}(k-2)\right) \\ &\le f_{k-1}(\mathcal{C}(n+1,m))-\left(\frac{m!}{(k-2)!(m-k+1)!} - \frac{m!}{(k-1)!(m-k+1)!}(k-2)\right) \\ &= f_{k-1}(\mathcal{C}(n+1,m))-\left(\frac{m!}{(k-1)!(m-k+1)!}(k-1) - \frac{m!}{(k-1)!(m-k+1)!}(k-2)\right) \\ &= f_{k-1}(\mathcal{C}(n+1,m))-\binom{m}{k-1}, \end{align*} where we used the fact that $f_{k-1}(P) \leq f_{k-1}(\mathcal{C}(n+1,m))$ from the upper bound theorem.\Halmos
\endproof
\subsection{Proof of Theorem 1}
We are now ready to present the proof of \cref{thm:upperbound}.
\proof{\emph{Proof of \cref{thm:upperbound}}.}
Consider $k \in \{2,\ldots,m\}$. \cref{lemma:loadout-simplicial} states that equality loadouts of size $k$ are simplicial $k$-cells in the cone subdivision $\Delta_c(A)$. By \cref{lemma:refinement}, we may assume that $\Delta_c(A)$ is a triangulation of cones, and by \cref{lem:pointedcone}, the number of $k$-cells of $\Delta_c(A)$ is at most the maximum number of $(k-1)$-cells in a triangulation of $n$ points in dimension $m-1$. Finally, \cref{lemma:upper_bound_triangulation} shows that the number of $(k-1)$-cells in a triangulation of $n$ points in dimension $m-1$ is at most $f_{k-1}(\mathcal{C}(n+1,m)) - \binom{m}{k-1}$. Therefore, $|\mathcal{L}^k_=(A,c)| \leq f_{k-1}(\mathcal{C}(n+1,m)) - \binom{m}{k-1}$. This inequality, combined with \cref{prop:inclusion}, yields \begin{equation*} |\mathcal{L}^k(A,c)| \leq |\mathcal{L}^k_=(A,c)| \leq f_{k-1}(\mathcal{C}(n+1,m)) - \binom{m}{k-1}.\Halmos \end{equation*}
\endproof
\section{General Lower Bound (Proof of \Cref{thm:lowerbound})}\label{sec:thm2}
Throughout this section, we fix positive integers $n>m\ge4$, and explicitly present designs $(A,c)$ that have the number of $k$-loadouts promised in \Cref{thm:lowerbound} for all $k\leq m$. For $m=2$ and $m=3$, designs that exactly match the upper bound are presented in Appendix \ref{appx:exact_construction}. All of the designs constructed in this paper will satisfy the property that $A$ has linearly independent rows, hence we assume in the rest of this section that $A$ is a full row rank matrix.
\subsection{Construction based on moment curve} \label{sec:construction}
Let $t_1,\ldots,t_n$ be arbitrary real numbers satisfying $0< t_1 < t_2 < \ldots < t_n$. Let $M$ be an arbitrary constant satisfying $M\ge t_n^m$. We define the design $(A, c)$ so that $c = (1, \ldots, 1) \in \mathbb{R}^n \mbox{ and } A = [v'_m(t_1), \ldots, v'_m(t_n)]$, where \[ t \mapsto v'_m(t) = \begin{pmatrix} t\\ M-t^2\\ t^3\\ M-t^4\\ \vdots \\ \frac{(-1)^m+1}{2}M-(-1)^mt^m \end{pmatrix} \in \mathbb{R}^m.\] Note that the final row equals $M-t^m$ if $m$ is even, or $t^m$ if $m$ is odd.
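For concreteness, the following short Python sketch instantiates the construction above on a small instance and checks that the resulting matrix is nonnegative and has full row rank. It is illustrative only and not part of the formal development; the particular choices $t_j=j$ and $M=t_n^m$ are assumptions within the allowed ranges.
\begin{verbatim}
import numpy as np

def rotated_moment_design(n, m, ts=None):
    """Build (A, c) from the rotated moment curve v'_m.

    ts: increasing positive parameters t_1 < ... < t_n (defaults to 1, ..., n).
    Even coordinates are flipped and shifted by M >= t_n^m so that A >= 0.
    """
    ts = np.arange(1.0, n + 1) if ts is None else np.asarray(ts, dtype=float)
    M = ts[-1] ** m                        # any M >= t_n^m works
    A = np.empty((m, n))
    for i in range(1, m + 1):              # row i holds t^i (i odd) or M - t^i (i even)
        A[i - 1] = ts ** i if i % 2 == 1 else M - ts ** i
    c = np.ones(n)
    return A, c

A, c = rotated_moment_design(n=7, m=4)
assert (A >= 0).all()                      # nonnegative design matrix
assert np.linalg.matrix_rank(A) == 4       # full row rank (n > m, distinct t's)
\end{verbatim}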
For any such values $t_1,\ldots,t_n$ and $M$, we will get a design that satisfies our \Cref{thm:lowerbound}. We set all the entries of the cost vector $c$ to 1 to simplify computations. This is not a requirement, and the construction would still hold by setting $c_j$ to be any positive number and scaling the column $A_j$ by a factor of $c_j$. We will also later show that any of these constructions satisfies our assumption of $A$ having full row rank.
\textbf{Motivation behind the construction.} Let $P$ be the convex hull of $\{v'_m(t_1), \ldots, v'_m(t_n)\}$. Let \[ t \mapsto v_m(t) = \begin{pmatrix} t\\ t^2\\ t^3\\ \vdots\\ t^m \end{pmatrix} \in \mathbb{R}^m\] denote the original $m$-dimensional moment curve that defines the cyclic polytope. The choice of the curve $v'_m$ is motivated by the role the cyclic polytope plays in our corresponding upper bound \cref{thm:upperbound}. In fact, \cref{thm:upperbound} shows that the number of $k$-dimensional loadouts is at most the number of $(k-1)$-dimensional faces of the cyclic polytope $\mathcal{C}(n+1,m)$ (for $2 \leq k \leq m$). An ideal lower bound proof would connect the number of loadouts to the number of faces of the cyclic polytope. However, simply setting the columns of the constraint matrix $A$ to be points on the moment curve of the cyclic polytope does not guarantee the existence of loadouts. We therefore introduce the curve $v'_m$, which describes a ``rotated'' cyclic polytope; the rotation ensures that the supporting normals of ``half'' of the facets are nonnegative. We use these rotated facets to construct a number of loadouts that asymptotically matches the upper bound. The rotation is performed by multiplying the even coordinates of the moment curve by $-1$, and we use a sufficiently large constant $M$ to ensure the nonnegativity of the new constraint matrix.
\subsection{Dual certificate for loadouts} \label{sec:dualCert}
Using LP duality, we derive a sufficient condition for subsets of $[n]$ to be loadouts.
\begin{definition}\label{def:inequalitycell} A set $C\subseteq[n]$ is an \textit{inequality cell} of the design $(A,c)$ if there exists a vector $y\in\mathbb{R}^m$ such that \begin{alignat}{4}\label{eq:inequalitycell} y_i & > & 0 & , \quad \forall \ i \in [m];\\ \nonumber y^\top A_j & = & c_j & , \quad \forall \ j \in C;\\ \nonumber y^\top A_j & > & c_j & , \quad \forall \ j \not\in C. \end{alignat} \end{definition}
Here, $y$ can be interpreted as a dual variable. However, in contrast to the definition of a cell that features in \Cref{prop:sturfmels-thomas}, here we require $y>0$. This is because non-negativity is needed for $y$ to be feasible in the dual when the LP has an inequality constraint $Ax\le b$ instead of an equality constraint as considered in \cref{prop:sturfmels-thomas}.
\begin{lemma}\label{lem:inequalitycell} Suppose $C\subseteq[n]$ is an inequality cell with $|C|=m$. Then every non-empty subset of $C$ is a loadout. \end{lemma}
To establish \Cref{lem:inequalitycell}, we show that for every non-empty subset $L \subseteq C$, $y$ satisfies the complementary slackness conditions together with a primal variable $x$ that has support equal to $L$. This establishes the optimality of $x$, and to show its uniqueness, we use the assumption that $A$ has full row rank $m$.
\proof{\emph{Proof of \Cref{lem:inequalitycell}}.}
\noindent We start by recalling the complementary slackness conditions.
If $x$ and $y$ are feasible solutions to the primal and dual problem, respectively, then complementary slackness states that $x$ and $y$ are optimal solutions to their respective problems if and only if \begin{alignat*}{4}\tag{CS} y_i(a_i^\top x-b_i) & = & \ 0, \ & \quad \forall \ i \in [m],\\ (c_j - y^\top A_j) x_j & = & \ 0, \ & \quad \forall \ j \in [n]. \end{alignat*} Now, let $L$ be a non-empty subset of $C$. We must show that $L$ is a loadout. Take an arbitrary $x^L\ge0$ with support equal to $L$, and define $b$ to equal $Ax^L$. Since $C$ is an inequality cell by \cref{def:inequalitycell}, there exists a dual variable $y^C$ satisfying the conditions in \cref{eq:inequalitycell}. Consider $LP(A,c,b)$. Clearly $x^L$ and $y^C$ are primal and dual feasible. They also satisfy the CS conditions: the primal constraints hold with equality since $b = Ax^L$, and $(c_j - (y^C)^\top A_j) x^L_j = 0$ for all $j$ since $\supp(x^L) = L \subseteq C$. Therefore, $x^L$ and $y^C$ are primal and dual optimal. We now argue that $x^L$ is the unique optimal solution of $LP(A,c,b)$. If $x^L$ is not unique, there exists another optimal solution $x'$. By complementary slackness, $x'$ and $y^C$ must satisfy $(c_j - (y^C)^\top A_j) x'_j = \ 0$ for all $j \in [n].$ By definition of $y^C$, $(y^C)^\top A_j > c_j, \ \forall \ j \not\in C$. Therefore, $\supp(x') \subseteq C$. The other complementary slackness condition \begin{alignat*}{4} y^C_i(a_i^\top x'-b_i) & = & \ 0, \ & \quad \forall \ i \in [m], \end{alignat*} implies that \begin{align}\label{eq:full_rank_system} [A_{j_1}|\cdots|A_{j_m}] \ x'=b, \end{align} where $C=\{j_1,\ldots,j_m\}$ and, with a slight abuse of notation, $x'$ denotes the restriction of $x'$ to the coordinates in $C$. But since $A$ is assumed to be of full row rank, the columns $A_{j_1}, \cdots, A_{j_m}$ are linearly independent and the system \eqref{eq:full_rank_system} has a unique solution. Since we have \begin{align*} [A_{j_1}|\cdots|A_{j_m}] \ x^L =b, \end{align*} by definition of $x^L$ and $b$, $x^L$ is the unique solution to \eqref{eq:full_rank_system} and therefore the unique optimal solution of $LP(A,c,b)$. This shows that $L$ is a loadout and concludes the proof. \Halmos
\endproof
Note that this lemma only works in one direction: if $L$ is a loadout, it is not clear that we can find a corresponding dual certificate that satisfies \cref{def:inequalitycell}. However, for our construction, we only need the direction proved in the lemma.
\subsection{Deriving dual certificates for our construction} \label{sec:reducingToCombinatorial}
In order to prove \cref{thm:lowerbound}, we consider our design from \Cref{sec:construction}, and try to show that there are many inequality cells of cardinality $m$. To do so, we take an arbitrary $C\subseteq[n]$ with $|C|=m$ and consider the hyperplane that goes through the $m$ points $\{v'_m(t_j) \mid j\in C\}$. We show in \Cref{lem:hyperplaneequation} that the coefficients of the equation for this hyperplane all have the same sign. We then use these coefficients to construct a candidate dual vector $y$. The last step (\cref{lemma:remaining_conditions}) is to show that when the hyperplane satisfies a \textit{gap parity} combinatorial condition, this dual vector will indeed satisfy \Cref{def:inequalitycell}, certifying that $C$ is an inequality cell. The proofs of \cref{lem:hyperplaneequation,lemma:remaining_conditions} are presented in Appendix~\ref{appendix:proof_lemmas}.
\begin{lemma}\label{lem:hyperplaneequation} Let $C = \{j_1, \ldots, j_m\} \subseteq [n]$ be a subset of $m$ indices such that $j_1<\cdots<j_m$.
Then the equation \begin{equation}\label{eq:hyperplane} \det\begin{pmatrix} 1 & \ldots & 1 & 1\\ v'_m(t_{j_1}) & \ldots & v'_m(t_{j_m}) & y \end{pmatrix} = 0 \end{equation} defines a hyperplane in the variable $y\in\mathbb{R}^m$ that passes through the points $v'_m(t_{j_1}), \ldots, v'_m(t_{j_m}).$ Furthermore, if equation \cref{eq:hyperplane} is written in the form \[ \alpha_1 y_1 + \ldots + \alpha_m y_m - \beta = 0, \] then we have $\alpha_1 \neq 0, \ldots, \alpha_m \neq 0, \beta \neq 0$, and \[ \mbox{sign}(\alpha_1) = \ldots = \mbox{sign}(\alpha_m) = \mbox{sign}(\beta) = (-1)^{\lfloor \frac{m}{2} \rfloor + m + 1}, \] where $\mbox{sign}(\alpha_j)$ is equal to $1$ if $\alpha_j > 0$ and equal to $-1$ otherwise. \end{lemma}
We now consider a subset $C = \{j_1, \ldots, j_m\} \subseteq [n]$ with $j_1<\cdots<j_m$, such that the corresponding hyperplane has equation $ \alpha_1 y_1 + \ldots + \alpha_m y_m - \beta = 0$, as defined above. The previous lemma shows that the dual variable $y = \alpha/\beta$ satisfies $y_i > 0$ for all $i \in [m]$. We now proceed towards a \textit{gap parity condition} on the subset $C$ under which setting $y=\alpha/\beta$ also satisfies the remaining conditions of \Cref{def:inequalitycell}.
\begin{definition} (Gaps). For a set $C \subset [n]$, a \textit{gap} of $C$ refers to an index $i \in [n]\setminus C$. A gap $i$ of $C$ is an \textit{even gap} if the number of elements in $C$ larger than $i$ is even, and $i$ is an \textit{odd gap} otherwise. \end{definition}
\begin{definition}\label{def:gap_parity} (Facets and Gap Parity). A subset $C\subseteq[n]$ is called a \textit{facet} if $|C|=m$ and either: (i) all of its gaps are even; or (ii) all of its gaps are odd. If all of its gaps are even, then we call $C$ an \textit{even facet} and define $g(C)=2$. On the other hand, if all of its gaps are odd, then we call $C$ an \textit{odd facet} and define $g(C)=1$. We let $g(C)\in\{1,2\}$ denote the \textit{gap parity} of a facet $C$, with $g(C)$ being undefined if $C$ is not a facet. \end{definition}
We now see that every facet with gap parity \textit{opposite} to $m$ is an inequality cell.
\begin{lemma}\label{lemma:remaining_conditions} Every facet $C$ with $g(C)\not\equiv m \ (\text{mod } 2)$ is an inequality cell. \end{lemma}
The proofs of \cref{lem:hyperplaneequation,lemma:remaining_conditions} require some technical developments on the sub-determinants of $A$ and are deferred to Appendix~\ref{appendix:proof_lemmas}. The outline of the proof of \cref{lemma:remaining_conditions} is as follows. To show that $C = \{j_1, \ldots, j_m\}$ is an inequality cell, we consider the dual certificate $y = \frac{\alpha}{\beta}$, where $\alpha_1 y_1 + \ldots + \alpha_m y_m - \beta = 0$ is the equation of the hyperplane associated with $C$. By \cref{lem:hyperplaneequation}, $\beta$ and the $\alpha_i$ have the same sign, $\beta \neq 0$, and $\alpha_i \neq 0$ for $i \in [m]$. Therefore, $y_i > 0, \ \ \forall i \in [m].$ For $j \in C$, \[ y^\top v'_m(t_j) = \frac{\alpha^\top v'_m(t_j)}{\beta} = \frac{\beta}{\beta} = 1 = c_j.\] The last step is to show $ y^\top v'_m(t_j) > c_j$ for $j \not\in C$.
\subsection{Counting the number of $k$-loadouts} \label{sec:pureCounting}
The preceding \Cref{sec:dualCert,sec:reducingToCombinatorial} combine to provide a purely combinatorial lower bound on the number of $k$-loadouts in our construction. Indeed, \Cref{lem:inequalitycell} shows that a subset $L\subseteq[n]$ with $|L|=k$ is a $k$-loadout as long as $L$ is contained within some inequality cell $C$.
In turn, \Cref{lemma:remaining_conditions} shows that $C$ is an inequality cell as long as it is a facet with gap parity opposite to $m$. In this section, we undertake the task of counting the number of $k$-subsets that are contained within at least one facet with gap parity opposite to $m$, for all $k=1,\ldots,m$. The challenge is not to over-count these subsets, because such a subset can be contained in different facets. To aid in this task, it is convenient to interpret subsets of $[n]$ as arrays of length $n$ consisting of dots (.) and stars (*), representing the absence and presence respectively of an index in the subset. We follow the notation of \cite{eu2010cyclic}, and for any subset $L \subseteq [n]$, we associate $L$ with a $(1 \times n)$-array having a star (*) at the $j$th entry if $j \in L$ and a dot (.) otherwise. In such an array, every maximal segment of consecutive stars is called a block. A block containing the star at entry $1$ or $n$ is a border block, and the other ones are inner blocks. The border block containing the star at entry 1 is called the first border block, and the one containing the star at $n$ is called the last border block. For example, the array associated with $n = 9$ and subset $L = \{1, 3, 4, 7, 8, 9\}$ is shown in \cref{figure:blocks}, with an inner block $\{3, 4\}$ and border blocks $\{1\}$ and $\{7, 8, 9\}$. A block will be called even or odd according to the parity of its size. For instance, $\{3, 4\}$ is an even inner block, and $\{7, 8, 9\}$ is an odd last border block. \begin{figure} \caption{The array associated with $n = 9$ and $L = \{1, 3, 4, 7, 8, 9\}$.} \label{figure:blocks} \end{figure} For $1 \le k \le m$ and $0 \le s \le k$, let $A(n, k, s)$ be the set of $(1\times n)$-arrays with $k$ stars and $s$ odd inner blocks. We further define $A^{\mathrm{odd}}(n, k, s)$ (resp. $A^{\mathrm{even}}(n, k, s)$) to be the set of $(1\times n)$-arrays with $k$ stars, $s$ odd inner blocks and an odd (resp. even) last border block, so that \begin{equation*} |A^{\mathrm{odd}}(n, k, s)| + |A^{\mathrm{even}}(n, k, s)| = |A(n, k, s)|. \end{equation*} Note that the last border block can be empty (occurring when there is a dot (.) in position $n$), and such a block is considered even. We first show that the set of arrays corresponding to facets is $A(n,m,0)$, and that the set of arrays of $k$-subsets that are included in a facet contains $\cup_{s=0}^{m-k} A(n,k,s)$.
\begin{lemma}\label{lemma:face_array} The set of facets (as per \Cref{def:gap_parity}) corresponds to the set of arrays with $m$ stars and no odd inner blocks. In other words, the set of facets is equal to $A(n,m,0)$. The set of even facets is $A^{\mathrm{even}}(n,m,0)$ and the set of odd facets is $A^{\mathrm{odd}}(n,m,0)$. Furthermore, for $1 \le k \le m-1$, every $k$-subset in $\cup_{s=0}^{m-k} A(n,k,s)$ is contained in a facet. \end{lemma}
\proof{\emph{Proof}.}
Let $C$ be an even facet. Let $j \not\in C$ be the greatest gap in $C$. By definition of an even facet, the number of indices in $C$ larger than $j$ is even. Because $j$ is the greatest gap, the elements in $C$ larger than $j$ constitute the last border block. Therefore the last border block of $C$ is even. Now, consider the rightmost inner block of $C$. If this block were odd, then the gap immediately to its left would be an odd gap of $C$, which contradicts the fact that $C$ is an even facet.
By considering the remaining inner blocks from right to left, we can see that if any of these blocks were odd, then $C$ would have an odd gap, contradicting the fact that it is an even facet. Therefore, the array of $C$ has no odd inner blocks and an even last border block. Similarly, one can show that if $C$ is an odd facet, then the array of $C$ has no odd inner blocks and an odd last border block. This shows that every facet has no odd inner blocks. Conversely, consider a subset $C$ whose array is in $A(n,m,0)$. This implies that $|C| = m$ and $C$ has no odd inner blocks. One can see that since all the inner blocks are even, all the gaps of $C$ have the same parity as the last border block of $C$. Therefore, $C$ is a facet.
Consider $1 \le k \le m-1$ and $L \subseteq [n]$ such that $|L| = k$ and the array of $L$ is in $\cup_{s=0}^{m-k} A(n,k,s)$. We show that we can add $m-k$ stars to the array of $L$ to get rid of all the odd inner blocks. This implies that $L$ is included in a facet. Since $L$ has $s \leq m-k$ odd inner blocks, we can add one star to the right of every odd inner block ($s$ stars in total). This ensures that there is no odd inner block. We then add the remaining $m-k-s$ stars to the right of the first border block of the array.\Halmos
\endproof
Recall that we are interested in the $k$-subsets that are included in facets with gap parity opposite to $m$. The next lemma presents a sufficient condition for a $k$-subset to be included in both an even and an odd facet.
\begin{lemma}\label{lemma:facets_intersection} For $1\le k \leq m-1$, any $k$-subset with strictly less than $m-k$ odd inner blocks is included in both an even facet and an odd facet. \end{lemma}
\proof{\emph{Proof}.}
We present the proof for the case of odd facets; the other case is argued similarly. Let $L \subseteq [n]$ be such that $|L| = k$ and the corresponding array has $s$ odd inner blocks, with $s < m-k$. We will augment the array corresponding to $L$ into an array corresponding to an odd facet by adding $m-k$ stars. This will prove that $L$ is contained in an odd facet. Consider the following procedure: \begin{enumerate} \item Go over all the odd inner blocks of $L$ from left to right. \item For every odd inner block, add a star to the right of the block. \item If the last border block is odd, add the remaining stars to the right of the first border block. Otherwise, add one star to the left of the last border block and the remaining stars to the right of the first border block. \end{enumerate} After step 2 of the procedure above, every odd inner block has been transformed into an even inner block or has become part of the last border block. In fact, after adding a star to the right of an inner block, we distinguish the following three cases: 1) The added star does not connect the block to any other block. In this case, the block becomes even. 2) The added star connects the odd inner block to an even inner block. In this case, the new block is even. 3) The added star connects the odd inner block to an odd inner block. In this case, we keep adding a star to the right. If the last border block is odd, then we can add the remaining $m-k-s$ stars to the right of the first border block. Since $m < n$, we can always do so without affecting the last border block. If the last border block is even, then we add one star to the left of this border block. If the added star does not connect this border block to any other block, then we are done.
If the added star connects this border block to another block $\alpha$, then $\alpha$ is an even block and, therefore, the last border block changes parity because it now has $1 + |\alpha|$ additional stars, and $1 + |\alpha|$ is odd.\Halmos
\endproof
The next lemma shows that when a $k$-subset has exactly $m-k$ odd inner blocks, for $1 \le k \le m$, it is included in an even (resp.\ odd) facet whenever its last border block is even (resp.\ odd).
\begin{lemma}\label{lemma:mk_odd} Let $L \subseteq [n]$ be such that $|L| = k$ for some $k \in [m]$. If the array of $L$ is in $A^{\mathrm{even}}(n,k,m-k)$, then $L$ is included in an even facet. Similarly, if the array of $L$ is in $A^{\mathrm{odd}}(n,k,m-k)$, then $L$ is included in an odd facet. \end{lemma}
\proof{\emph{Proof}.}
We prove the result in the even case; the odd case is argued similarly. Let $\alpha \in A^{\mathrm{even}}(n,k,m-k)$ be the array of $L$. By adding one star to every odd inner block in $\alpha$ (exactly $m-k$ stars added), we ensure that the resulting array has no odd inner blocks and an even last border block. The resulting array therefore corresponds to an even facet, and $L$ is included in an even facet.\Halmos
\endproof
\begin{lemma} \label{cor:small_dimension} Recall that $|\mathcal{L}^k(A,c)|$ is the number of $k$-loadouts in our construction. When $m$ is odd, we have for $1 \le k \le m$ \begin{align} \label{eqn:1234} |\mathcal{L}^k(A,c)| \geq \sum\limits_{s = 0}^{m-k-1} |A(n, k, s)| + |A^{\mathrm{even}}(n, k, m-k)|. \end{align} Similarly, when $m$ is even, we have \begin{align} \label{eqn:5678} |\mathcal{L}^k(A,c)| \geq \sum\limits_{s = 0}^{m-k-1} |A(n, k, s)| + |A^{\mathrm{odd}}(n, k, m-k)|. \end{align} Note that the summations in both~\eqref{eqn:1234} and~\eqref{eqn:5678} are empty if $k=m$. \end{lemma}
\proof{\emph{Proof}.}
Let $1 \le k \le m$. We present the proof only for the case where $m$ is odd; the other case is argued symmetrically. When $m$ is odd, \cref{lemma:remaining_conditions} combined with \cref{lem:inequalitycell} shows that every non-empty subset of an even facet is a loadout of our construction. Combining \cref{lemma:face_array} and \cref{lemma:facets_intersection} shows that any $k$-subset with strictly less than $m-k$ odd inner blocks is included in an even facet. \cref{lemma:mk_odd} shows that any $k$-subset with exactly $m-k$ odd inner blocks and an even last border block is included in an even facet. Therefore \[ |\mathcal{L}^k(A,c)| \geq \sum\limits_{s = 0}^{m-k-1} |A(n, k, s)| + |A^{\mathrm{even}}(n, k, m-k)|.\Halmos\]
\endproof
In the rest of this section, we show that for $k < \lfloor m/2\rfloor$, we have $|\mathcal{L}^k(A,c)| \geq \sum_{s=0}^{m-k} |A(n,k,s)|,$ and for $k \geq \lfloor m/2\rfloor$, we have $|\mathcal{L}^k(A,c)| \geq \big(\sum_{s=0}^{m-k} |A(n,k,s)|\big)/4$. We first deal with small values of $k$.
\begin{corollary}\label{lemma:small_k} When $k < m/2$, we have \[|\mathcal{L}^k(A,c)| \geq \sum_{s=0}^{m-k} |A(n,k,s)|.\] \end{corollary}
\proof{\emph{Proof}.}
We first observe that when $k < m/2$, we have $m-k > k$ and therefore $|A(n, k, m-k)|=0$ (an array with $k$ stars has at most $k$ blocks, hence cannot have $m-k>k$ odd inner blocks). \cref{cor:small_dimension} then implies that $|\mathcal{L}^k(A,c)| \geq \sum_{s=0}^{m-k} |A(n,k,s)|$.\Halmos
\endproof
Next, we focus on the case where $m$ is odd and present the following lemma.
\begin{lemma}\label{lemma:Aodd_even} For $1 \le k \le m$, we have \[|A^{\mathrm{odd}}(n, k, m-k)| \leq |A^{\mathrm{even}}(n, k, m-k)|.\] \end{lemma}
\proof{\emph{Proof}.}
Let $\alpha \in A^{\mathrm{odd}}(n, k, m-k)$.
We can transform $\alpha$ into an array in $A^{\mathrm{even}}(n, k, m-k)$ as follows: we take the first star to the left of the last border block, add it to the right of the first border block, and translate all inner blocks by 1 to the right. The resulting array is in $A^{\mathrm{even}}(n, k, m-k)$. One can easily see that this transformation is injective. Therefore, $|A^{\mathrm{odd}}(n, k, m-k)| \leq |A^{\mathrm{even}}(n, k, m-k)|$.\Halmos
\endproof
\begin{corollary}\label{cor:m_odd} For odd $m$ and $1 \le k \le m$, we have \[|\mathcal{L}^k(A,c)| \geq \frac{\sum_{s=0}^{m-k} |A(n,k,s)|}{2}.\] \end{corollary}
\proof{\emph{Proof}.}
By \cref{cor:small_dimension}, for odd $m$, \begin{align*} |\mathcal{L}^k(A,c)| & \geq \sum\limits_{s = 0}^{m-k-1} |A(n, k, s)| + |A^{\mathrm{even}}(n, k, m-k)|\\ & \geq \sum\limits_{s = 0}^{m-k-1} |A(n, k, s)| + \frac{|A^{\mathrm{even}}(n, k, m-k)|+|A^{\mathrm{odd}}(n, k, m-k)|}{2}\\ & = \sum\limits_{s = 0}^{m-k-1} |A(n, k, s)| + \frac{|A(n, k, m-k)|}{2}\\ & \geq \frac{\sum\limits_{s = 0}^{m-k} |A(n, k, s)|}{2}, \end{align*} where in the second inequality we use the fact that, by \cref{lemma:Aodd_even}, $|A^{\mathrm{even}}(n, k, m-k)| \geq |A^{\mathrm{odd}}(n, k, m-k)|$.\Halmos
\endproof
We now turn our attention to the case where $m$ is even. We first deal with the case $k = m/2$.
\begin{corollary}\label{lemma:k_equal_m2} When $m$ is even and $k = m/2$, we have \[|\mathcal{L}^k(A,c)| \geq \frac{\sum_{s=0}^{m-k} |A(n,k,s)|}{2}.\] \end{corollary}
\proof{\emph{Proof}.}
We argue that $|A(n,m/2,m/2)| \leq |A(n,m/2,m/2 - 1)|$ (note that $|A(n,m/2,m/2)|=|A^{\mathrm{even}}(n,m/2,m/2)|$ because $|A^{\mathrm{odd}}(n,m/2,m/2)|=0$ in this case). Combined with \cref{cor:small_dimension}, this implies that for $k = m/2$, \begin{align*} |\mathcal{L}^{\frac{m}{2}}(A,c)| & \geq \sum\limits_{s = 0}^{\frac{m}{2}-1} |A(n, \tfrac{m}{2}, s)|\\ & \geq \sum\limits_{s = 0}^{\frac{m}{2}-2} |A(n, \tfrac{m}{2}, s)| + \frac{|A(n,\tfrac{m}{2},\frac{m}{2}-1)| + |A(n,\tfrac{m}{2},\frac{m}{2})|}{2}\\ & \geq \frac{\sum\limits_{s = 0}^{\frac{m}{2}} |A(n, \frac{m}{2}, s)|}{2}. \end{align*} To see that $|A(n,m/2,m/2)| \leq |A(n,m/2,m/2 - 1)|$, take any array $\alpha \in A(n,m/2,m/2)$. The array $\alpha$ must have exactly $m/2$ odd inner blocks of one star each, no even blocks, and empty border blocks. By taking the last odd inner block of $\alpha$ and moving it to the far right, we create a border block of one star, and the resulting array is in $A(n,m/2,m/2 - 1)$. Furthermore, this operation is injective. Therefore, $|A(n,m/2,m/2)| \leq |A(n,m/2,m/2 - 1)|$.\Halmos
\endproof
The only remaining case is $m$ even and $k > m/2$.
\begin{lemma}\label{lemma:k_greater_m2} For even $m$ and $k > m/2$, we have \[|A^{\mathrm{even}}(n, k, m-k)| \leq 3 |A^{\mathrm{odd}}(n, k, m-k)| + |A(n, k, m-k-1)|.\] \end{lemma}
\proof{\emph{Proof}.}
We partition $A^{\mathrm{even}}(n, k, m-k)$ into two disjoint sets \[A^{\mathrm{even}}(n, k, m-k)= A_0^{\mathrm{even}}(n, k, m-k) \cup A_*^{\mathrm{even}}(n, k, m-k),\] where $A_0^{\mathrm{even}}(n, k, m-k)$ denotes the arrays in $A^{\mathrm{even}}(n, k, m-k)$ with an empty last border block and $A_*^{\mathrm{even}}(n, k, m-k)$ denotes the arrays with a nonempty last border block. We show that $|A_*^{\mathrm{even}}(n, k, m-k)| \leq |A^{\mathrm{odd}}(n, k, m-k)|$ and $|A_0^{\mathrm{even}}(n, k, m-k)| \leq 2|A^{\mathrm{odd}}(n, k, m-k)| + |A(n, k, m-k-1)|$. Let $\alpha \in A_*^{\mathrm{even}}(n, k, m-k)$.
We can transform $\alpha$ into an array in $A^{\mathrm{odd}}(n, k, m-k)$ as follows: we take the first star to the left of the last border block and add it to the right of the first border block, and then shift all the stars after the first border block to the right by 1. The resulting array is in $A^{\mathrm{odd}}(n, k, m-k)$. One can easily see that this operation is injective. Therefore, $|A_*^{\mathrm{even}}(n, k, m-k)| \leq |A^{\mathrm{odd}}(n, k, m-k)|$. Let $\alpha \in A_0^{\mathrm{even}}(n, k, m-k)$. We distinguish two cases. 1) If $\alpha$ has a nonempty first border block, then take the rightmost star of the first border block and move it to the right end of the array. The resulting array is in $A^{\mathrm{odd}}(n, k, m-k)$. 2) Assume $\alpha$ has an empty first border block. We first note that since $k > m/2$, we have $m-k < k$. Since $\alpha$ has empty border blocks and $k \equiv m-k \ (\text{mod } 2)$, $\alpha$ must have either an even block or an odd inner block with at least three stars. Suppose $\alpha$ has an odd inner block with at least three stars. Consider the rightmost odd inner block with at least three stars in $\alpha$. Take the rightmost star from this block and move it to the end of the array. The resulting array is in $A(n,k,m-k-1)$. This operation is reversible and injective. Suppose $\alpha$ has an even block. Consider the rightmost even block in $\alpha$. Take the rightmost star of this even block and move it to the end of the array, and take the leftmost star of this even block and move it to the start of the array. The resulting array is in $A^{\mathrm{odd}}(n, k, m-k)$ and the operation is injective. We finally conclude that $|A_0^{\mathrm{even}}(n, k, m-k)| \leq 2 |A^{\mathrm{odd}}(n, k, m-k)| + |A(n, k, m-k-1)|$. \Halmos
\endproof
\begin{corollary}\label{cor:greater_m2} For even $m$ and $m/2 < k \le m$, we have \[|\mathcal{L}^k(A,c)| \geq \frac{\sum_{s=0}^{m-k} |A(n,k,s)|}{4}.\] \end{corollary}
\proof{\emph{Proof}.}
By \cref{lemma:k_greater_m2}, \begin{equation}\label{eq:useful} |A^{\mathrm{even}}(n, k, m-k)| + |A^{\mathrm{odd}}(n, k, m-k)| + |A(n, k, m-k-1)| \leq 4 |A^{\mathrm{odd}}(n, k, m-k)| + 4|A(n, k, m-k-1)|. \end{equation} Recall that by \cref{cor:small_dimension}, we have \begin{align*} |\mathcal{L}^k(A,c)| & \geq \sum\limits_{s = 0}^{m-k-1} |A(n, k, s)| + |A^{\mathrm{odd}}(n, k, m-k)|\\ & \geq \sum\limits_{s = 0}^{m-k-2} |A(n, k, s)| + |A(n, k, m-k-1)| + |A^{\mathrm{odd}}(n, k, m-k)|\\ & \geq \sum\limits_{s = 0}^{m-k-2} |A(n, k, s)| + \frac{|A^{\mathrm{even}}(n, k, m-k)| + |A^{\mathrm{odd}}(n, k, m-k)| + |A(n, k, m-k-1)|}{4}\\ & \geq \frac{\sum\limits_{s = 0}^{m-k} |A(n, k, s)|}{4}, \end{align*} where we use \eqref{eq:useful} to get the second-to-last inequality.\Halmos
\endproof
We finally show that for $1 \le k \le m$, we have \begin{equation}\label{eq:af} \sum_{s=0}^{m-k} |A(n,k,s)| = f_{k-1}\big(\mathcal{C}(n,m)\big). \end{equation} This, in conjunction with our previous lemmas, implies that $|\mathcal{L}^k(A,c)|$, for all $1 \le k \le m$, is always at least a quarter (sometimes more) of $f_{k-1}\big(\mathcal{C}(n,m)\big)$.
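As an illustrative sanity check of \eqref{eq:af} (not needed for the proof), the following Python sketch counts, by brute force on a small instance with $m$ even, the $k$-subsets of $[n]$ whose array has at most $m-k$ odd inner blocks, and compares the result with the closed-form expression for $f_{k-1}(\mathcal{C}(n,m))$ recalled in the proof of \cref{lemma:asymptotic_fk}; the instance sizes are arbitrary assumptions.
\begin{verbatim}
from itertools import combinations
from math import comb

def odd_inner_blocks(L, n):
    """Number of odd maximal runs of consecutive indices of L containing neither 1 nor n."""
    L = sorted(L)
    blocks, cur = [], [L[0]]
    for prev, nxt in zip(L, L[1:]):
        if nxt == prev + 1:
            cur.append(nxt)
        else:
            blocks.append(cur)
            cur = [nxt]
    blocks.append(cur)
    inner = [b for b in blocks if b[0] != 1 and b[-1] != n]
    return sum(len(b) % 2 for b in inner)

def lhs_of_eq_af(n, m, k):
    """Left-hand side of (eq:af): k-subsets with at most m-k odd inner blocks."""
    return sum(1 for L in combinations(range(1, n + 1), k)
               if odd_inner_blocks(L, n) <= m - k)

def cyclic_face_count(n, m, k):
    """f_{k-1}(C(n, m)) for even m, via the closed form used in the appendix."""
    total = 0
    for j in range(1, m // 2 + 1):
        if 0 <= k - j <= j:
            total += n * comb(n - j, j) // (n - j) * comb(j, k - j)
    return total

n, m = 8, 4  # illustrative sizes
for k in range(1, m + 1):
    assert lhs_of_eq_af(n, m, k) == cyclic_face_count(n, m, k)
\end{verbatim}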
Consequently, our construction would be asymptotically a $1/4$-approximation, because the upper bound we showed in \cref{thm:upperbound} is at most $f_{k-1}(\mathcal{C}(n+1, m))$, and asymptotically we know from \cref{lemma:asymptotic_fk} that \[\lim_{n\rightarrow \infty} \frac{f_{k-1}(\mathcal{C}(n, m))}{f_{k-1}(\mathcal{C}(n+1, m))} = 1.\] To prove \eqref{eq:af}, we invoke the following criterion for determining the faces of $\mathcal{C}(n, m)$.
\begin{theorem}[\cite{shephard1968theorem}]\label{thm:shephard} For $1 \leq k \leq m$, a subset $L \subseteq [n]$ is the set of vertices of a $(k - 1)$-dimensional face of $\mathcal{C}(n, m)$ if and only if $|L|=k$ and its associated array contains at most $m - k$ odd inner blocks. \end{theorem}
An immediate consequence of the above theorem is that, for $1 \le k \le m$, $ f_{k-1}\big(\mathcal{C}(n,m)\big) = \sum_{s=0}^{m-k} |A(n,k,s)|. $ We are now ready to complete the proof of \Cref{thm:lowerbound}.
\proof{\emph{Proof of \cref{thm:lowerbound}}.}
We distinguish four cases. When $k < m/2 $, \cref{lemma:small_k} implies that $|\mathcal{L}^k(A,c)| \geq \sum_{s=0}^{m-k} |A(n,k,s)| = f_{k-1}\big(\mathcal{C}(n,m)\big)$. When $m$ is odd, by \cref{cor:m_odd} we have $|\mathcal{L}^k(A,c)| \geq \big(\sum_{s=0}^{m-k} |A(n,k,s)|\big)/2 = f_{k-1}\big(\mathcal{C}(n, m)\big)/2$. When $m$ is even and $k = m/2 $, by \cref{lemma:k_equal_m2}, $|\mathcal{L}^k(A,c)| \geq \big(\sum_{s=0}^{m-k} |A(n,k,s)|\big)/2 = f_{k-1}\big(\mathcal{C}(n, m)\big)/2$. When $m$ is even and $k > m/2 $, by \cref{cor:greater_m2} we have $|\mathcal{L}^k(A,c)| \geq \big(\sum_{s=0}^{m-k} |A(n,k,s)|\big)/4 = f_{k-1}(\mathcal{C}(n, m))/4$.\Halmos
\endproof
\section{Conclusion}\label{s:conclusion}
We study the novel problem of diversity maximization, motivated naturally by the video game design context, where designing for diversity is a core design philosophy. We model this diversity optimization problem as a parametric linear programming problem in which we are interested in the diversity of supports of optimal solutions. Using this model, we establish upper bounds and construct game designs that match this upper bound asymptotically. To our knowledge, this is the first paper to systematically study the question of ``diversity maximization'' as we have defined it here. The goal here is ``diverse-in diverse-out'': if two players have ``diverse'' resources (meaning different right-hand resource vectors), they will optimally play different strategies. We believe there could be other applications for ``diverse-in diverse-out'' optimization problems. Consider, for example, a diet problem where a variety of ingredients are used in the making of meals, depending on the available resources. We leave this exploration for future work. There are also natural extensions to our model and analysis that could be pursued. For instance, we have studied the linear programming version of the problem. An obvious next step is the integer linear setting, which also arises naturally in the design of games. For example, \cite{supertank} proposed a $\{0,1\}$-formulation of the game SuperTank. Just as in our analysis of the linear program, a deep understanding of the parametric nature of the integer optimization problems is necessary to proceed in the integer setting. \cite{sturmfels1997variation} introduce a theory of reduced Gr\"obner bases of toric ideals that play a role analogous to triangulations of cones.
We leave this as an interesting direction for further investigation building on this parametric theory. Of course, an even more compelling extension would involve \emph{mixed}-integer decision sets. This will require a deep appreciation of parametric mixed-integer linear programming, a topic that remains of keen interest in the integer programming community (see, for instance, \cite{eisenbrand2008parametric,oertel2020distributions,gribanov2020parametric}). In this case, the integer programming theory necessary to study the diversity maximization problem is still being developed. Yet another direction is to consider multiple objectives for the player. In our setting, we have assumed a single meaningful objective for the player, such as maximizing the damage of a loadout of weapons. In some games, other objectives may be possible, including the cosmetics of the chosen weapons or balancing a mix of tools with offensive and defensive attributes. There exists theory on parametric multi-objective optimization that could serve as a starting point here (see, for instance, \cite{tanino1988sensitivity}).
\pagenumbering{arabic} \renewcommand*{\thepage}{\arabic{page}}
\begin{APPENDIX}{Omitted Proofs}
\section{Properties of a cone triangulation}\label{appendixA}
\proof{\emph{Proof of \cref{lemma:3properties}}.}
We present a geometric proof. Recall that we can think of the subdivision $\Delta_c(A)$ as follows: take the cost vector $c$ and use it to lift the columns of $A$ to $\mathbb{R}^{m+1}$, and then look at the projection of the upper faces (those faces you would see if you ``look from above'') of the lifted point set. The projection of every one of these faces is a cell of $\Delta_c(A)$. \begin{itemize} \item[(CP)] Let $C$ be a cell of $\Delta_c(A)$ and $F$ be a face of $C$. Since $C$ is an upper face, every face of $C$ can also be ``seen from above'' and is therefore a cell of $\Delta_c(A)$. \item[(UP)] Let $x \in \cone(\{1,\ldots,n\})$. The intersection of $\{x\} \times \mathbb{R}$ with the convex hull of the lifted columns is a vertical segment from a bottom point $x_1$ to a top point $x_2$. Let $F$ be any proper face of this convex hull that contains $x_2$, which exists since $x_2$ is on the boundary. $F$ is an upper face and its projection is a cell in $\Delta_c(A)$ that contains $x$. \item[(IP)] The intersection property follows from the intersection property of the faces of the lifted polytope.\Halmos \end{itemize}
\endproof
\section{Maximizing loadouts is trivial when $n \leq m$}
\begin{lemma}\label{lemma:n_leq_m} Suppose $n \leq m$. In this case, a trivial design is optimal: by setting $A = I_n$, the identity matrix of size $n$ (padded with $m-n$ zero rows so that $A\in\mathbb{R}^{m\times n}$), and $c = (1,\ldots,1)$, we have that for every $1 \le k \le n$, every one of the $\binom{n}{k}$ subsets of size $k$ is a loadout. \end{lemma}
\proof{\emph{Proof}.}
Consider $1 \leq k \leq n$, and $L \subseteq [n]$ such that $|L| = k$. Consider the resource vector $b \in \mathbb{R}^m$ such that $b_j = 1$ if $j \in L$ and $b_j = 0$ otherwise. In this case, the linear program $LP(A,c,b)$ can be written as \begin{alignat*}{3} \text{maximize} & \sum\limits_{j \in L} x_j& \\ \text{s.t.} \quad& x_j \leq 1 \mbox{ for } j \in L\\ & x_j = 0 \mbox{ for } j \not\in L\\ & x_j \geq 0 \mbox{ for } j \in [n]. \end{alignat*} The unique optimal solution to $LP(A,c,b)$ in this case is such that $x_j = 1 \mbox{ for } j \in L$ and $x_j = 0 \mbox{ for } j \not\in L$.
Therefore $L$ is a loadout, and every subset of size $k$ is a loadout in the design $(A,c)$.\Halmos
\endproof
\section{Proof of \cref{lemma:asymptotic_fk}}\label{appx:asymptotic_fk}
\begin{lemma}\label{lemma:asymptotic_fk} For $1 \leq k \leq m$, \begin{align*} \lim_{n\to\infty}\frac{f_{k-1}(\mathcal{C}(n,m))}{f_{k-1}(\mathcal{C}(n+1,m))}=1. \end{align*} \end{lemma}
\proof{\emph{Proof}.}
We prove the lemma when $m$ is even; the other case is argued symmetrically. When $m$ is even, the number of faces $f_{k-1}(\mathcal{C}(n,m))$ can be written as follows \citep{eu2010cyclic}: \[ f_{k-1}(\mathcal{C}(n,m)) = \sum\limits_{j = 1}^{\frac{m}{2}} \frac{n}{n-j} \binom{n-j}{j}\binom{j}{k-j},\] with the usual convention that $\binom{i}{j}$ is $0$ if $i < j$ or $j < 0$. Therefore, to show the lemma, it is sufficient to show that, for $1 \leq j \leq m/2$, \[ \lim_{n\to\infty} \frac{\frac{n+1}{n+1-j}}{\frac{n}{n-j}} = 1 \mbox{ and } \lim_{n\to\infty} \frac{\binom{n+1-j}{j}}{\binom{n-j}{j}} = 1.\] It is clear that $\lim_{n\to\infty} \frac{n+1}{n+1-j}/\frac{n}{n-j} = 1$. Furthermore, \begin{align*} \frac{\binom{n+1-j}{j}}{\binom{n-j}{j}} & = \frac{(n+1-j)\cdots (n-2j+2)}{(n-j)\cdots(n-2j+1)}. \end{align*} It is clear that $\lim_{n\to\infty} \frac{n+1-j-\ell}{n-j-\ell} = 1$ for $0 \leq \ell \leq j-1 $. Therefore, \[\lim_{n\to\infty} \frac{\binom{n+1-j}{j}}{\binom{n-j}{j}} = 1,\] concluding the proof.\Halmos
\endproof
\section{Proof of \cref{lem:pointedcone}}\label{appx:lemma_pointed_cone}
\proof{\emph{Proof}.}
Let $\mathcal{K}$ be a pointed $m$-dimensional cone. Then there exists a vector $\gamma \in \mathbb{R}^m$ such that $\mathcal{K} \subset \{ x \in \mathbb{R}^m \mid \gamma^\top x \geq 0\}$ and $\mathcal{K} \cap \{ x \in \mathbb{R}^m \mid \gamma^\top x = 0\} = \{0\}$. Consider the hyperplane $\mathcal{H} = \{ x \in \mathbb{R}^m \mid \gamma^\top x = 1\}$. The set $\mathcal{H} \cap \mathcal{K}$ consists of more than just one point, and is a bounded section of $\mathcal{K}$. Therefore, $\mathcal{H} \cap \mathcal{K}$ is an $(m-1)$-dimensional polytope, whose vertices are determined by the generators of $\mathcal{K}$. Now, consider a triangulation $\mathcal{T}$ of $\mathcal{H} \cap \mathcal{K}$. Every simplex $S_i \in \mathcal{T}$ gives rise to a simplicial cone $\mathcal{K}_i = \cone(S_i)$. These simplicial cones, by construction, triangulate $\mathcal{K}$. \Halmos
\endproof
\section{Proof of \cref{lem:hyperplaneequation} and \cref{lemma:remaining_conditions}}\label{appendix:proof_lemmas}
The equation of a hyperplane can be derived from computing determinants of the form \begin{equation*}\label{eq:determinant} \det\begin{pmatrix} 1 & \ldots & 1 & 1\\ v'_m(t_{i_1}) & \ldots & v'_m(t_{i_m}) & y \end{pmatrix}. \end{equation*} We present results that link the determinant above to the determinant that defines the facets of the cyclic polytope, and where the Vandermonde determinant shows up. We start by stating the known result that the Vandermonde matrix is totally positive. We then show that the sub-determinants of $A$ have the same absolute value as the sub-determinants of the Vandermonde matrix.
\begin{claim}\label{clm:totalpositivity} The Vandermonde matrix \begin{equation*} B = \begin{pmatrix} 1 & \ldots & 1\\ v_m(t_1) & \ldots & v_m(t_n) \end{pmatrix} \end{equation*} is totally positive, i.e., all square submatrices of size at most $m+1$ have strictly positive determinants. \end{claim}
\begin{proof} \cite{fekete1912problem} prove that a sufficient condition for total positivity is that all solid minors have positive determinants.
A minor is called solid if the indices of its rows and columns are consecutive. When this is applied to the Vandermonde matrix, the positivity of solid minors follows from the formula for the Vandermonde determinant, after factoring out of each column the appropriate (positive) power of $t$. \end{proof}
\begin{claim}\label{clm:determinant} Let $0 < t_{j_1} < \ldots < t_{j_{m+1}}$ with $j_1<\cdots<j_{m+1}$. We have \begin{equation} \label{eq:sign} \det\begin{pmatrix} 1 & \ldots & 1\\ v'_m(t_{j_1}) & \ldots & v'_m(t_{j_{m+1}}) \end{pmatrix} = (-1)^{\lfloor \frac{m}{2}\rfloor} \det \begin{pmatrix} 1 & \ldots & 1\\ v_m(t_{j_1}) & \ldots & v_m(t_{j_{m+1}}) \end{pmatrix}. \end{equation} \end{claim}
\begin{proof} The matrix on the left of \cref{eq:sign} can be obtained from the matrix on the right through a series of elementary operations. First, we multiply exactly $\lfloor \frac{m}{2}\rfloor$ rows by $-1$ (the rows corresponding to the even powers $t^2, t^4, \ldots$); this multiplies the determinant by $(-1)^{\lfloor \frac{m}{2}\rfloor}$. Then we multiply the first row (the row of ones) by $M$ and add it to these rows. This last operation does not change the determinant. \end{proof}
\begin{claim}\label{clm:beta} Let $0 < t_1 < \ldots < t_{m}$. Then \begin{equation*} \mathrm{sign} \det\begin{pmatrix} v'_m(t_1) & \ldots & v'_m(t_{m}) \end{pmatrix} = \mathrm{sign}\, (-1)^{\lfloor \frac{m}{2}\rfloor}. \end{equation*} \end{claim}
\begin{proof} Let $0 < t_0 < t_1$, and let $D = \det\begin{pmatrix} v'_m(t_1) & \ldots & v'_m(t_{m}) \end{pmatrix}$. By expanding the following determinant along its first column we establish that: \begin{align*} \det\begin{pmatrix} 1 & 1 & \ldots & 1\\ v'_m(t_0) & v'_m(t_1) & \ldots & v'_m(t_{m}) \end{pmatrix} & = D - t_0 \cdot \det \begin{pmatrix} 1 & \ldots & 1\\ M - t_1^2 & \ldots & M - t_m^2\\ t_1^3 & \ldots & t_m^3\\ \vdots & \ldots & \vdots \end{pmatrix} \\ & \quad + (M-t_0^2)\cdot \det \begin{pmatrix} 1 & \ldots & 1\\ t_1 & \ldots & t_m\\ t_1^3 & \ldots & t_m^3\\ M - t_1^4 & \ldots & M - t_m^4\\ \vdots & \ldots & \vdots\end{pmatrix} + \cdots \end{align*} By a similar argument to \cref{clm:determinant}, we see that \begin{equation*} \det \begin{pmatrix} 1 & \ldots & 1\\ M - t_1^2 & \ldots & M - t_m^2\\ t_1^3 & \ldots & t_m^3\\ \vdots & \ldots & \vdots \end{pmatrix} = (-1)^{\lfloor \frac{m}{2} \rfloor} \det \begin{pmatrix} 1 & \ldots & 1\\ t_1^2 & \ldots & t_m^2\\ t_1^3 & \ldots & t_m^3\\ \vdots & \ldots & \vdots \end{pmatrix} \end{equation*} and \begin{equation*} \det \begin{pmatrix} 1 & \ldots & 1\\ t_1 & \ldots & t_m\\ t_1^3 & \ldots & t_m^3\\ M - t_1^4 & \ldots & M - t_m^4\\ \vdots & \ldots & \vdots\end{pmatrix} = -(-1)^{\lfloor \frac{m}{2} \rfloor} \det \begin{pmatrix} 1 & \ldots & 1\\ t_1 & \ldots & t_m\\ t_1^3 & \ldots & t_m^3\\ t_1^4 & \ldots & t_m^4\\ \vdots & \ldots & \vdots\end{pmatrix}. \end{equation*} Using the total positivity from \cref{clm:totalpositivity}, \begin{equation}\label{eq:sign-second} \det\begin{pmatrix} 1 & 1 & \ldots & 1\\ v'_m(t_0) & v'_m(t_1) & \ldots & v'_m(t_{m}) \end{pmatrix} = D -(-1)^{\lfloor \frac{m}{2} \rfloor} t_0 \lambda_1 -(-1)^{\lfloor \frac{m}{2} \rfloor} (M-t_0^2) \lambda_2-(-1)^{\lfloor \frac{m}{2} \rfloor} t_0^3 \lambda_3 - \cdots \end{equation} where $\lambda_i > 0$ for $i \in [m]$. By \cref{clm:determinant}, the sign of the determinant on the left of \cref{eq:sign-second} is equal to the sign of $(-1)^{\lfloor \frac{m}{2} \rfloor}$. Therefore, by isolating $D$ in \cref{eq:sign-second}, $D$ can be expressed as the sum of $m+1$ terms all of sign equal to $(-1)^{\lfloor \frac{m}{2} \rfloor}$.
Therefore, the sign of $D$ is equal to $(-1)^{\lfloor \frac{m}{2} \rfloor}$. \end{proof}
We are now ready to present the proof of \cref{lem:hyperplaneequation}.
\proof{\emph{Proof of \cref{lem:hyperplaneequation}}.}
Write $C = \{i_1,\ldots,i_m\}$ with $i_1 < \cdots < i_m$. By Laplace expansion along the last column of the determinant in \cref{eq:hyperplane}, and after subtracting $M$ times the first row from the rows corresponding to even powers, we get, for any $k \in [m]$, \[ \alpha_k = (-1)^{k+m} \det\begin{pmatrix} 1 & \ldots & 1\\ (-1)^{1+1}t_{i_1} & \ldots & (-1)^{1+1}t_{i_m}\\ \vdots & \ldots &\vdots\\ (-1)^{k}t_{i_1}^{k-1} & \ldots & (-1)^{k}t_{i_m}^{k-1}\\ (-1)^{k+2}t_{i_1}^{k+1} & \ldots & (-1)^{k+2}t_{i_m}^{k+1}\\ \vdots & \ldots &\vdots\\ (-1)^{m+1}t_{i_1}^{m} & \ldots & (-1)^{m+1}t_{i_m}^{m}\\ \end{pmatrix} = (-1)^{k+m} (-1)^{\lfloor\frac{m}{2}\rfloor + k+1} \det\begin{pmatrix} 1 & \ldots & 1\\ t_{i_1} & \ldots & t_{i_m}\\ \vdots & \ldots &\vdots\\ t_{i_1}^{k-1} & \ldots & t_{i_m}^{k-1}\\ t_{i_1}^{k+1} & \ldots & t_{i_m}^{k+1}\\ \vdots & \ldots &\vdots\\ t_{i_1}^{m} & \ldots & t_{i_m}^{m}\\ \end{pmatrix},\] where the determinant on the far right is a minor of the Vandermonde matrix and is therefore positive by \cref{clm:totalpositivity}. Hence, for $k \in [m]$: \[ \mbox{sign}(\alpha_k) = (-1)^{k+m} (-1)^{\lfloor\frac{m}{2}\rfloor + k+1} = (-1)^{\lfloor\frac{m}{2}\rfloor + m+1}.\] By Laplace expansion we also get \[ \beta = (-1) \cdot (-1)^{m} \det\begin{pmatrix} v'_m(t_{i_1}) & \ldots & v'_m(t_{i_m}) \end{pmatrix}.\] By \cref{clm:beta}, $\mbox{sign}(\det\begin{pmatrix} v'_m(t_{i_1}) & \ldots & v'_m(t_{i_m}) \end{pmatrix}) = (-1)^{\lfloor \frac{m}{2} \rfloor}$ and, therefore, \[ \mbox{sign}(\beta) = (-1)^{\lfloor \frac{m}{2} \rfloor + m + 1}.\Halmos \]
\endproof
\proof{\emph{Proof of \cref{lemma:remaining_conditions}}.}
Let $F = \{i_1, \ldots, i_m\}$ with $i_1 < \cdots < i_m$ be a facet with gap parity $g = g(F)$ satisfying $g \not\equiv m \ (\text{mod } 2)$. We show that $F$ is an inequality cell by exhibiting a vector $y$ that satisfies \cref{def:inequalitycell}; by \cref{lem:inequalitycell}, this implies that every non-empty subset of $F$ is a loadout. Let $y = \alpha/\beta$, where $\alpha_1 y_1 + \ldots + \alpha_m y_m - \beta = 0$ is the equation of the hyperplane through the points $\{v'_m(t_i)\}_{i \in F}$, as in \cref{lem:hyperplaneequation}. By \cref{lem:hyperplaneequation}, $\beta$ and the $\alpha_i$ have the same sign, and by the total positivity of the Vandermonde matrix, $\beta \neq 0$ and $\alpha_i \neq 0$ for $i \in [m]$. Therefore, \[ y_i > 0, \ \ \forall i \in [m]. \] For $i \in F$, \[ y^\top v'_m(t_i) = \frac{\alpha^\top v'_m(t_i)}{\beta} = \frac{\beta}{\beta} = 1 = c_i.\] Now, let $i \not\in F$. We have \begin{align*} y^\top v'_m(t_i) & = \frac{1}{\beta}\big( \alpha^\top v'_m(t_i) - \beta\big) + 1 \\ & = \frac{1}{\beta}\det\begin{pmatrix} 1 & \ldots & 1 & 1\\ v'_m(t_{i_1}) & \ldots & v'_m(t_{i_m}) & v'_m(t_i) \end{pmatrix} + 1 \\ & = \frac{1}{\beta} (-1)^g (-1)^{\lfloor \frac{m}{2} \rfloor} D+ 1, \end{align*} where \[ D = \det\begin{pmatrix} 1 & \ldots & 1 & \ldots & 1\\ v_m(t_{i_1}) & \ldots & v_m(t_i) & \ldots & v_m(t_{i_m}) \end{pmatrix}> 0 \] and the column corresponding to $i$ is inserted in the correct increasing order between $i_1$ and $i_m$. In the third equality we used \cref{clm:determinant} together with the fact that the permutation that puts $i$ in the correct order has parity $g$, by definition of $F$ and $g$. Therefore, to show that $y^\top v'_m(t_i)>1$ we only need to show that $\frac{1}{\beta} (-1)^{g+ \lfloor \frac{m}{2} \rfloor} D >~0$.
By \cref{lem:hyperplaneequation} and \cref{clm:beta}, \[ \beta = (-1)^{\lfloor \frac{m}{2} \rfloor + m + 1} \det\begin{pmatrix} 1 & \ldots & 1\\ v_m(t_{i_1}) & \ldots & v_m(t_{i_m}) \end{pmatrix} = (-1)^{\lfloor \frac{m}{2} \rfloor + m + 1} E,\] where $E = \det\begin{pmatrix} 1 & \ldots & 1\\ v_m(t_{i_1}) & \ldots & v_m(t_{i_m}) \end{pmatrix} > 0.$ Hence \[ \frac{1}{\beta} (-1)^{g+ \lfloor \frac{m}{2} \rfloor} D = (-1)^{g+ \lfloor \frac{m}{2} \rfloor} (-1)^{\lfloor \frac{m}{2} \rfloor + m + 1} \frac{D}{E} = (-1)^{g+m+1} \frac{D}{E} > 0,\] where the last inequality stems from $g + m \equiv 1 \ (\text{mod } 2)$, and from $D > 0$, $E > 0$. Therefore, $y$ satisfies the following conditions \begin{alignat*}{4} y_i & > & 0 & , \quad \forall \ i \in [m],\\ y^\top A_j & = & c_j & , \quad \forall \ j \in F\\ y^\top A_j & > & c_j & , \quad \forall \ j \not\in F. \end{alignat*} This shows that $y$ satisfies \cref{def:inequalitycell}, and implies that $F$ is a loadout by \cref{lem:inequalitycell}.\Halmos \endproof \begin{claim}\label{lem:degeneracy} If (P) has multiple optimal solutions, then every optimal basic solution to (D) is degenerate. \end{claim} \proof{\emph{Proof}.} We show that if (D) has a nondegenerate optimal solution, then $(P)$ will have a unique optimal solution. Assume $y$ is a nondegenerate dual optimal solution; by the definition of a dual basic feasible solution, it satisfies exactly $m$ linearly independent active constraints. \begin{alignat*}{4} y_i & = & \ 0, \ & \quad \forall \ i \in M_1,\\ (c_j - y^\top A_j) & = & \ 0, \ & \quad \forall \ j \in M_2\\ |M_1| + |M_2| & = & m. & \end{alignat*} Consider an optimal primal solution $x$. The solution $x$ must satisfy the complementary slackness conditions. Consider $j \in [n] \setminus M_2$. We have $(c_j - y^\top A_j) > 0$ so we must have $x_j = 0$. Note also that $y_i \neq 0$ for $i \not\in M_1$. Therefore, $(a_i^\top x-b_i) = 0$ for $i \not\in M_1$. This forms $m-|M_1| + n - |M_2| = n$ linearly independent constraints and, therefore, an $n \times n$ matrix that uniquely determines $x$.\Halmos \endproof \section{Exact Tight Constructions for $m=3$ and $m=2$}\label{appx:exact_construction} For $m= 3$ and $n>m$, \cref{thm:upperbound} establishes that $ \mathcal{L}^3(A,c) \leq 2n-5$ and $\mathcal{L}^2(A,c) \leq 3n-6$ for every design $(A,c)$. We now provide a construction of a design that matches both upper bounds. \lowerboundSmallM* \proof{\emph{Proof}.} Let $n>m=3$ and consider the following (inequality) design \begin{align*} c^\top &=\begin{pmatrix} 1 &1 &\sqrt{\frac{2}{3}} &\sqrt{\frac{2}{4}} &\cdots &\sqrt{\frac{2}{n}} \end{pmatrix} \in \mathbb{R}^n \end{align*} \begin{align*} A &=\begin{pmatrix} 1 &0 &\frac{1}{3} &\frac{1}{4} &\cdots &\frac{1}{n} \\ 0 &1 &\frac{1}{3} &\frac{1}{4} &\cdots &\frac{1}{n} \\ 1 &1 &1 &1 &\cdots &1 \end{pmatrix} \in \mathbb{R}^{3 \times n}. \end{align*} We index the columns of $c$ and $A$ by $1,\ldots,n$ from left to right. We claim that all of the following $2n-5$ subsets of indices are inequality cells: \begin{itemize} \item $\{1,j,j+1\}$ for all $j=3,\ldots,n-1$ ($n-3$ loadouts) \item $\{2,j,j+1\}$ for all $j=3,\ldots,n-1$ ($n-3$ loadouts) \item $\{1,2,3\}$ (1 loadout) \end{itemize} By \cref{lem:inequalitycell}, this will imply that the design $(A,c)$ has $2n-5$ loadouts of size 3, and $n-1 + n-2 + n-3 = 3n -6$ loadouts of size 2. Note that the loadouts of size 2 are as follows: \begin{itemize} \item $\{1,j\}$ for all $j=2,\ldots,n$ ($n-1$ loadouts) \item $\{2,j\}$ for all $j=3,\ldots,n$ ($n-2$ loadouts).
\item $\{j,j+1\}$ for all $j=3,\ldots,n-1$ ($n-3$ loadouts) \end{itemize} Consider $j \in \{3,\ldots,n-1\}$. To show that $\{1,j,j+1\}$ is a loadout, we show that $\{1,j,j+1\}$ is an inequality cell by solving the system \begin{alignat}{4}\label{eq:system} y_i & > & 0 & , \quad \forall \ i \in \{1,2,3\};\\ \nonumber y^\top A_{\ell} & = & c_{\ell} & , \quad \forall \ \ell \in \{1,j,j+1\};\\ \nonumber y^\top A_{\ell} & > & c_{\ell} & , \quad \forall \ \ell \not\in \{1,j,j+1\}. \end{alignat} The three equalities of \eqref{eq:system} translate to \begin{align*} y_1 +y_3 &=1 \\ y_1 +y_2 +jy_3 &=\sqrt{2j} \\ y_1 +y_2 +(j+1)y_3 &=\sqrt{2(j+1)} \end{align*} By solving for $y$, \begin{align*} y_3 & = \sqrt{2(j+1)}-\sqrt{2j} > 0\\ y_1 & = 1 - (\sqrt{2(j+1)}-\sqrt{2j}) > 0\\ y_2 & = \sqrt{2(j+1)} - 1 - j(\sqrt{2(j+1)}-\sqrt{2j}) > 0 \end{align*} Now take any $\ell=3,\ldots,n$; we show that $y^\top A_{\ell}\ge c_{\ell}$ with equality if and only if $\ell=j$ or $\ell=j+1$. Consider $\ell \in \{3,\ldots, n\}$; then \begin{align} y^\top A_{\ell} \geq c_{\ell} & \Longleftrightarrow \sqrt{2j}-jy_3+\ell y_3-\sqrt{2\ell} \ge0 \nonumber \\ & \Longleftrightarrow (\ell-j)(\sqrt{2(j+1)}-\sqrt{2j}) \ge\sqrt{2\ell}-\sqrt{2j} \nonumber \\ & \Longleftrightarrow (\ell-j)(\sqrt{j+1}-\sqrt{j}) \ge\sqrt{\ell}-\sqrt{j}. \label{eq:rhs_3} \end{align} It is clear that \eqref{eq:rhs_3} is an equality when $\ell = j, j+1$. Suppose $\ell>j+1$; then the right-hand side of \eqref{eq:rhs_3} can be written as \[(\sqrt{\ell}-\sqrt{\ell-1})+(\sqrt{\ell-1}-\sqrt{\ell-2})+\cdots+(\sqrt{j+1}-\sqrt{j}).\] There are $\ell-j$ terms in parentheses. All these terms are at most $\sqrt{j+1}-\sqrt{j}$, and at least one of them is strictly less than $\sqrt{j+1}-\sqrt{j}$. Therefore the inequality \eqref{eq:rhs_3} is strict and $y^\top A_{\ell} > c_{\ell}$ when $\ell > j+1$. When $\ell < j$, \eqref{eq:rhs_3} is equivalent to \[ (j-\ell)(\sqrt{j+1}-\sqrt{j}) \le \sqrt{j}- \sqrt{\ell}.\] The right-hand side of the last inequality can be written as \[(\sqrt{j}-\sqrt{j-1})+(\sqrt{j-1}-\sqrt{j-2})+\cdots+(\sqrt{\ell+1}-\sqrt{\ell}).\] There are $j-\ell$ terms in parentheses, and each of them is strictly greater than $\sqrt{j+1}-\sqrt{j}$. Therefore the inequality \eqref{eq:rhs_3} is strict and $y^\top A_{\ell} > c_{\ell}$ when $\ell < j$. Finally, we must check the case where $\ell=2$. We have \begin{align*} y^\top A_{2} > c_{2} & \Longleftrightarrow y_2 + y_3 > 1 \Longleftrightarrow \sqrt{2(j+1)}-2 > (j-1)\big(\sqrt{2(j+1)}-\sqrt{2j}\big). \end{align*} Writing $\sqrt{2(j+1)}-2 = \big(\sqrt{6}-2\big) + \sum_{k=3}^{j}\big(\sqrt{2(k+1)}-\sqrt{2k}\big)$, each of the $j-2$ terms in the sum is at least $\sqrt{2(j+1)}-\sqrt{2j}$, and $\sqrt{6}-2 > \sqrt{8}-\sqrt{6} \geq \sqrt{2(j+1)}-\sqrt{2j}$ since the increments $\sqrt{2(k+1)}-\sqrt{2k}$ decrease in $k$ and $j \geq 3$. Hence $\sqrt{2(j+1)}-2 > (j-1)\big(\sqrt{2(j+1)}-\sqrt{2j}\big)$, and the inequality holds. This shows that $\{1,j,j+1\}$ is an inequality cell. Arguing that $\{2,j,j+1\}$ is an inequality cell for $j \in \{3,\ldots,n-1\}$ can be done symmetrically. To see that $\{1,2,3\}$ is an inequality cell, we solve the system $ y^\top A_{\ell} = c_{\ell},\ \ \forall \ \ell \in \{1,2,3\}$, which is equivalent to \begin{align*} y_1 +y_3 &=1 \\ y_2 +y_3& =1 \\ y_1 +y_2 +3y_3 &=\sqrt{6} \end{align*} Solving this system yields \begin{align*} y_1 & = 3 - \sqrt{6} > 0\\ y_2 & =3 - \sqrt{6} > 0\\ y_3 & = \sqrt{6}-2> 0 \end{align*} Now, consider $\ell \in \{4,\ldots,n\}$, in which case \begin{align} y^\top A_{\ell} > c_{\ell} & \Longleftrightarrow y_1 + y_2 + \ell y_3 > \sqrt{2\ell} \nonumber\\ & \Longleftrightarrow \sqrt{6}+(\ell -3)(\sqrt{6}-2)-\sqrt{2\ell} > 0. \label{eq:sepcial_cell} \end{align} To see that the last inequality is true, we study the function $x \mapsto f(x) = \sqrt{6}+(x -3)(\sqrt{6}-2)-\sqrt{2x}$ for $x \geq 4$.
The derivative of $f$ is \[ f'(x) = \sqrt{6}-2 - \frac{1}{\sqrt{2x}}.\] It is easy to see that $f'(x) > 0 $ for $x \geq 4$. Therefore $f$ is increasing over $[4,\infty)$. Furthermore, $f(4) > 0$. This implies that $f(x) > 0$ for $x \geq 4,$ and that \eqref{eq:sepcial_cell} is true for $\ell \in \{4,\ldots,n\}$. We, therefore, conclude that $\{1,2,3\}$ is an inequality cell for the design $(A,c)$.\Halmos \endproof For $m= 2$ and $n>m$, \cref{thm:upperbound} establishes that $ \mathcal{L}^2(A,c) \leq n-1$ for every design $(A,c)$. We provide a construction of a design that matches this upper bound. \lowerboundSmallMtwo* \proof{\emph{Proof}.} Let $n>m=2$ and consider the following (inequality) design \begin{align*} c^\top &=\begin{pmatrix} 1 &2 &\cdots &n \end{pmatrix} \in \mathbb{R}^n \end{align*} \begin{align*} A &=\begin{pmatrix} 1^2 &2^2 &\cdots &n^2 \\ 1 &1 &\cdots &1 \end{pmatrix} \in \mathbb{R}^{2 \times n}. \end{align*} We claim that all of the $n-1$ subsets of indices of the form $\{j,j+1\}$ with $j \in \{1,\ldots,n-1\}$ are inequality cells. Consider $j \in \{1,\ldots,n-1\}$. To show that $\{j,j+1\}$ is an inequality cell, we solve the system \begin{alignat}{4}\label{eq:system_two} y_i & > & 0 & , \quad \forall \ i \in \{1,2\};\\ \nonumber y^\top A_{\ell} & = & c_{\ell} & , \quad \forall \ \ell \in \{j,j+1\};\\ \nonumber y^\top A_{\ell} & > & c_{\ell} & , \quad \forall \ \ell \not\in \{j,j+1\}. \end{alignat} The two equalities of \eqref{eq:system_two} translate to \begin{align*} y_1\cdot j^2 +y_2 &=j \\ y_1\cdot (j+1)^2 +y_2 &= j+1 \end{align*} By solving for $y$, \begin{align*} y_1 & = \frac{1}{2j+1} > 0\\ y_2 & =\frac{j^2 + j}{2j+1} > 0 \end{align*} Now take any $\ell\in [n] \setminus \{j,j+1\}$; we show that $y^\top A_{\ell}> c_{\ell}$. \begin{align} y^\top A_{\ell} > c_{\ell} & \Longleftrightarrow y_1 \ell^2 + y_2 > \ell \nonumber \\ & \Longleftrightarrow \frac{\ell^2 + j^2 +j}{2j+1} > \ell. \label{eq:rhs_32} \end{align} To see that the last inequality is true, we study the function $x \mapsto f(x) = \frac{x^2 + j^2 +j}{2j+1}-x$ over $[1,n]$. The derivative of $f$ is \[ f'(x) = \frac{2x}{2j+1}-1.\] It is easy to see that $f'(x) < 0 $ for $x \leq j$ and $f'(x) > 0$ for $x \geq j+1$. Therefore $f$ is decreasing over $[1,j]$ and increasing over $[j+1,n]$. Furthermore, $f(j) = f(j+1) = 0$. This implies that $f(\ell) > 0 $ for $\ell \in \{1, \ldots, j-1,j+2,\ldots,n\}$, which proves \eqref{eq:rhs_32}. We, therefore, conclude that $\{j,j+1\}$ is an inequality cell for the design $(A,c)$.\Halmos \endproof \end{APPENDIX} \ACKNOWLEDGMENT{Thanks to Jesus De Loera for some enlightening guidance on triangulations and to Xiao Lei for some early useful feedback. The second author would like to thank combinatorialist Steven Karp for insightful discussions surrounding the cyclic polytope. The third author would like to thank Paul Tozour for introducing him to the diversity optimization problem in game design. The third author's research is supported by a Discovery Grant of the Natural Sciences and Engineering Research Council of Canada and an exploratory research grant from the UBC Sauder School of Business.} \end{document}
Substructure-based neural machine translation for retrosynthetic prediction

Umit V. Ucak, Taek Kang, Junsu Ko & Juyong Lee (ORCID: 0000-0003-1174-4358)

Journal of Cheminformatics, volume 13, Article number: 4 (2021)

Abstract

With the rapid improvement of machine translation approaches, neural machine translation has started to play an important role in retrosynthesis planning, which finds reasonable synthetic pathways for a target molecule. Previous studies showed that utilizing the sequence-to-sequence frameworks of neural machine translation is a promising approach to tackle the retrosynthetic planning problem. In this work, we recast the retrosynthetic planning problem as a language translation problem using a template-free sequence-to-sequence model. The model is trained in an end-to-end and a fully data-driven fashion. Unlike previous models translating the SMILES strings of reactants and products, we introduced a new way of representing a chemical reaction based on molecular fragments. It is demonstrated that the new approach yields better prediction results than current state-of-the-art computational methods. The new approach resolves the major drawbacks of existing retrosynthetic methods such as generating invalid SMILES strings. Specifically, our approach predicts highly similar reactant molecules with an accuracy of 57.7%. In addition, our method yields more robust predictions than existing methods.

Although knowledge in organic chemistry has accumulated over decades, designing an efficient synthetic route for a target molecule remains a crucial task in organic synthesis [1]. The retrosynthetic approach suggests a logical synthetic route to generate a target molecule from a set of available reactants and reagents [2,3,4]. This approach is both iterative and recursive in nature since a sequential computation of retrosynthetic transformation is required. Retrosynthetic transformation occurs recursively until much simpler and commercially available molecules are identified. Computational retrosynthetic analysis was initially formalized in 1969 by Corey and Wipke in an algorithmic manner [5]. The algorithm considers all possible disconnections with known reaction types, which reduce the complexity of a product, and progresses until chemically reasonable pathways are identified. Such disconnections were based on handcrafted minimal transformation rules known as reaction templates [5,6,7]. Manual encoding of those transformation rules necessitates deep chemical expertise and intuition. Manual management of synthetic knowledge is a highly complicated task considering the large number of transformation rules (> 10,000) that must be hand-coded [8,9,10,11]. Furthermore, being dependent on reaction templates potentially limits prediction accuracy, particularly if a reaction is outside of the template domain. Later studies offer valuable help to chemists in finding better routes faster by enabling automated extraction of reaction templates [12,13,14,15,16,17,18]. However, they do not address the above-mentioned limitations inherited from their predecessors. Computer-aided synthesis planning has been well summarized in many recent reviews [19,20,21,22,23,24]. The Reaction Predictor developed by Kayala et al. [25, 26] was the first template-free approach. It was a mechanistic-level strategy that merges the ideas of rule-based modeling and machine learning within its framework. Jin et al.
[27] proposed a novel template-free, entirely data-driven approach based on the Weisfeiler-Lehman networks [28]. Both approaches provide end-to-end solutions to generate candidate products. Theoretical findings provided by Cadeddu et al. [29] have further motivated the development of other template-free methods for the forward- or retro-reaction prediction tasks using various types of neural machine translation (NMT) architectures [30,31,32,33,34,35,36,37,38]. Based on an explicit analogy between sentences in a language corpus and molecules in a chemical corpus, i.e., chemical space, Cadeddu et al. showed that the rank-frequency distributions of substructures as the building blocks of molecules are similar to those of words in a natural language corpus. This verification implies that the concepts of linguistic analysis are readily applicable to tackle the problems of forward- and retro-reaction prediction. In this context, retrosynthetic prediction is well suited to the sequence-to-sequence framework [39,40,41] of machine translation. Sequence-to-sequence learning uses a recurrent neural network (RNN) layer to map a source sequence of an arbitrary length into a fixed-dimensional context vector consisting of real numbers. The context vector contains information about the syntactic and semantic structure of the source sequence. In connection with this RNN layer, another RNN decodes the context vector to a target sequence. In this regard, the two RNN units together act as an encoder–decoder pair. Sutskever et al. [41] showed that long short-term memory (LSTM) [42]-based architectures can solve general sequence-to-sequence problems because of their ability to handle long-range relations in sequences. Liu et al. [34] proposed the first multi-layered LSTM-based sequence-to-sequence model for retrosynthetic prediction. Its gated recurrent unit (GRU) [39] variant was proposed by Nam and Kim [32] for forward reaction prediction. Recently, the best-performing NMT models include an attention mechanism [40, 43] as a part of their neural architectures to enhance their performance on longer sentences [27, 32,33,34]. There are also retrosynthetic predictors built on the Transformer architecture [31, 37, 44,45,46], based solely on the attention mechanism. Encoder–decoder models, especially once an attention mechanism is introduced, all employ similar strategies to handle a translation task. The SMILES representations of molecular structures are typical inputs for sequence-to-sequence models. However, none of the previously reported models has focused on translation at a substructural (fragment) level. In this paper, we propose a template-free approach for retrosynthetic reaction prediction by learning the chemical change at a substructural level. Our approach represents a molecule as a sentence built from a set of substructures, each corresponding to a word, using the MACCS keys [47]. We also present a unique tokenization scheme that properly eliminates problematic issues originating from SMILES-based tokenization. Our model consists of bidirectional LSTM cells [48], and is trained in a fully data-driven and end-to-end fashion without prior reaction class information. We thoroughly discuss all the aspects of our methodology, including dataset and descriptor curation steps. Evaluation results are presented based on three datasets derived from the United States Patent and Trademark Office (USPTO) reaction dataset [49]. This paper is organized as follows.
In the "Method" section, we suggest a new way of tokenization followed by curation together with the analysis of the dataset and descriptor. We briefly describe the model architecture and evaluation procedure for accuracy calculations. In the "Results and discussion" section, the results of a set of translation experiments are discussed with an emphasis on the benefits of the MACCS key-based molecular representation. Finally, the strengths and limitations of our approach are concluded in the "Conclusion" section.

Method

In this study, we used the filtered US patent reaction dataset, USPTO, which was obtained with a text-mining approach [49, 50]. Schwaller et al. [33] eliminated the duplicated reaction strings in the dataset without atom-mapping. They also removed 780 reactions due to SMILES canonicalization failures with RDKit [51]. The inherent limitation of the data is that the vast majority of entries are single-product reactions. Thus, only single-product cases, corresponding to 92% of the dataset, are used in this study. The SMILES line notation [52] represents molecular structures as a linear sequence of letters, numbers, and symbols. Hence, from a linguistic perspective, SMILES can be regarded as a language with grammatical specifications. However, in our approach, molecules are represented as a set of fragments using the MACCS keys consisting of 166 pre-defined substructures [47]. This binary bit-based molecular descriptor converts a molecule into a 166-bit vector, in which each bit indicates the presence of a feature taken from a predefined dictionary of SMARTS patterns [53].

Descriptor curation

In our approach, a molecule is represented as a set of fragments using the MACCS keys. The number of occurrences of each MACCS key in our dataset was investigated. Also, we compared the results obtained for 1 million randomly sampled drug-like small molecules, a subset of the Generated Database-13 (GDB-13) consisting of 975 million molecules [54, 55]. Figure 1 shows the normalized frequency distributions of the MACCS keys on both databases. A direct pairwise comparison rationalizes reducing the number of MACCS keys (Fig. 1). In this study, five keys that never occurred and nine keys that are not frequently observed in the USPTO database are omitted. Based on the comparison, an additional 26 keys that are never or hardly ever observed in the GDB-13 database are also excluded.

Fig. 1 Descriptor curation based on the rate of occurrences. The filtered US patent reaction dataset and 1 million randomly sampled drug-like small molecules, as a subset of the enumerated database (GDB-13), are compared to investigate the MACCS key probability distribution profiles

Molecules belonging to different compound databases, such as drug-like or natural products, exhibit different characteristics in their fingerprint profiles. Thus, we narrowed our analysis to drug-like molecules and modified our fingerprint representation accordingly by tuning it with 1 million drug-like small molecules in GDB-13. Removing redundant keys based on the occurrence analysis has apparent advantages. It shortens the lengths of source and target sentences and provides a better rank distribution of the keys used in the translation process. In our approach, every molecule is represented by 126 MACCS keys, which adequately represent 98% of the 1 million randomly sampled molecules in the GDB-13 subset. In the machine translation tasks that chemists deal with, source and target molecules are placeholders corresponding to reactants and products interchangeably.
The selection is dependent on the intended analysis. For a retrosynthetic prediction task, source and target sentences refer to products and reactants, respectively.

Reaction preprocessing

Our model considers only the non-zero indices of curated MACCS keys. English letters were assigned to the ranked non-zero MACCS keys based on their ranks of frequencies to form unique artificial "words". This further encoding transforms product and reactant sentences into the frequency-based sorted version of the lettered keys, which implies position-wise information of the words and makes our scheme suitable for the sequence-to-sequence architecture. Single-lettered words were generated using the upper- and lower-cases of the most frequent 21 letters in English. Double-lettered words were constructed by adding "x" and "z" for every 42 single letters, which allowed us to cover all 126 MACCS keys. Thus, our lettered fragment vocabulary has a fixed length of 126. The generation process of an example product–reactant pair is illustrated in Fig. 2. The same procedure was applied to all reactions of the dataset. The complete mappings of the MACCS keys to artificial words are listed in Additional file 1.

Fig. 2 Data preparation procedure to obtain product and reactant sentences for a retrosynthetic prediction task

The MACCS non-zero indices serve as good tokens and inputs for an LSTM model. The model further encodes the products and reactants into a "language representation" by assigning one or two letters to each index in the MACCS keys. Applying this further encoding is efficient, particularly given the relatively small size of the curated MACCS key set. It gives a rank order, enhances readability, and provides visual comprehension.

Reaction dataset curation

The product–reactant pair dataset was further curated before being processed by our translation machine. After representing every molecule with the 126 truncated MACCS keys, a series of filters were applied to remove identical product–reactant pairs and internal twins. Internal twins are pairs of data entries whose product and reactant sentences are identical. They appeared whenever the chemical changes were beyond the sensitivity of our MACCS key-based representation. Because we associate molecules with MACCS keys to operate on a substructural subspace, a certain amount of information is lost. Our preprocessing procedure resulted in 5748 internal twins, and they were removed from our dataset. In addition, the reactions with three or more reactants were excluded. The maximum pair length was set to 100 to avoid lengthy fragment sequences, as shown in Additional file 2: Figure S1. The product–reactant pairs were then put into an injective map generator to guarantee one-to-one correspondence between product and reactant sentences. If a reactant sentence is composed of two reactants, we sorted them in descending order according to their sequence length. Reactants were separated by the "–" sign. The curated dataset, containing a total of 352,546 product–reactant pairs, was further subdivided by the number of reactant molecules in each pair into two disjoint subsets: single reactant and double reactant datasets. Organizing the dataset in this manner was essential to assess model performance independently. These datasets are freely available online, and curation steps along with the dataset sizes are summarized in Fig. 3.

Fig. 3 Dataset curation process and obtaining training/test pairs. P Product, R Reactant. Details of the different steps are given in the text
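To make the preprocessing described above concrete, the sketch below converts SMILES strings into lettered fragment sentences and assembles a product–reactant pair. It is an illustration only: the curated key list, the frequency-ranked letter assignment, and the reactant separator shown here are placeholders standing in for the real mapping listed in Additional file 1.

```python
from rdkit import Chem
from rdkit.Chem import MACCSkeys

# Placeholder curation and letter assignment; the authors' 126-key subset and the
# frequency-ranked mapping are those of Additional file 1, not the ones shown here.
CURATED_BITS = list(range(1, 127))                      # assumed stand-in for the retained keys
SINGLE = "ETAOINSHRDLCUMWFGYPBVetaoinshrdlcumwfgypbv"   # 21 letters, upper- and lower-case
WORDS = list(SINGLE) + ["x" + c for c in SINGLE] + ["z" + c for c in SINGLE]  # 126 words
BIT_TO_WORD = {bit: WORDS[rank] for rank, bit in enumerate(CURATED_BITS)}
WORD_RANK = {word: rank for rank, word in enumerate(WORDS)}

def to_sentence(smiles: str) -> str:
    """Encode a molecule as a rank-sorted sequence of lettered MACCS fragments."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Cannot parse SMILES: {smiles}")
    on_bits = [b for b in MACCSkeys.GenMACCSKeys(mol).GetOnBits() if b in BIT_TO_WORD]
    words = sorted((BIT_TO_WORD[b] for b in on_bits), key=WORD_RANK.get)
    return " ".join(words)

def reaction_pair(product_smiles: str, reactant_smiles: list) -> tuple:
    """Build a (source, target) pair; reactants are sorted by sentence length and joined by '-'."""
    reactants = sorted((to_sentence(s) for s in reactant_smiles), key=len, reverse=True)
    return to_sentence(product_smiles), " - ".join(reactants)
```

With such helpers, calling reaction_pair on a product SMILES and its reactant SMILES would yield the source and target fragment sentences that play the roles of input and output in the translation model.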
Model architecture

Our sequence-to-sequence neural network comprises two bidirectional LSTMs: one for an encoder and the other for a decoder. In addition, we used unidirectional LSTMs to quantify the improvement in the model's performance obtained with bidirectional LSTMs. The encoder and decoder layers were connected through Luong's global attention mechanism [56], which captures non-local relations between all elements of source sequences. The attention mechanism allows neural networks to focus on different parts of a source sentence and to consider non-linear relationships between words during a training process. The global attention mechanism used in this study is, in essence, similar to the first attention mechanism suggested by Bahdanau et al. [40] for machine translation tasks. The global approach focuses the "attention" on all the words in the source sentence to compute a global context vector for each target word at each time step in the decoder unit. Therefore, the global context vector represents the weighted sum over all the source hidden states. This context information leads to improved prediction accuracy. Our curated datasets were randomly split 9:1 to generate training and testing sets. The validation sets were randomly sampled from the training sets (10%). Word embeddings were used to represent lettered fragments in the vocabulary. After the embedding layer was created, a trainable tensor holding 126-dimensional fixed-length dense vectors was randomly initialized. A method of the embedding class then accessed the embedding of each word through a lookup on this tensor. We used the stochastic gradient descent algorithm [57] to train all parameters of the encoder–decoder model. The cross-entropy function was used as a loss function. For each dataset, we performed a series of tests within the hyper-parameter space described in Additional file 8: Table S1 to achieve optimal performance. Based on the preliminary experiments, we generated an encoder and a decoder with two Bi-LSTM layers containing 2000 hidden units at each layer. A dropout layer with a dropout rate of 0.1 was included following the hidden layer to avoid overfitting. To avoid a potential exploding gradient problem, we introduced gradient clipping [58] to guarantee that the norm of the gradients did not exceed a threshold (0.25) during backpropagation. The initial learning rate was set to 4.0, and it decayed with a factor of 0.85 every three epochs [33]. With these hyper-parameters, the average training speed was approximately 3300 words per second, with a batch size of 64 on a single NVIDIA RTX 2080Ti GPU card. Larger batch sizes were not tested due to memory constraints, which likewise apply to the hidden layer's size. We trained our models for a minimum of 30 epochs, and each epoch took about 2 h for the curated dataset consisting of 320 K sentence pairs. The details of our key hyper-parameters are available in Additional file 8: Table S1. Our model was implemented in Python version 3.6.8 together with PyTorch [59] version 1.3.0. The open-source RDKit module version 2020.03.1 [51] was utilized to obtain MACCS keys and similarity maps [60].

Evaluation procedure

Association coefficients such as the Tanimoto, Sörensen–Dice, and asymmetric Tversky indexes are considered efficient similarity measures for structural similarity benchmarks, and thus they are widely used.
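Returning briefly to the architecture described above, its encoder and attention wiring can be sketched in PyTorch as follows. This is a minimal illustration rather than the authors' released implementation: the layer count, hidden size, embedding dimension and dropout follow the text, while the vocabulary size (fragment words plus special tokens) and the multiplicative "general" form of the Luong score are assumptions.

```python
import torch
import torch.nn as nn

class FragmentEncoder(nn.Module):
    """Two-layer bidirectional LSTM over embedded fragment words (sizes follow the text)."""
    def __init__(self, vocab_size=130, emb_dim=126, hidden=2000, layers=2, dropout=0.1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden, num_layers=layers, dropout=dropout,
                           bidirectional=True, batch_first=True)

    def forward(self, src):                        # src: (batch, src_len) of word indices
        outputs, state = self.rnn(self.embed(src)) # outputs: (batch, src_len, 2 * hidden)
        return outputs, state

class LuongGlobalAttention(nn.Module):
    """Global attention: one softmax over every source position for each decoder step."""
    def __init__(self, dec_dim, enc_dim):
        super().__init__()
        self.proj = nn.Linear(enc_dim, dec_dim, bias=False)

    def forward(self, dec_hidden, enc_outputs):
        # dec_hidden: (batch, dec_dim); enc_outputs: (batch, src_len, enc_dim)
        scores = torch.bmm(self.proj(enc_outputs), dec_hidden.unsqueeze(2)).squeeze(2)
        weights = torch.softmax(scores, dim=1)      # attention over all source words
        context = torch.bmm(weights.unsqueeze(1), enc_outputs).squeeze(1)
        return context, weights                     # weighted sum of encoder hidden states
```

The context vector is then combined with the decoder state to predict the next fragment word; training with the cross-entropy loss, SGD, gradient clipping and the stated learning-rate schedule proceeds as in a standard sequence-to-sequence setup.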
To evaluate the performance of our retrosynthetic model, the Tanimoto coefficient was selected as a similarity metric, which is identified as one of the best metrics to compute structural similarity [61]. Pairwise similarities between the predicted sequences and ground truths of all test molecules were calculated. The Tanimoto coefficient (\(T_c\)) measured between two chemical structures has a value between 0 and 1. The coefficient is zero if molecules share no common fragments, while identical molecules have a Tanimoto coefficient of unity. Though these are the cases for the two ends of the Tanimoto similarity metric, there is no single criterion that defines similar and non-similar molecules. We defined three threshold values (0.50, 0.70, and 0.85) to assess the quality of translation experiments. The similarity between predicted and ground truth sentences was computed at the end of each epoch for every pair appearing in the validation set using the Tanimoto similarity measure (Eq. 1).

$$ T_{c}(\mathbf{R},\mathbf{P}) = \frac{\sum_{i} R_{i} P_{i}}{\sum_{i} R_{i}^{2} + \sum_{i} P_{i}^{2} - \sum_{i} R_{i} P_{i}} \qquad (1) $$

Table 1 The possible pairs between predicted sequences and ground truths are presented

Our machine yields predictions with either one or two reactants, as all such reactions are contained in the combined dataset. There are thus multiple possibilities for comparing predicted sequences with ground truths. The potential pairs for evaluation corresponding to the number of reactants are listed in Table 1. Tanimoto similarities between all possible pairs of predicted sequences and ground truths were calculated. Then, the pair(s) with the highest similarity was selected based on the assumption that more similar structures are more likely to be matched.

Results and discussion

Prediction accuracy

The performance of our model was assessed based on three datasets: single reactant, double reactant, and the combined test set. Evaluation results on the test sets are summarized in Table 2. The quality of predictions of each test dataset is expressed in terms of pairwise Tanimoto similarity values. We introduced three criteria for evaluating the success rates of our translation models: (1) the number of exact matches (\(T_{c} = 1.0\)), (2) the number of bioactively similar matches (\(0.85 < T_{c} < 1.00\)) and (3) the overall success rate presented as the average Tanimoto similarity between predicted and true sequences (a series of fragments) over all the test molecules.

Table 2 Success rate over molecules on three test datasets

For the single reactant reactions, our bidirectional LSTM model achieved an accuracy of 57.7% based upon the combined use of the first two criteria. The percentages of exact and bioactively similar matches were 29.0% and 28.7%, respectively. The average \(T_{c}\) value between predicted and true sequences was 0.84. These results demonstrate that our machine predicts single reactant reactions with high accuracy. For the double reactant reactions, the success rate of the exact matches (27.9%) was almost identical to that of the single reactant reactions. However, the success rate of highly similar predictions deteriorated to 10.5% from 28.5%. For the combined set, 25.3% of predictions were accurate, and 12.9% of them were highly similar.
Similarly, the average \(T_{c}\) values dropped from 0.84 to 0.66 and 0.68 for the double reactant and combined datasets. One reason for the worse accuracy of the double and combined sets is that the "–" sign separating reactants must also be predicted correctly. Another reason is the frequent occurrence of small molecules represented with a small number of MACCS keys in these datasets. In fact, 477 molecules represented with fewer than 7 MACCS keys appeared in 61,822 different reactions. To be more specific, 3944 reactions contain a reactant represented with one of the seven MACCS keys described in Additional file 3: Figure S2. The number of unique structures corresponding to those keys was, however, only 29. Because such small and simple structures were dense in these datasets, wrongly predicted fragments affected the success rate significantly (down to a value of zero in 1-bit cases). Our result also demonstrates that the bidirectional LSTM-based model outperforms the unidirectional LSTM-based model. With unidirectional LSTMs, the success rates of exact matches are consistently lower by about 6% for all the datasets. This is possibly due to the fact that our MACCS key-based representation of a molecule does not depend on the order of keys. In other words, most information about molecules and chemical reactions is embedded in the co-occurrences of keys.

Global vs. local attention

We investigated the model performance on longer sequences with both global and local attention mechanisms. As a matter of fact, it may not be practical to use Luong's global attention [56] for longer sequences since it has to attend to all words on the encoder side for each target word. For our dataset, the average length of a reactant–product pair is 74. To investigate whether local attention may improve prediction quality, we augmented the dataset with more complex molecules and performed experiments with both the local and global attention mechanisms. As shown in Table 3, the local attention mechanism yields marginally better results than the global attention mechanism for longer sequences containing more than 100 fragments. However, the performance of the model trained with sequences up to 100 fragments does not improve with the local attention mechanism.

Table 3 Comparison of model accuracy based on selected attention mechanism on combined datasets

Comparison with existing models

We compared the prediction accuracy of our approach with other retrosynthetic prediction methods without considering reaction class labels because no prior reaction class information was provided to our model. Several recent reports summarized the prediction accuracy of various models [37, 62]. According to reproduced results presented by Lin et al. [37], Top-1 accuracy ranges from 28.3% (Liu et al. [34] LSTM model over the USPTO 50 K dataset) to 54.1% (Transformer model over the USPTO MIT dataset by Lin et al. [37]). In the most recent report by Tetko et al. [46], an augmented Transformer model reached a Top-1 accuracy of 53.5% when trained with a 100-fold augmented USPTO-50 K dataset and a beam size of 10. Tetko et al. also trained their model using a fivefold augmented filtered USPTO-full training set, approx. 3.8M training examples, and the Top-1 accuracy is reported as 46.2%. These results are superior to our model's exact-match accuracy of 29%, but inferior to its overall accuracy of 57.7% when highly similar predictions are also considered. As an alternative approach, Coley's similarity-based model [63] achieved a Top-1 accuracy of 37.3% on the USPTO 50 K dataset.
Fingerprint dependency

We trained our Bi-LSTM model with Extended Connectivity Fingerprints (ECFP, Morgan fingerprint in the RDKit implementation) on the single reactant reaction dataset following the same preprocessing steps. We selected four types of ECFP with a fixed-length folding of 1024 and 2048 bits (nBits), and radii of 1 and 2. Compared to the MACCS key-based model, the models trained with ECFP of radius 1 show a higher percentage of exact matches (see Table 4). The highest percentage of exact matches is observed with the model with ECFP of radius 1 and nBits 2048. The percentage increased by 8.6% compared to the MACCS key-based model. However, the percentage of bioactively similar reactions, \(T_{c}\) \(\ge\) 0.85, 52%, remains comparable to that of the MACCS key-based model, 57.7%. These results suggest that ECFP with a radius of 1 provides better resolution than the MACCS keys.

Table 4 Comparison of model accuracy on single reactant reaction dataset using ECFP and MACCS keys

However, the models trained with ECFP with a radius of 2 show dramatic decreases in the accuracy of exact matches, 9.1% and 10.1%. To identify the origin of such a performance drop, we performed further analysis of fragments embedded in one bit of ECFP of various radii over the single reactant reaction dataset. The numbers of substructures associated with bits activated by atom environments of radius 1 and 2 are investigated (see Additional file 4: Figure S3). The set of regular expressions embedded in one token becomes more complex, suggesting that the recognition of chemical changes becomes more challenging in the same dataset. From the analysis, it is identified that the model becomes confused due to a high number of fragments embedded in one bit. With a radius of 1, each bit of ECFP contains 11 fragments on average. However, with a radius of 2, each bit includes 113 fragments on average, i.e., a large degeneracy of each bit. This large degeneracy makes the patterns of bits of chemical reactions highly complicated and too hard to learn. These analyses suggest that curating the optimal set of fragments and their proper representations is critical in improving retrosynthesis prediction quality.

Learning behavior

To identify how our model learns the grammar of chemical reactions, the evolution of prediction accuracy with respect to threshold values along the training epochs for the single reactant validation set is illustrated in Fig. 4. In particular, it is demonstrated that the network successfully learned reaction rules by capturing the alterations of molecules at a substructural level. The number of exact matches (\(T_{c} = 1.0\)) increased rapidly during the first 10 epochs. After 20 epochs, the value had almost tripled. The likelihood of making a better prediction for each fragment becomes higher during training. This is a clear indication of successful training. The improvement in exact matches appears to be a result of the respective declines in non-exact matches, except extremely bad predictions (\(T_{c} < 0.50\)). The quality of bad predictions (ca. 5% of the validation set) did not improve, probably due to the insufficient information, complexity, and noise contained in the data. This observation was similarly repeated for all the other datasets.

Fig. 4 Number of matches at different ranges of similarity.
\(T_c\) refers to the Tanimoto similarity coefficient

Similarity measure dependency

As an extension of our Tanimoto-based analyses, the effects of using other similarity metrics on our model's accuracy are investigated. We select the Sörensen–Dice similarity as a special case of the Tversky index, and three asymmetric Tversky variants that include \(\alpha\) and \(\beta\) parameters. As illustrated in Additional file 5: Figure S4, we find that the model performance remains essentially unchanged regardless of the choice of similarity metric. The number of similar molecules, however, changes across different regions based on how similarity is quantified. The Sörensen–Dice similarity behaves in a similar way to the Tversky index when the parameters \(\alpha\) and \(\beta\) are 0.1 and 0.9, respectively. Predicted sequences make larger contributions to their similarity to true sequences with smaller values of \(\alpha\).

Examples of retrosynthetic predictions

In this study, we assumed that candidate reactants with \(T_c>0.85\) are similar enough to their true counterparts. To validate this assumption, we assessed the quality of candidate reactants by comparing them with true reactants. We investigated whether the following factors were correct: functional group interconversion (FGI) or bond disconnection, reactive functional group, and core structure. The accuracy of side-substituents is regarded as less significant for matching the reactants' functionality, especially when they are simple alkyls. Randomly chosen predictions exemplifying possible prediction cases are presented in Fig. 5. Similarity maps are presented to visualize similarities between candidates and true reactants. Non-exact candidates varied in their degree of similarity.

Fig. 5 Similarity score calculations and similarity maps using the Morgan fingerprints and the Tanimoto metric are shown. Colors indicate atom-level contributions to the overall similarity (green: increases similarity score, red: decreases similarity score, uncolored: has no effect)
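Similarity maps of this kind can be generated directly with RDKit. The snippet below is a generic sketch following RDKit's similarity-map interface; the two SMILES strings are hypothetical stand-ins, and the exact rendering settings used for Fig. 5 are not specified in the text.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem.Draw import SimilarityMaps

# Hypothetical true reactant (reference) and predicted candidate (probe).
ref = Chem.MolFromSmiles("CCOC(=O)c1ccc(Cl)cc1")
probe = Chem.MolFromSmiles("CCOC(=O)c1ccc(Br)cc1")

# Atom-level contributions to the Morgan (radius 2) / Tanimoto similarity, drawn on the probe.
fp_func = lambda mol, atom_id: SimilarityMaps.GetMorganFingerprint(
    mol, atomId=atom_id, radius=2, fpType="bv")
fig, max_weight = SimilarityMaps.GetSimilarityMapForFingerprint(
    ref, probe, fp_func, metric=DataStructs.TanimotoSimilarity)
fig.savefig("similarity_map.png", bbox_inches="tight")
```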
In wrongly predicted candidate, a (phenyl)methyl group appeared instead of a (2-naphthyl)vinyl group, but the reactive functional group, acylhydrazine, was correctly produced. The result of reaction 6 revealed the exact match for N-Hydroxyphtalimide as a precursor for O-hydroxylamine. However, the structure of the alkyl halide lacked a phenylene group. The core structure estimation failed to a great extent for this reaction. On the other hand, the reactive functional groups and bond disconnection are suggested correctly. The quantitative summary of the assessment above is given in Table 5. The three criteria: functional group interconversion or bond disconnection, core structure, and reactive functional group are weighted equally. They are utilized to form a chemically reasonable score along with similarity scores. The evaluation was carried out by following procedure. First, we identified less significant parts of candidate molecules by comparing them with the product and true reactants. Second, core structures were identified; true reactants were separated into fragments, e.g., functional group, chain, ring. Afterwards, each fragment of a candidate molecule was evaluated against fragments found in second step in terms of the core structure, type and positions of side-substituents in an equally weighted manner. Finally, equal weight was given to the correctness of fragments' positions within candidate reactants. Concerning the core structure, the longest chain of carbons and/or a ring, either of which may possess heteroatoms such as O, N, S, were taken into account together with important side-substituents and their positions. Because functional group interconversion or bond disconnection as well as reactive functional groups are the most significant factors of retrosynthetic analysis, the correct positions of reacting sites are scored strictly as true/false values corresponding to 1 and 0, respectively. We scored each candidate reactant individually and averaged the results to obtain a final score for each criterion. It is noticeable that our model correctly predicted functional group interconversion or bond disconnection of all six reactions. Except for reaction 3, reactive functional groups are correctly reflected. We observe that prediction errors that affect the score are mainly associated with core structures. We applied this knowledge-based scoring strategy to a more specific set containing ten randomly chosen reactions where candidate reactants, on average, lies within bioactively similar region (Tc = 0.87) (Additional file 6: Figure S5A, Additional file 7: Figure S5B and Additional file 8: Table S2). The results clearly show that our model is highly accurate in predicting functional group interconversion or bond disconnection as well as reactive functional group for bioactively similar reactant candidates. A similar argument can also be made regarding the prediction errors, since they mainly originate from core structures. Table 5 Summary of quality assessment of candidate reactants The chemical inspection of reactions indicates that average similarity scores and knowledge-based scores are closely related. Our scoring approach offers a clear idea about the quality of candidate reactants and similarity scores are in good agreement with those manually inspected. Similarity measurements yield lower scores than knowledge-based scores possibly due to the inclusion of side chains and geometrical factors (more detailed topological exploration is provided by Morgan fingerprint). 
Although the similarity score is rather difficult to interpret objectively, it can be used for assessing the quality of retrosynthetic predictions. Higher similarity scores indicate that the desired molecules are more synthetically accessible according to the rules of organic chemistry.

Characteristics of our model

The key advantage of our word-based MACCS keys model over the character-based SMILES methods is that the network needs to learn only relatively simple grammatical rules (ascending order and co-occurrence of keys) to yield meaningful results. In the SMILES-based methods, a network has to comprehend not only the complicated grammar of SMILES but also the canonical representation to predict synthetically correct sequences. As summarized by Liu et al. [34], the difficulty of learning the syntactic structure of SMILES notation possibly causes problematic outcomes such as invalid SMILES strings. In general, existing character-based models suffer from the generation of literally invalid, literally valid but chemically unreasonable, or literally and chemically valid but unfeasible candidates. We avoided this problem by projecting the SMILES representation of a molecular structure into a substructural domain. Our approach can be an effective solution to these technical problems at a fundamental level. In general, the likelihood of making correct retrosynthetic predictions remains rather low. Indeed, the accuracy of retrosynthetic planning tasks is roughly half the accuracy achieved in forward reaction prediction tasks [17, 27, 31]. This is to be expected, as several possible synthetic routes may be available for the same product. It is worth noting that the content of the dataset used in the reverse mapping could also be responsible for the network's behavior [62]. Mapping a reactant from a reactant domain to a product domain and then reversing it does not necessarily produce the original reactant considering the level of abstraction used to describe the molecules in our dataset. There is a chance that the presence of one-to-many mappings from a product to a reactant domain may create confusion during the learning process. Equipped with these observations, a simple idea is adopted to ensure a stronger pairwise functional relationship between the domains. To achieve this, we identified all one-to-many mappings and collapsed them into an injective mapping (see Fig. 3, "Reaction dataset curation" section) by selecting the molecule with the shortest sequence length (presumably the reactants with the lowest level of structural complexity). Notably, our model yields robust predictions. For each independent run of the same input molecule, our model gives the same output consistently. This robustness of our model may be due to the low complexity and good interpretability of our molecular descriptor. Generally, retrosynthetic models have employed the top-N accuracy score to assess overall model performance [11, 34,35,36,37, 45, 63]. However, as recently discussed by Schwaller [38], the top-N accuracy score may not be an adequate metric for assessing retrosynthetic models because, with each suggestion, the model tends to yield expected answers from the dataset rather than making chemically more meaningful predictions.
Although MACCS keys have been criticized for their poor performance on similarity benchmarks [64], an advantage of such a descriptor is that there is a one-to-one correspondence between a bit and a substructure, compared to fingerprints obtained by an exhaustive generation algorithm followed by a hashing procedure. Thus, MACCS keys were a natural choice to test the proof-of-concept level of our translation methodology. The diversity of reactant candidates is one of the important aspects of a retrosynthesis prediction. In the recently published paper [46], the diversity of the reactant candidates is discussed within the context of top-5 performance analysis. One of the goals of a retrosynthetic model is to obtain multiple precursor suggestions, and the top-N approach may suggest other probable reactant candidates. Our model is robust in terms of the predictions made, i.e., it always yields the identical prediction, so the top-N concept does not directly apply. However, our model has a certain level of flexibility. Since the model predicts fingerprints instead of the exact structures, multiple structures can be retrieved for a predicted sequence. We verified that the average number of molecules represented with the same modified (126-key) MACCS keys is three within the 154 million compounds in PubChem. In other words, we could find three valid reactant candidates on average using PubChem. This leads to a flexible interpretation because choosing among reactant candidates enables us to use chemical expertise and intuition.

Table 6 Success rate of retrieving a reactant candidate from the PubChem database

By design, our model predicts the MACCS key representations rather than SMILES strings. Converting predicted sequences of structural fingerprints to valid molecules requires a dictionary to look up the reactant candidates that match the fingerprint. Fortunately, for MACCS keys, the reference SMARTS value of any bit is preserved during translation. Unlike hash-based fingerprints, there is always a one-to-one correspondence between a key and its definition. We, therefore, take advantage of using a fingerprint built upon the predefined substructures and constructed a lookup table using the USPTO [49] and PubChem [65] databases to retrieve the molecules that match the predicted MACCS keys. If a perfect match is not found in the table, the closest match is selected as the candidate real molecule. In our retrieval mechanism, each object within the lookup table contains the SMILES, the MACCS keys, and the "language representation". A query is sent based on the "language representation". We investigated the success rate of retrieving a reactant candidate within the PubChem database (Table 6). More than 20 K medium-length reactant predictions were compared with 154 million molecules of the database. Sixty-two percent of predictions matched existing molecules. The success rate increased to 91% when a difference of up to 2 keys was allowed. Considering the average number of keys, 42, this difference corresponds to a \(T_c\) of 0.94, which is reasonably high. Also, the maximum number of discrepant keys is four, corresponding to a \(T_c\) of 0.9. In other words, all predicted reactants were successfully retrieved from the database with up to 4 discrepant keys, \(T_c\) > 0.9. In summary, these results demonstrate that our approach is practical enough because all predicted reactants could find the exact molecules or highly similar molecules with a \(T_c\) threshold of 0.94.
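A simplified reconstruction of this retrieval mechanism is sketched below. The table maps each "language representation" to the molecules that produce it; an exact lookup is attempted first, and otherwise the entries with the fewest discrepant keys (up to the tolerances reported in Table 6) are returned. The linear scan shown here is for clarity only; over 154 million PubChem entries one would index the table more efficiently.

```python
from collections import defaultdict

def build_lookup(smiles_iterable, to_sentence):
    """Group molecules by their fragment sentence ('language representation').

    `to_sentence` is any encoder producing the lettered MACCS sentence, e.g. the
    one from the preprocessing sketch above.
    """
    table = defaultdict(list)
    for smi in smiles_iterable:
        table[to_sentence(smi)].append(smi)
    return table

def retrieve(predicted_sentence, table, max_diff=4):
    """Exact match if possible, otherwise the closest entries within `max_diff` discrepant keys."""
    if predicted_sentence in table:
        return list(table[predicted_sentence])
    predicted = set(predicted_sentence.split())
    best_diff, best = max_diff + 1, []
    for sentence, molecules in table.items():
        diff = len(predicted.symmetric_difference(sentence.split()))
        if diff < best_diff:
            best_diff, best = diff, list(molecules)
        elif diff == best_diff:
            best.extend(molecules)
    return best if best_diff <= max_diff else []
```

When several molecules share the retrieved key pattern, they can be ranked with a higher-resolution fingerprint, which is exactly the tie-breaking strategy described for Fig. 6 below.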
Figure 6 depicts the seven candidates for the first reactant of the fourth reaction in Fig. 5 retrieved from the USPTO reaction dataset. All of the seven candidates are associated with different reactions in the database. The MACCS key representations of the retrieved molecules are identical. This implies that it is possible to find more than one match corresponding to the predicted sequence. These closely related analogs can be ordered by computing the Tanimoto coefficients using path-based or circular fingerprints, as these will differ within the same set. For this purpose, we used the circular fingerprint [66] with radius 2 as a bit vector. We selected the molecule with the highest similarity value among the candidates as our final result.

Fig. 6 A detailed look at example 4 in Fig. 5. Similarity scores are shown using the circular fingerprint (Morgan) with radius 2 and the Tanimoto metric. Distinct fragments are shown as SMARTS patterns [67]

Conclusion

We developed a sequence-to-sequence NMT model to extract the reaction rules of a chemical reaction automatically by learning the relationships at a substructural level. By constructing an abstract language with a small, fixed-length vocabulary of non-zero elements of MACCS keys, three conceptual problems are addressed and resolved jointly: (1) erratic predictions: SMILES-based representation makes model outcomes prone to error, (2) synthetic availability: predicted molecules may not be synthetically accessible, and (3) top-N accuracy metric: suggestions made by the model may vary by model run. The comparison and quality inspections showed that our method successfully produced candidate reactants within the region 0.90 < \(T_c\) ≤ 1.00, achieving a high level of overall accuracy, particularly at functional group interconversion or bond disconnections and reactive functional groups. We believe that this proposed approach has a high potential for broad applications in organic chemistry. For the future version, it is essential to develop a better-defined structural key suitable for reaction prediction purposes.

Availability of data and materials

The datasets supporting the conclusions of this article are available via the https://github.com/knu-chem-lcbc/fragment_based_retrosynthesis repository.

References

Blakemore DC, Castro L, Churcher I, Rees DC, Thomas AW, Wilson DM, Wood A (2018) Organic synthesis provides opportunities to transform drug discovery. Nat Chem 10(4):383–394. https://doi.org/10.1038/s41557-018-0021-z Corey EJ (1988) Robert Robinson lecture. Retrosynthetic thinking—essentials and examples. In: Chemical society reviews, vol 17, pp 111–133. https://doi.org/10.1039/CS9881700111 Corey EJ, Cheng XM (1989) The logic of chemical synthesis. Wiley, Hoboken Corey EJ (1991) The logic of chemical synthesis: multistep synthesis of complex carbogenic molecules (Nobel lecture). Angew Chem Int Ed 30(5):455–465. https://doi.org/10.1002/anie.199104553 Corey EJ, Todd Wipke W (1969) Computer-assisted design of complex organic syntheses. Science 166(3902):178–192. https://doi.org/10.1126/science.166.3902.178 Pensak DA, Corey EJ (1977) LHASA-logic and heuristics applied to synthetic analysis. pp 1–32. https://doi.org/10.1021/bk-1977-0061.ch001 Salatin TD, Jorgensen WL (1980) Computer-assisted mechanistic evaluation of organic reactions. 1. Overview. J Org Chem 45(11):2043–2051. https://doi.org/10.1021/jo01299a001 Gasteiger J, Ihlenfeldt WD, Röse P (1992) A collection of computer methods for synthesis design and reaction prediction. Recl Trav Chim Pay-b 111(6):270–290.
https://doi.org/10.1002/recl.19921110605 Fick R, Ihlenfeldt W-D, Gasteiger J (1995) Computer-assisted design of syntheses for heterocyclic compounds. Heterocycles 40(2):993–1007 Szymkuć S, Gajewska EP, Klucznik T, Molga K, Dittwald P, Startek M, Bajczyk M, Grzybowski BA (2016) Computer-assisted synthetic planning: the end of the beginning. Angew Chem Int Ed 55:5904–5937. https://doi.org/10.1002/anie.201506101 Segler MHS, Waller MP (2017) Neural-symbolic machine learning for retrosynthesis and reaction prediction. Chem Eur J 23(25):5966–5971. https://doi.org/10.1002/chem.201605499 Satoh H, Funatsu K (1995) SOPHIA, a knowledge base-guided reaction prediction system—utilization of a knowledge base derived from a reaction database. J Chem Inf Comp Sci 35(1):34–44. https://doi.org/10.1021/ci00023a005 Satoh K, Funatsu K (1999) A novel approach to retrosynthetic analysis using knowledge bases derived from reaction databases. J Chem Inf Comp Sci 39(2):316–325. https://doi.org/10.1021/ci980147y Law J, Zsoldos Z, Simon A, Reid D, Liu Y, Khew SY, Johnson AP, Major S, Wade RA, Ando HY (2009) Route designer: a retrosynthetic analysis tool utilizing automated retrosynthetic rule generation. J Chem Inf Model 49(3):593–602. https://doi.org/10.1021/ci800228y Bøgevig A, Federsel H-J, Huerta F, Hutchings MG, Kraut H, Langer T, Löw P, Oppawsky C, Rein T, Saller H (2015) Route design in the 21st century: The ICSYNTH software tool as an idea generator for synthesis prediction. Org Process Res Dev 19(2):357–368. https://doi.org/10.1021/op500373e Wei JN, Duvenaud D, Aspuru-Guzik A (2016) Neural networks for the prediction of organic chemistry reactions. ACS Cent Sci 2(10):725–732. https://doi.org/10.1021/acscentsci.6b00219 Coley CW, Barzilay R, Jaakkola TS, Green WH, Jensen KF (2017) Prediction of organic reaction outcomes using machine learning. ACS Cent Sci 3(5):434–443. https://doi.org/10.1021/acscentsci.7b00064 Segler MHS, Waller MP (2017) Modelling chemical reasoning to predict and invent reactions. Chem Eur J 23(25):6118–6128. https://doi.org/10.1002/chem.201604556 Ott MA, Noordik JH (1992) Computer tools for reaction retrieval and synthesis planning in organic chemistry. A brief review of their history, methods, and programs. Recl Trav Chim Pay-b 111(6):239–246. https://doi.org/10.1002/recl.19921110601 Todd MH (2005) Computer-aided organic synthesis. Chem Soc Rev 34(3):247–266. https://doi.org/10.1039/B104620A Cook A, Johnson AP, Law J, Mirzazadeh M, Ravitz O, Simon A (2012) Computer-aided synthesis design: 40 years on. Wiley Interdiscip Rev Comput Mol Sci 2(1):79–107. https://doi.org/10.1002/wcms.61 Warr WA (2014) A short review of chemical reaction database systems, computer-aided synthesis design, reaction prediction and synthetic feasibility. Mol Inform 33(6–7):469–476. https://doi.org/10.1002/minf.201400052 Coley CW, Green WH, Jensen KF (2018) Machine learning in computer-aided synthesis planning. Accounts Chem Res 51(5):1281–1289. https://doi.org/10.1021/acs.accounts.8b00087 Feng F, Lai L, Pei J (2018) Computational chemical synthesis analysis and pathway design. Front Chem. https://doi.org/10.3389/fchem.2018.00199 Kayala MA, Azencott C-A, Chen JH, Baldi P (2011) Learning to predict chemical reactions. J Chem Inf Model 51(9):2209–2222. https://doi.org/10.1021/ci200207y Kayala MA, Baldi P (2012) ReactionPredictor: prediction of complex chemical reactions at the mechanistic level using machine learning. J Chem Inf Model 52(10):2526–2540. 
https://doi.org/10.1021/ci3003039 Jin W, Coley CW, Barzilay R, Jaakkola T (2017) Predicting organic reaction outcomes with weisfeiler-lehman network. Adv Neural Int. 2017-Decem(Nips):2608–2617. arXiv:1709.04555 Lei T, Jin W, Barzilay R, Jaakkola T (2017) Deriving neural architectures from sequence and graph kernels. ICML 2017. 4:3181–3190. arXiv:1705.09037 Cadeddu A, Wylie EK, Jurczak J, Wampler-Doty M, Grzybowski BA (2014) Organic chemistry as a language and the implications of chemical linguistics for structural and retrosynthetic analyses. Angew Chem Int Ed 53(31):8108–8112. https://doi.org/10.1002/anie.201403708 Schneider N, Stiefl N, Landrum GA (2016) What's what: the (nearly) definitive guide to reaction role assignment. J Chem Inf Model 56(12):2336–2346. https://doi.org/10.1021/acs.jcim.6b00564 Schwaller P, Laino T, Gaudin T, Bolgar P, Hunter CA, Bekas C, Lee AA (2019) Molecular transformer: a model for uncertainty-calibrated chemical reaction prediction. ACS Cent Sci 5(9):1572–1583. https://doi.org/10.1021/acscentsci.9b00576 Nam J, Kim J (2016) Linking the neural machine translation and the prediction of organic chemistry reactions, 1–19. arXiv:1612.09529 Schwaller P, Gaudin T, Lányi D, Bekas C, Laino T (2018) Found in Translation: predicting outcomes of complex organic chemistry reactions using neural sequence-to-sequence models. Chem Sci 9(28):6091–6098. https://doi.org/10.1039/c8sc02339e. arXiv:1711.04810 Liu B, Ramsundar B, Kawthekar P, Shi J, Gomes J, Luu Nguyen Q, Ho S, Sloane J, Wender P, Pande V (2017) Retrosynthetic reaction prediction using neural sequence-to-sequence models. ACS Cent Sci 3(10):1103–1113. https://doi.org/10.1021/acscentsci.7b00303. arXiv:1706.01643 Zheng S, Rao J, Zhang Z, Xu J, Yang Y (2020) Predicting retrosynthetic reactions using self-corrected transformer neural networks. J Chem Inf Model 60(1):47–55. https://doi.org/10.1021/acs.jcim.9b00949 Duan H, Wang L, Zhang C, Guo L, Li J (2020) Retrosynthesis with attention-based NMT model and chemical analysis of wrong predictions. RSC Adv 10(3):1371–1378. https://doi.org/10.1039/c9ra08535a Lin K, Xu Y, Pei J, Lai L (2020) Automatic retrosynthetic route planning using template-free models. Chem Sci 11(12):3355–3364. https://doi.org/10.1039/c9sc03666k Schwaller P, Petraglia R, Zullo V, Nair VH, Haeuselmann RA, Pisoni R, Bekas C, Iuliano A, Laino T (2020) Predicting retrosynthetic pathways using transformer-based models and a hyper-graph exploration strategy. Chem Sci 11(12):3316–3325. https://doi.org/10.1039/c9sc05704h Cho K, van Merrienboer B, Gulcehre C, Bahdanau D, Bougares F, Schwenk H, Bengio Y (2014) Learning phrase representations using RNN encoder–decoder for statistical machine translation. arXiv:1406.1078 Bahdanau D, Cho KH, Bengio Y (2015) Neural machine translation by jointly learning to align and translate. In: 3rd Int Conf Learn Represent ICLR 2015—Conf Track Proc, 1–15. arXiv:1409.0473 Sutskever I, Vinyals O, Le QV (2014) Sequence to sequence learning with neural networks. Adv Neural Int 4(January):3104–3112 arXiv:1409.3215 Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735 Graves A (2013) Generating sequences with recurrent neural networks. arXiv:1308.0850 Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I (2017) Attention is all you need. Adv Neural Int 2017-Decem(Nips):5999–6009. arXiv:1706.03762 Karpov P, Godin G, Tetko IV (2019) A transformer model for retrosynthesis. 
Lect Notes Comput Sci (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 11731 LNCS(1):817–830 Tetko IV, Karpov P, Van Deursen R, Godin G (2020) State-of-the-art augmented NLP transformer models for direct and single-step retrosynthesis. Nat Commun 11(1):1–11. https://doi.org/10.1038/s41467-020-19266-y. arXiv:2003.02804 Durant JL, Leland BA, Henry DR, Nourse JG (2002) Reoptimization of MDL keys for use in drug discovery. J Chem Inf Comp Sci 42(6):1273–1280. https://doi.org/10.1021/ci010132r Graves A, Schmidhuber J (2005) Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Netw 18(5–6):602–610. https://doi.org/10.1016/j.neunet.2005.06.042 Lowe DM (2012) Extraction of chemical structures and reactions from the literature. PhD thesis, University of Cambridge. https://doi.org/10.17863/CAM.16293 Lowe D (2017) Chemical reactions from US patents (1976-Sep2016). Figshare. https://doi.org/10.6084/m9.figshare.5104873.v1 Landrum G (2016) RDKit: Open-Source Cheminformatics Software. https://github.com/rdkit/rdkit/releases/tag/Release_2020_03_1 Weininger D (1988) SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules. J Chem Inf Comp Sci 28(1):31–36. https://doi.org/10.1021/ci00057a005 James CA, Weininger D, Delany JD (2002) Daylight theory manual. Daylight Chemical Information Systems Inc. https://daylight.com/dayhtml/doc/theory/index.html Blum LC, Reymond J-L (2009) 970 million druglike small molecules for virtual screening in the chemical universe database GDB-13. J Am Chem Soc 131(25):8732–8733. https://doi.org/10.1021/ja902302h Arús-Pous J, Blaschke T, Ulander S, Reymond JL, Chen H, Engkvist O (2019) Exploring the GDB-13 chemical space using deep generative models. J Cheminf 11(1):1–33. https://doi.org/10.1186/s13321-019-0341-z Luong MT, Pham H, Manning CD (2015) Effective approaches to attention-based neural machine translation. In: Conf Proc—EMNLP 2015 Conf Empir Methods Nat Lang Process, 1412–1421. https://doi.org/10.18653/v1/d15-1166. arXiv:1508.04025 Bottou L (1991) Stochastic gradient learning in neural networks. ProcNeuro-Nımes 91(8):12 Pascanu R, Mikolov T, Bengio Y (2013) On the difficulty of training recurrent neural networks. ICML 2013(PART 3):2347–2355. arXiv:1211.5063 Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, Killeen T, Lin Z, Gimelshein N, Antiga L, Desmaison A, Kopf A, Yang E, DeVito Z, Raison M, Tejani A, Chilamkurthy S, Steiner B, Fang L, Bai J, Chintala S (2019) Pytorch: an imperative style, high-performance deep learning library. In: Wallach H, Larochelle H, Beygelzimer A, Fox E, Garnett R, d' Alché-Buc F (eds) Advance Neural international, vol 32. Curran Associates, Inc., New York, pp 8024–8035 Riniker S, Landrum GA (2013) Similarity maps—a visualization strategy for molecular fingerprints and machine-learning methods. J Cheminf 5(9):1–7. https://doi.org/10.1186/1758-2946-5-43 Bajusz D, Rácz A, Héberger K (2015) Why is Tanimoto index an appropriate choice for fingerprint-based similarity calculations? J Cheminf 7(1):1–13. https://doi.org/10.1186/s13321-015-0069-3 Guo Z, Wu S, Ohno M, Yoshida R (2020) A Bayesian algorithm for retrosynthesis. arXiv:2003.03190 Coley CW, Rogers L, Green WH, Jensen KF (2017) Computer-assisted retrosynthesis based on molecular similarity. ACS Cent Sci 3(12):1237–1245. 
https://doi.org/10.1021/acscentsci.7b00355 O'Boyle NM, Sayle RA (2016) Comparing structural fingerprints using a literature-based similarity benchmark. J Cheminf 8(1):1–14. https://doi.org/10.1186/s13321-016-0148-0 Bolton EE, Wang Y, Thiessen PA, Bryant SH (2008) Chapter 12 PubChem: integrated platform of small molecules and biological activities, vol 4, Elsevier B.V, pp 217–241. https://doi.org/10.1016/S1574-1400(08)00012-1 Rogers D, Hahn M (2010) Extended-connectivity fingerprints. J Chem Inf Model 50(5):742–754. https://doi.org/10.1021/ci100050t Schomburg K, Ehrlich H-C, Stierand K, Rarey M (2011) Chemical pattern visualization in 2D—the SMARTSviewer. J Cheminf 3(1):12. https://doi.org/10.1186/1758-2946-3-S1-O12
The author thanks his colleagues Dr. Ali Canlier and Sevde Ucak for useful discussions throughout the course of this project. Umit V. Ucak and Junsu Ko were supported by Arontier Co. This work was also supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2019M3E5D4066898).
Division of Chemistry and Biochemistry, Department of Chemistry, Kangwon National University, Chuncheon, South Korea: Umit V. Ucak & Juyong Lee
Center for Neuro-Medicine, Brain Science Institute, Korea Institute of Science and Technology, Seoul, South Korea: Taek Kang
Arontier Co., Seoul, South Korea: Junsu Ko
UU conceived the model, implemented the model, analyzed the results and wrote the draft. TK helped to interpret the prediction results and drafted the manuscript. JK helped to design and implement the model. JL conceived the study, implemented the model, analyzed the results and wrote the draft. All authors read and approved the final manuscript. Correspondence to Juyong Lee.
Additional file 1: Dictionary Data. MACCS key assignments. The set contains the assignments of letters to MACCS keys, and the list of used keys is presented.
Additional file 2: Figure S1. Sentence length distribution. Distribution profile of product-reactant pairs.
Additional file 3: Figure S2. 1-bit keys. Examples of molecules that are represented with only one bit.
Additional file 4: Figure S3. Fingerprint dependency. Comparison of model accuracy using ECFP and MACCS.
Additional file 5: Figure S4. Similarity Measure Dependency. Effect of similarity metric type on model performance.
Additional file 6: Figure S5A. Bioactively similar reactions. Depictions of ten bioactively similar reactant candidates (1–5).
Additional file 7: Figure S5B. Bioactively similar reactions. Depictions of ten bioactively similar reactant candidates (6–10).
Table S1. Hyperparameter settings. Hyperparameter settings for the best model. Table S2. Scoring of bioactively similar reactions. Assessment of candidate reactants lying in the bioactively similar region.
Ucak, U.V., Kang, T., Ko, J. et al. Substructure-based neural machine translation for retrosynthetic prediction. J Cheminform 13, 4 (2021). https://doi.org/10.1186/s13321-020-00482-z
Retrosynthesis planning Seq-to-seq
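As a companion to the candidate-ranking step described in the Results section (radius-2 Morgan/circular bit fingerprints compared with the Tanimoto metric, keeping the highest-scoring candidate), the following minimal sketch shows how that step could be reproduced with RDKit, which the article already cites. It is an illustration only: the function name, the 2048-bit fingerprint length, and the placeholder SMILES strings are choices made here for the example and are not taken from the authors' code; their actual implementation is available in the GitHub repository linked above.

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def rank_candidates(reference_smiles, candidate_smiles, n_bits=2048):
    # Radius-2 Morgan (circular) fingerprint of the reference structure as a bit vector.
    ref = Chem.MolFromSmiles(reference_smiles)
    ref_fp = AllChem.GetMorganFingerprintAsBitVect(ref, 2, nBits=n_bits)
    scored = []
    for smi in candidate_smiles:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:
            continue  # skip SMILES that RDKit cannot parse
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
        # Tanimoto similarity between reference and candidate fingerprints.
        scored.append((DataStructs.TanimotoSimilarity(ref_fp, fp), smi))
    # Highest-similarity candidate first.
    return sorted(scored, reverse=True)

# Usage with placeholder molecules (benzoic acid against two arbitrary analogs):
best_score, best_smiles = rank_candidates("c1ccccc1C(=O)O",
                                           ["c1ccccc1C(=O)OC", "c1ccccc1CC(=O)O"])[0]
print(best_score, best_smiles)

The candidate with the largest Tanimoto value would then play the role of the final retrieved reactant, mirroring the selection rule described above.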
CommonCrawl
Asteroid orientation possible?
I'm writing an SF story with a question about an element of my plot. I have a Manhattan-sized NiFe asteroid, with little or no spin, in orbit at 2.5 astronomical units from the Sun. Would it be reasonable to assume that the potato-shaped Ni-Fe asteroid, undisturbed for millennia, would settle into an orbit with its long axis (through the c.g.) parallel to its velocity vector?
newtonian-mechanics newtonian-gravity angular-momentum orbital-motion asteroids
asked by catsteevens
Interesting question. Aligning the long axis with the direction (velocity vector) at all times requires it to have a perfectly fitting spin. This is exactly the case for our Moon, which always has the same face towards Earth since it spins in perfect alignment with its motion around Earth. See this article or google more: discovermagazine.com/2014/dec/2-ask-discover – Steeven Mar 24 '17 at 15:04
As pointed out by @Steeven, tidal forces can stop the rotation such that always the same side is facing the sun. My impression is that this mechanism would not move the long axis in the direction of motion if, e.g., the initial condition was a spinning asteroid with the long axis perpendicular to the orbital plane. It would be perfectly allowed, though. And another mechanism might actually cause this, although I can't make one up right now. – mikuszefski Mar 24 '17 at 15:12
@Steeven The planetoid has no spin, is that realistically impossible? – catsteevens Mar 24 '17 at 16:45
No spin at all might not be very likely. I don't think there is any known astronomical object that is perfectly spinless - chances are small. But let's leave answers about how realistic something is to someone who knows more about space than me. – Steeven Mar 26 '17 at 17:32
No. Not only is there no reason to expect this, but it is especially unlikely to occur. For the asteroid to remain "pointing" along its velocity vector, that means that it actually completes a rotation with the exact same period as its orbit. This is called "tidal locking" and requires tidal forces to be important. Tidal forces scale like $$F_\mathrm{tidal} \propto \frac{L}{r^3},$$ for an object of size $L$ (in this case very, very small for an asteroid) and a distance between objects of $r$ (where here, $r = 2.5\,\mathrm{AU}$). Even Mercury's orbit isn't tidally locked (corrected by @rob, thanks!). So the only reason this asteroid would continue to point in the same direction is if it coincidentally happened to have the exact right spin angular momentum. – DilithiumMatrix
Is it possible for the asteroid to have no rotation and therefore no spin angular momentum? – catsteevens Mar 26 '17 at 0:04
@catsteevens, also unlikely, but definitely not impossible... it could also just be quite small, such that it rotates very slowly. It would be easier to come up with a no-spin explanation than a tidally-locked explanation, however. – DilithiumMatrix Mar 26 '17 at 15:19
For what it's worth, Mercury's spin and orbit are tidally locked --- but in a 3:2 resonance, rather than a 1:1 resonance. – rob♦ Mar 27 '17 at 16:36
N.B.: I did not read the question carefully enough, and so the answer below answers a different question than the one that was asked, namely "is it possible for the long-term spin axis of an asteroid to align with its long axis?"
I have left it up for posterity.
No. In fact, it is likely to settle into a rotation about its principal axis with the largest moment of inertia, i.e., one of the "short axes". Rotation about the "long axis"—i.e., the principal axis with the smallest moment of inertia—is unstable. The reason for this is the following. The angular momentum vector $\vec{L}$ of a rotating body is constant in space. However, if a body is not rotating about its symmetry axis, the different parts of the body will experience time-dependent centripetal acceleration (due to the precession of the angular velocity vector $\vec{\omega}$ in the body frame). If the body is not perfectly rigid, it will deform ever so slightly under these forces, and frictional forces due to this time-dependent "kneading" will slowly sap the body's rotational kinetic energy. Now, the kinetic energy of a body rotating in 3D is $$ K = \frac{L_1^2}{2 I_1} + \frac{L_2^2}{2 I_2} + \frac{L_3^2}{2 I_3} $$ where $L_1, L_2, L_3$ are the components of $\vec{L}$ in the directions of the body's principal axes, and $I_1, I_2, I_3$ are the corresponding moments of inertia. It is not too hard to see (or to prove rigorously using Lagrange multipliers) that for a fixed value of $L^2 = L_1^2 + L_2^2 + L_3^2$, the magnitude of $K$ will be the lowest when $\vec{L}$ points along the principal axis with the highest moment of inertia. Don't believe me? Well, just ask the folks who launched Explorer I, the first US satellite:
Explorer 1 changed rotation axis after launch. The elongated body of the spacecraft had been designed to spin about its long (least-inertia) axis but refused to do so, and instead started precessing due to energy dissipation from flexible structural elements. Later it was understood that on general grounds, the body ends up in the spin state that minimizes the kinetic rotational energy for a fixed angular momentum (this being the maximal-inertia axis). This motivated the first further development of the Eulerian theory of rigid body dynamics after nearly 200 years—to address this kind of momentum-preserving energy dissipation.
The above argument is paraphrased from Ch. 7 of Kleppner & Kolenkow's Introduction to Mechanics. More detailed information can be found in the following review: M. Efroimsky, Relaxation of wobbling asteroids and comets—theoretical problems, perspectives of experimental observation. Planetary and Space Science, Volume 49, Issue 9, August 2001, Pages 937–955. Arχiv version. – Michael Seifert
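For completeness, the claim above, that for fixed $L^2$ the kinetic energy is smallest when $\vec{L}$ lies along the axis of largest moment of inertia, follows from a one-line estimate (a standard observation, not part of the original answer). Label the principal axes so that $I_1 \le I_2 \le I_3$. Then $$ K = \frac{L_1^2}{2 I_1} + \frac{L_2^2}{2 I_2} + \frac{L_3^2}{2 I_3} \;\ge\; \frac{L_1^2 + L_2^2 + L_3^2}{2 I_3} = \frac{L^2}{2 I_3}, $$ with equality exactly when $L_1 = L_2 = 0$ (assuming $I_1, I_2 < I_3$), i.e. when the rotation is about the maximal-inertia axis. So as internal friction drains $K$ at fixed $\vec{L}$, the spin state is driven toward that axis.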
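To put rough numbers on the tidal-locking point made in the first answer, here is a back-of-the-envelope estimate in Python. The despinning-timescale formula used below, $t_{\rm lock} \sim \omega a^6 I Q /(3 G M_\odot^2 k_2 R^5)$, is the commonly quoted order-of-magnitude estimate, and every parameter value (asteroid radius, density, initial spin period, dissipation factor $Q$, Love number $k_2$) is an assumption chosen purely for illustration; none of these numbers come from the question or the answers.

import math

# Order-of-magnitude tidal-locking (despinning) timescale for a small body
# orbiting the Sun: t_lock ~ omega * a**6 * I * Q / (3 * G * M_sun**2 * k2 * R**5)
# All parameter values below are illustrative assumptions, not data from the thread.

G     = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30             # solar mass, kg
AU    = 1.496e11             # astronomical unit, m

a     = 2.5 * AU             # orbital distance from the question
R     = 5.0e3                # "Manhattan-sized" radius ~ 5 km (assumed)
rho   = 7.8e3                # solid Ni-Fe density, kg/m^3 (assumed)
m     = (4.0 / 3.0) * math.pi * R**3 * rho
I     = 0.4 * m * R**2       # uniform-sphere moment of inertia
omega = 2.0 * math.pi / (10 * 3600.0)   # assumed initial spin period of 10 h
Q     = 100.0                # tidal dissipation factor (typical rocky-body guess)
k2    = 1.0e-5               # tidal Love number for a small rigid body (guess)

t_lock = omega * a**6 * I * Q / (3.0 * G * M_sun**2 * k2 * R**5)
print(f"tidal-locking timescale ~ {t_lock / 3.156e7:.1e} years")

With these inputs the timescale comes out around $10^{18}$ years, vastly longer than the age of the Solar System, which is consistent with the conclusion above that the asteroid will not end up tidally locked, let alone aligned with its velocity vector.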
CommonCrawl
\begin{document} \title{Some Normality Criteria for Families of Holomorphic Functions of Several Complex Variables} \author[K. S. Charak]{Kuldeep Singh Charak} \address{ \begin{tabular}{lll} &Kuldeep Singh Charak\\ &Department of Mathematics\\ &University of Jammu\\ &Jammu-180 006\\ &India\\ \end{tabular}} \email{[email protected]} \author[R. Kumar]{Rahul Kumar} \address{ \begin{tabular}{lll} &Rahul Kumar\\ &Department of Mathematics\\ &University of Jammu\\ &Jammu-180 006\\ &India \end{tabular}} \email{[email protected]} \begin{abstract} We prove a Zalcman-Pang lemma in several complex variables and apply it to obtain several complex variables analogues of the known normality criteria like Lappan's five-point theorem and Schwick's theorem. \end{abstract} \renewcommand{\fnsymbol{footnote}}{\fnsymbol{footnote}} \footnotetext{2010 {\it Mathematics Subject Classification}. 32A19.} \footnotetext{{\it Keywords and phrases}. Normal families, Zalcman's lemma, Holomorphic functions of several complex variables..} \maketitle \section{\textbf{Introduction}} Let $D$ be a domain in $\ensuremath{\mathbb{C}}^n$ and $\mathcal{F}$ be a family of holomorphic functions $f:D\rightarrow \ensuremath{\mathbb{C}}.$ $\mathcal{F}$ is said to be normal in $D$ if every sequence in $ \mathcal{F}$ contains a subsequence that converges locally uniformly to a limit function which is either holomorphic on $D$ or identically equal to $ \infty.$ $\mathcal{F}$ is said to be normal at a point $ z_{0} \in D $ if it is normal in some neighborhood of $z_{0}$ in $ D $. As an attempt to obtain a natural extension of the theory of normal families of holomorphic functions of one complex variable (see \cite{Schiff, Zalcman}) to several complex variables, Dovbush\cite{Dov1} defined the spherical derivative of a holomorphic function of several complex variables by using {\it Levi's form } as follows:\\ For every $\psi\in \mathcal{C}^2(D),$ at each point $z$ of $D$ define a Hermitian form \begin{equation}\label{eq1} L_z(\psi,~v):= \sum\limits_{k,l=1}^{n} \frac{\partial^2 \psi}{\partial z_k\partial \bar{z_l}}(z)v_k\bar{v_l} \end{equation} and is called the {\it Levi form } of the function $\psi$ at $z.$\\ For a holomorphic function $f$ defined on $D,$ define \begin{equation}\label{eq:2} f^{\#}(z):= \sup\limits_{|v|=1}\sqrt{L_z(\log(1+|f|^2),~v)}. 
\end{equation} Since $L_z(\log(1+|f|^2),~v)\geq 0, \ f^{\#}(z)$ given by (\ref{eq:2}) is well defined and for $n=1$ the formula $\eqref{eq:2}$ takes the form $$f^{\#}(z):= \frac{|f^\prime(z)|}{1+|f(z)|^2}$$ which is the spherical derivative on $\ensuremath{\mathbb{C}}.$ Hence (\ref{eq:2}) gives the natural extension of the spherical derivative to $\ensuremath{\mathbb{C}}^n.$\\ Also from \eqref{eq:2}, we find that \begin{equation}\label{eq3} f^{\#}(z) = \sup\limits_{|v|=1} \frac{|Df(z)v|}{1+|f(z)|^2} \end{equation} where $$D= (\frac{\partial}{\partial z_1},\frac{\partial}{\partial z_2}, \ldots ,\frac{\partial}{\partial z_n})$$ A well known powerful tool in the theory of normal families of holomorphic functions of one complex variables is the following lemma due to Zalcman\cite{Zalcman1}: {\bf Zalcman Lemma:} {\it A family $\mathcal{F}$ of holomorphic functions on the open unit disk $\ensuremath{\mathbb{D}}$ is not normal in $\ensuremath{\mathbb{D}}$ if and only if there exist a number $r: 0 < r < 1;$ points $z_n \in \{z:|z|<r\};$ functions $f_n\in \mathcal{F};$ and numbers $\rho_n \rightarrow 0$ such that $$g_n(\zeta)=f_n(z_n+\rho_n\zeta)\rightarrow g(\zeta), \ \mbox{ as } n\rightarrow \infty,$$ where $g$ is a nonconstant entire function satisfying $g^{\#}(\zeta)\leq g^{\#}(0)=1,$ for all $\zeta \in \ensuremath{\mathbb{C}}.$} Also, equally important is the following extension of the Zalcman Lemma due to Pang(\cite{Pang1}, Lemma $2$)( also\cite{Pang2}, Theorem $1$): {\bf Zalcman-Pang Lemma:} {\it Let $\mathcal{F}$ be a family of holomorphic functions on the open unit disk $\ensuremath{\mathbb{D}}$ and $-1<\alpha<1$. Then $\mathcal{F}$ is not normal in $\ensuremath{\mathbb{D}}$ if and only if there exist a number $r: 0 < r < 1;$ points $z_n \in \{z:|z|<r\};$ functions $f_n\in \mathcal{F};$ and numbers $\rho_n \rightarrow 0$ such that $$g_n(\zeta)=\rho_n^{-\alpha}f_n(z_n+\rho_n\zeta)\rightarrow g(\zeta), \ \mbox{ as } n\rightarrow \infty,$$ where $g$ is a nonconstant entire function satisfying $g^{\#}(\zeta)\leq g^{\#}(0)=1, \ \forall \zeta \in \ensuremath{\mathbb{C}}.$} Dovbush\cite{Dov1} besides extending Marty's theorem\cite{Marty} extended Zalcman Lemma to several complex variables as \begin{theorem}(Zalcman Lemma in $\ensuremath{\mathbb{C}}^n$) Suppose that a family $\mathcal{F}$ of functions holomorphic on $D\subseteq\mathbb{C}^n$ is not normal at some point $w_0 \in D.$ Then there exist sequences $f_j \in \mathcal{F}, \ w_j\to \ w_0, \ \rho_j=1/f_j^{\#}(w_j)\to 0,$ such that the sequence $g_j(z)=f_j(w_j+\rho_j z) $ converges locally uniformly in $\mathbb{C}^n$ to a nonconstant entire function $g$ satisfying $g^{\#}(z)\leq g^{\#}(0)=1$ for all $z\in \ensuremath{\mathbb{C}}^n.$ \label{ZLCN} \end{theorem} In this paper we give a several complex variables analogue of Zalcman-Pang Lemma, a generalization of Theorem \ref{ZLCN} and as applications, obtain several complex variables versions of Lappan's five-point theorem \cite{Lappan}, Schwick's theorem \cite{Schwick} and some other normality criteria. \section{\textbf{Main Results}} \begin{theorem}(Zalcman-Pang Lemma in $\ensuremath{\mathbb{C}}^n$)\label{thm:1} Let $\mathcal{F}$ be a family of holomorphic functions on $D=\{z\in\mathbb{C}^n:|z|<1\}$ . 
If $\mathcal{F}$ is not normal on $D,$ then for all $\alpha :0\leq \alpha<1,$ there exist real number $r:0<r<1,$ and sequences $\{z_j\}\subseteq D: |z_j|<r,$ $\{f_j\}\subseteq\mathcal{F},$ and $ \{\rho_j\}\subset (0,\ 1]: \rho_j\to 0 $ such that $$g_j({\zeta})= \frac{f_j(z_j+\rho_j\zeta)}{\rho_j^{\alpha}} $$ converges locally uniformly to a nonconstant entire function $g$ in $\mathbb{C}^n.$ \end{theorem} \begin{theorem}\label{thm:2} Let $\mathcal{F}$ be a family of holomorphic functions on $D=\{z:|z|<1\}\subseteq\mathbb{C}^{n}$ and let $\alpha$ and $\beta$ be real numbers such that $\alpha\geq 0$ and $\beta\geq \alpha+1.$ Then $\mathcal{F}$ is not normal on $D$ if and only if there exist real number $r:0<r<1$ and sequences $\{z_j\}\subseteq D: |z_j|<r,$ $\{f_j\}\subseteq\mathcal{F},$ and $\{\rho_j\}\subset (0, \ 1]: \rho_j\to 0 $ such that $$g_j({\zeta})= \rho_j^{-\alpha}f_j(z_j+\rho_j^{\beta}\zeta) $$ converges locally uniformly to a nonconstant entire function $g$ in $\mathbb{C}^n.$ \end{theorem} For $\alpha=0$ and $\beta=1$, Theorem\ref{thm:2} reduces to Theorem\ref{ZLCN}. By Theorem\ref{thm:2}, we extend Lappan's five-point theorem\cite{Lappan}((also see, Hinkkanen\cite{Hink}) to several complex variables as \begin{theorem}\label{thm2} A family $\mathcal{F}$ of holomorphic functions on a domain $D\subseteq \mathbb{C}^{n}$ is normal on $D$ if and only if there exists a set $E$ containing at least three points such that for each compact subset $K\subset D,$ there exists a positive constant $M(K)$ for which \begin{equation}\label{eq3} f^{\sharp}(z)\leq M(K) \mbox{ whenever } f(z)\in E, ~ z\in K, ~ f\in\mathcal{F}. \end{equation} \end{theorem} Schwick \cite{Schwick} sharpened Royden's theorem \cite{Royden} as: {\it Let $\mathcal{F}$ be a family of meromorphic functions on a domain $D$ with the property that for each compact set $K \subset D$ there is a function $h_{K} : [0,\infty] \rightarrow [0,\infty]$, which is finite somewhere on $(0,\infty)$, such that \begin{equation} |f^{'}(z)| \leq h_{K}(|f(z)|), \ \mbox{ for all } f \in \mathcal{F}, z \in K. \label{alpha} \end{equation} Then $\mathcal{F}$ is normal on $D.$} Actually, Schwick's theorem requires (\ref{alpha}) to be satisfied by $f$ at least on a circle. Wang\cite{Wang}, by applying Zalcman's lemma, obtained the following more sharpened version of Schwick's theorem wherein (\ref{alpha}) is required to be satisfied by $f$ at least for five points: \begin{theorem} Let $\mathcal{F}$ be a family of meromorphic functions on a domain $D\subset \ensuremath{\mathbb{C}}$ with the property that for each compact set $K \subset D$ there is a function $h_{K} : \overline{\ensuremath{\mathbb{C}}} \rightarrow [0,\infty]$, which is finite for at least five points on $\overline{\ensuremath{\mathbb{C}}}$, such that \begin{equation} |f^{'}(z)| \leq h_{K}(f(z)), \ \mbox{ for all } f \in \mathcal{F}, z \in K. 
\label{beta} \end{equation} Then $\mathcal{F}$ is normal on $D.$\\ Moreover, a family $\mathcal{F}$ of holomorphic functions is normal on $D,$ if (\ref{beta}) is satisfied and the function $h_K$ is finite for at least three points on $\ensuremath{\mathbb{C}}.$ \label{Wang} \end{theorem} By using Theorem \ref{thm2}, we obtain a several complex variables analogue of Theorem \ref{Wang}: \begin{theorem}\label{thm3} Let $\mathcal{F}$ be a family of holomorphic functions on a domain $D\subseteq\mathbb{C}^{n}$ with the property that for each compact subset $K\subset D$ there is a function $h_{K}:\overline{\ensuremath{\mathbb{C}}}\longrightarrow[0,~\infty]$, which is finite for at least three points on $\ensuremath{\mathbb{C}}$ such that $\left|Df(z)\right|\leq h_{K}(f(z))$ for all $f\in \mathcal{F}$ and $z\in K.$ Then $\mathcal{F}$ is normal on $D.$ \end{theorem} Further, we obtain a several complex variables version of a normality criterion due to Tan and Thin(\cite{Tan}, Theorem $1$, page $48$). For the sake of convenience, we shall use the following notations: $$f_{z_j}=\frac{\partial f}{\partial z_j} \ \ \mbox{ and } \ \ f_{z_kz_j}=\frac{\partial^2 f }{\partial z_j \partial z_k}.$$ \begin{theorem}\label{thm4} Let $\mathcal{F}$ be a family of holomorphic functions on a domain $D\subseteq\mathbb{C}^{n}.$ Assume that for each compact subset $K\subset D,$ there exist a set $E=E(K)\subset\mathbb{C}$ consisting of two distinct points and a positive constant $M=M(K)$ such that $$f^{\sharp}(z)\leq M \mbox{ and } (f_{z_k})^{\sharp}(z)\leq M,\mbox{ whenever } z\in K,~f(z)\in E,~ k=1, 2, \ldots, n.$$ Then $\mathcal{F}$ is normal on $D.$ \end{theorem} Finally, we obtain another version of a result due to Cao and Liu(\cite{Cao}, Theorem $1.8(i)$ page $1395$): \begin{theorem}\label{thm5} Let $\mathcal{F}$ be a family of holomorphic functions in a domain $D= \{z\in \mathbb{C}^n:|z|<1\}$ and $s>0$ be any real number. If $$\mathcal{G}= \{\frac{|Df(z)|}{1+|f(z)|^s}: f\in \mathcal{F}\}$$ is locally uniformly bounded in $D,$ then $\mathcal{F}$ is normal on $D.$ \end{theorem} \section{\textbf{Proofs of Main Results}} Let $f$ be a meromorphic function in $\ensuremath{\mathbb{C}}$ and $ a\in \overline{\ensuremath{\mathbb{C}}}.$ Then $a$ is called totally ramified value of $f$ if $f-a$ has no simple zeros. Following result known as {\it Nevanlinna's Theorem } (see \cite{Bergweiler}) plays a crucial role in our proofs: \begin{theorem}\label{thm1} Let $f$ be a non-constant meromorphic function $a_1, a_2, \ldots, a_q\in \overline{\ensuremath{\mathbb{C}}} \mbox{ and } m_1, m_2, \dots, m_q \in \ensuremath{\mathbb{N}}.$ Suppose that all $a_j$-points of $f$ have multiplicity at least $m_j, \mbox{ for }~ j=1, 2, \ldots, q.$ Then $$\sum\limits_{j=1}^{q}(1-\frac{1}{m_j}) \leq 2.$$ \end{theorem} If $f$ does not assume the value $a_j$ at all, then we take $m_j=\infty.$ From Theorem \ref{thm1}, it follows that if $f$ is entire function and $a_1, ~a_2 \in \ensuremath{\mathbb{C}}$ are distinct such that all $a_j-$ points of $f$ have multiplicity at least $3,$ then $f$ is constant. Also, it follows that if $a_1,\ a_2,\ a_3 \in \ensuremath{\mathbb{C}}$ are distinct such that all $a_j-$ points of $f$ have multiplicity at least $2,$ then $f$ is constant. Thus, a non-constant entire function can not have more than two totally ramified values. 
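A classical example shows that this bound is sharp: for the non-constant entire function $f(z)=\sin z$, every solution of $\sin z=1$ or $\sin z=-1$ is a zero of $\cos z=f'(z)$, so all $(\pm 1)$-points of $f$ have multiplicity $2$ (in fact exactly $2$, since $f''=-\sin z\neq 0$ at these points); together with the omitted value $\infty$ this gives $$\Big(1-\frac{1}{2}\Big)+\Big(1-\frac{1}{2}\Big)+\Big(1-\frac{1}{\infty}\Big)=2,$$ so equality holds in Theorem~\ref{thm1}, and the bound of two totally ramified values for a non-constant entire function cannot be improved.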
For the proof of Theorem \ref{thm:1} we need the following lemma: \begin{lemma}\label{lemma01} Let $f$ be a holomorphic function in $D=\{z\in\mathbb{C}^n:|z|<1\}$ and let $-1<\alpha<1.$ Let $\Omega:=\{z:|z|<r<1\}\times (0, \ 1]$ and $F:\Omega\rightarrow \ensuremath{\mathbb{R}}$ be defined as $$F(z,t)=\frac{(r-|z|)^{1+\alpha}t^{1+\alpha}(1+|f(z)|^2)f^{\sharp}(z)}{(r-|z|)^{2\alpha}t^{2\alpha}+|f(z)|^2}.$$ If $F(z,1)>1$ for some $z\in \{z:|z|<r<1\},$ then there exist $z_0\in \{z:|z|<r<1\}$ and $t_0 \in (0, \ 1)$ such that $$\sup_{|z|< r}F(z,t_0)=F(z_0,t_0)=1.$$ \end{lemma} A small variation in Lemma\ref{lemma01} yields: \begin{lemma}\label{lemma1} Let $f$ be a holomorphic function in $D=\{z\in\mathbb{C}^n:|z|<1\}$ and let $0\leq \alpha<\beta.$ Let $\Omega:=\{z:|z|<r<1\}\times (0, \ 1]$ and $F:\Omega\rightarrow \ensuremath{\mathbb{R}}$ be defined as $$F(z,t)=\frac{(r-|z|)^{\beta+\alpha}t^{\beta+\alpha}(1+|f(z)|^2)f^{\sharp}(z)}{(r-|z|)^{2\alpha}t^{2\alpha}+|f(z)|^2}.$$ If $F(z,1)>1$ for some $z\in \{z:|z|<r<1\},$ then there exist $z_0\in \{z:|z|<r<1\}$ and $t_0 \in (0, \ 1)$ such that $$\sup_{|z|< r}F(z,t_0)=F(z_0,t_0)=1.$$ \end{lemma} \textbf{Proof of Lemma \ref{lemma01}:} First, we show that \begin{equation} \lim\limits_{(r-|z|)t\to 0}F(z,t)=0. \label{00} \end{equation} Since $F$ is continuous on $\Omega$, we shall prove $(\ref{00})$ for $(r-|z|)t\to 0$ through an arbitrary sequence $x_j=(r-|z_j|)t_j\to 0 ~\mbox{as}~ j\to\infty$ where $z_j\in\{z:|z|<r\}, ~t_j\in(0,~1).$ Put $\lim_{j\to\infty}z_j=w_0.$ Then $|w_0|\leq r.$\\ If $f(w_0)\neq 0,$ then for $-1<\alpha,$ we have \begin{eqnarray*} 0 &\leq& \lim\limits_{j\to\infty}F(z_j,t_j)\\ &\leq& \lim\limits_{j\to\infty}\frac{x_j^{1+\alpha}(1+|f(z_j)|^2)f^{\sharp}(z_j)}{|f(z_j)|^2}\\ &=& 0 \end{eqnarray*} If $f(w_0)= 0,$ then for $\alpha<1,$ we have \begin{eqnarray*} 0 &\leq& \lim\limits_{j\to\infty}F(z_j,t_j)\\ &\leq& \lim\limits_{j\to\infty}x_j^{1-\alpha}(1+|f(z_j)|^2)f^{\sharp}(z_j)\\ &=& 0 \end{eqnarray*} Hence $(\ref{00})$ holds.\\ Let $$U:= \{(z,t)\in \Omega: F(z,t)>1\}.$$ Since $F(z,1)>1$ for some $z=z^*\in \{z:|z|<r<1\}$, $U\neq \emptyset.$ Clearly, $t_0:= \inf\{t: (z,t)\in U\}\neq 0.$ Also, $t_0\neq 1$ since otherwise there exists a sequence $\{t_j\} ~(<1)$ such that $t_j\to t_0$ as $j\to\infty$ and $F(z^*,t_j)\leq 1.$ This implies that $$\lim_{j\to\infty}F(z^*,t_j)=F(z^*,1)\leq 1,$$ which contradicts that $(z^*,1)\in U.$ Hence $0<t_0<1.$ Now we take $z_0\in\{z:|z|\leq r\} $ such that $$\sup_{|z|\leq r} F(z,t_0)= F(z_0, t_0). $$ To complete the proof we shall show that $F(z_0,t_0)=1.$ Suppose this is not true. Then we have the following two cases: {\it Case $1:$} When $F(z_0, t_0)<1.$ In this case there exists a sequence $(z_j,t_j)\in U$ such that $~t_j\to t_0.$ Let $z_j\to w_1.$ Then $|w_1|\leq r.$ Since $F(w_1,t_0)\leq F(z_0,t_0)<1,$ by continuity of $F$ it follows that for for sufficiently large $j,$ $F(z_j,t_j)<1,$ a contradiction. 
{\it Case $2:$} When $F(z_0, t_0)>1.$ Since $F(z_0,0)=0,$ by continuity of $F$ with respect to $t,$ there exists $t_1:0<t_1<t_0$ such that $$F(z_0,t_1)= 1+\frac{F(z_0,t_0)-1}{2}$$ which contradicts the definition of $t_0.$ $\Box$ \textbf{Proof of Theorem \ref{thm:1}:} Without loss of generality, we may assume that $D=\{z:|z|<1\}$ and let $\mathcal{F}$ be not normal at $z_0 = 0.$ Then by several complex variables analogue of Marty's theorem(see \cite{Dov1}, Theorem$2.1$), there exist $r_0:0<r_0<1, ~\{w_j\}\subset \{z:|z|<r_0\},$ and $\{f_j\}\subseteq\mathcal{F}$ such that $$\lim\limits_{j\to\infty} f_j^\sharp(w_j)=\infty.$$ Choose $r$ such that $0<r_0<r<1$ and corresponding to each $f_j \in \mathcal{F}$ define $F_j:\{z:|z|<r\}\times (0,~1]\rightarrow \ensuremath{\mathbb{R}}$ as $$F_j(z,t)= \frac{(r-|z|)^{1+\alpha}t^{1+\alpha}(1+|f_j(z)|^2)f_j^\sharp(z)}{(r-|z|)^{2\alpha}t^{2\alpha}+|f_j(z)|^2}.$$ Then \begin{eqnarray*} F_j(w_j,1) &=& \frac{(r-|w_j|)^{1+\alpha}(1+|f_j(w_j)|^2)f_j^\sharp(w_j)}{(r-|w_j|)^{2\alpha}+|f_j(w_j)|^2}\\ &=& \frac{(r-|w_j|)^{1-\alpha}(1+|f_j(w_j)|^2)f_j^\sharp(w_j)}{1+\frac{|f_j(w_j)|^2}{(r-|w_j|)^{2\alpha}}}\\ &>& \frac{(r-r_0)^{1-\alpha}(1+|f_j(w_j)|^2)f_j^\sharp(w_j)}{1+\frac{|f_j(w_j)|^2}{(r-r_0)^{2\alpha}}} \to \infty ~\mbox{as}~ j\to\infty \end{eqnarray*} Thus for sufficiently large $j,$ $F_j(w_j,1)>1.$ and hence by Lemma{\ref{lemma01}}, there exist $z_j\in \{z:|z|<r\} ~\mbox{and}~ t_j\in (0,~1)$ such that $$\sup_{|z|< r}F_j(z,t_j)= F_j(z_j,t_j)=1.$$ Thus, for sufficiently large $j$, we have \begin{eqnarray*} 1 &=& F_j(z_j, t_j)\\ &\geq& F_j(w_j, t_j)\\ &=& \frac{(r-|w_j|)^{1+\alpha}t_j^{1+\alpha}(1+|f_j(w_j)|^2)f_j^\sharp(w_j)}{(r-|w_j|)^{2\alpha}t_j^{2\alpha}+|f_j(w_j)|^2}\\ &\geq& \frac{t_j^{1+\alpha}(r-|w_j|)^{1+\alpha}(1+|f_j(w_j)|^2)f_j^\sharp(w_j)}{(r-|w_j|)^{2\alpha}+|f_j(w_j)|^2}\\ &=& t_j^{1+\alpha}F_j(w_j,1) \end{eqnarray*} which implies that $\lim\limits_{j\to\infty}t_j=0.$ Let $\rho_j=(r-|z_j|)t_j\to 0.$ Then $$\lim\limits_{j\to\infty}\frac{\rho_j}{r-|z_j|}=0.$$ Thus the function $$g_j(\zeta):= \frac{f_j(z_j+\rho_j\zeta)}{\rho_j^{\alpha}}$$ is defined for $$|\zeta|<R_j=\frac{r-|z_j|}{\rho_j}\to\infty.$$ Now \begin{eqnarray}\label{*} \sup_{|v|=1}\frac{|Dg_j(\zeta)v|}{1+|g_j(\zeta)|^2} &=& \sup_{|v|=1}\frac{\rho_j^{1-\alpha}|Df_j(z_j+\rho_j\zeta)v|}{1+\frac{|f_j(z_j+\rho_j\zeta)|^2}{\rho_j^{2\alpha}}}\nonumber\\ &=& \sup_{|v|=1}\frac{\rho_j^{1+\alpha}|Df_j(z_j+\rho_j\zeta)v|}{\rho_j^{2\alpha}+|f_j(z_j+\rho_j\zeta)|^2} \end{eqnarray} Since $$\frac{r-|z_j|}{r-|z_j+\rho_j\zeta|}\to 1,$$ there exists $\epsilon_j \to 0$ such that $$\rho_j^{1+\alpha}\leq (1+\epsilon_j)^{1+\alpha}(r-|z_j+\rho_j\zeta|)^{1+\alpha}t_j^{1+\alpha}, $$ and $$\rho_j^{2\alpha}\geq(1-\epsilon_j)^{2\alpha}(r-|z_j+\rho_j\zeta|)^{2\alpha}t_j^{2\alpha}.$$ Thus from $(\ref{*}),$ we get \begin{eqnarray*} \sup_{|v|=1}\frac{|Dg_j(\zeta)v|}{1+|g_j(\zeta)|^2} &\leq& \sup_{|v|=1}\frac{(1+\epsilon_j)^{1+\alpha}(r-|z_j+\rho_j\zeta|)^{\alpha+1}t_j^{1+\alpha}|Df_j(z_j+\rho_j\zeta)v|}{(1-\epsilon_j)^{2\alpha}(r-|z_j+\rho_j\zeta|)^{2\alpha}t_j^{2\alpha}+|f_j(z_j+\rho_j\zeta)|^2}\\ &=& \frac{(1+\epsilon_j)^{1+\alpha}(r-|z_j+\rho_j\zeta|)^{1+\alpha}t_j^{1+\alpha}(1+|f_j(z_j+\rho_j\zeta)|^2)f_j^{\sharp}(z_j+\rho_j\zeta)}{(1-\epsilon_j)^{2\alpha}(r-|z_j+\rho_j\zeta|)^{2\alpha}t_j^{2\alpha}+|f_j(z_j+\rho_j\zeta)|^2}\nonumber\\ &\leq& \frac{(1+\epsilon_j)^{1+\alpha}}{(1-\epsilon_j)^{2\alpha}} \end{eqnarray*} That is, $$g_j^{\sharp}(\zeta)\leq \frac{(1+\epsilon_j)^{1+\alpha}}{(1-\epsilon_j)^{2\alpha}}$$ and 
hence by Marty's theorem $\{g_j\}$ is normal in $\mathbb{C}^n.$ Without loss of generality we may assume that $\{g_j\}$ converges locally uniformly to a holomorphic function $g$ or $\infty$ in $\mathbb{C}^n.$ Since \begin{eqnarray*} g_j^{\sharp}(0) &=& \sup_{|v|=1}\frac{|Dg_j(0)v|}{1+|g_j(0)|^2}\\ &=& \sup_{|v|=1} \frac{\rho_j^{1+\alpha}|Df_j(z_j)v|}{\rho_j^{2\alpha}+|f_j(z_j)|^2}\\ &=& \frac{(r-|z_j|)^{1+\alpha}t_j^{1+\alpha}f_j^{\sharp}(z_j)(1+|f_j(z_j)|^2)}{(r-|z_j|)^{2\alpha}t_j^{2\alpha}+|f_j(z_j)|^2}\\ &=& F_j(z_j,t_j) = 1, \end{eqnarray*} it follows that $g(\zeta)$ is a nonconstant entire function in $\mathbb{C}^n.$ $\Box$ \textbf{Proof of Theorem \ref{thm:2}:} Let $\mathcal{F}$ be a family of holomorphic functions on $D=\{z:|z|<1\}\subseteq\mathbb{C}^{n}$ and let $\alpha$ and $\beta$ be real numbers such that $\alpha\geq 0$ and $\beta\geq \alpha+1.$ Further, suppose that there exist real number $r:0<r<1$ and sequences $\{z_j\}\subseteq D: |z_j|<r,$ $\{f_j\}\subseteq\mathcal{F},$ and $\{\rho_j\}\subset (0, \ 1]: \rho_j\to 0 $ such that $$g_j({\zeta})= \rho_j^{-\alpha}f_j(z_j+\rho_j^{\beta}\zeta) $$ converges locally uniformly to a nonconstant entire function $g$ in $\mathbb{C}^n.$ Then There is some $\zeta_0\in\mathbb{C}^n$ such that $g^{\sharp}(\zeta_0)>0.$ Suppose $z_j\to z_0$ as $j\to\infty.$ Then $|z_0|\leq r.$ Since $$|g_{j_{z_1}}(\zeta_0).v_1+ \ldots +g_{j_{z_n}}(\zeta_0).v_n|= \rho_j^{\beta-\alpha}|f_{j_{z_1}}(z_j+\rho_j^{\beta}\zeta_0).v_1+ \ldots +f_{j_{z_n}}(z_j+\rho_j^{m}\zeta_0).v_n|, $$ it follows that \begin{eqnarray*} f_j^{\sharp}(z_j+\rho_j^{\beta}\zeta_0) &=& \sup_{|v|=1}\frac{|f_{j_{z_1}}(z_j+\rho_j^{\beta}\zeta_0).v_1+ \ldots +f_{j_{z_n}}(z_j+\rho_j^{\beta}\zeta_0).v_n|}{1+|f_j(z_j+\rho_j^{\beta}\zeta_0)|^2}\\ &=& \sup_{|v|=1}\frac{\rho_j^{\alpha-\beta}|g_{j_{z_1}}(\zeta_0).v_1+ \ldots +g_{j_{z_n}}(\zeta_0).v_n|}{1+\rho_j^{2\alpha}|g_j(\zeta_0)|^2}\\ &\geq& \sup_{|v|=1}\frac{\rho_j^{\alpha-\beta}|g_{j_{z_1}}(\zeta_0).v_1+ \ldots +g_{j_{z_n}}(\zeta_0).v_n|}{1+|g_j(\zeta_0)|^2}\\ &=& \rho_j^{\alpha-\beta}g_j^{\sharp}(\zeta_0)\to\infty ~\mbox{as}~ j \to\infty \end{eqnarray*} and so by Marty's theorem $\mathcal{F}$ is not normal at $z_0$ and hence $\mathcal{F}$ is not normal on $D.$ Conversely, suppose that $\mathcal{F}$ is not normal at $z_0 = 0.$ Then by Marty's theorem, there exist $0<r^*<1, ~|z_j^*|<r^*, ~\{f_j\}\subseteq\mathcal{F}$ such that $$\lim\limits_{j\to\infty} f_j^\sharp(z_j^*)=\infty.$$ Choose $r$ such that $0<r^*<r<1$ and corresponding to each $f_j\in\mathcal{F}$ define $$F_j(z,t):= \frac{(r-|z|)^{\beta+\alpha}t^{\beta+\alpha}(1+|f_j(z)|^2)f_j^\sharp(z)}{(r-|z|)^{2\alpha}t^{2\alpha}+|f_j(z)|^2},$$ where $|z|<r, ~0<t\leq 1.$ Then \begin{eqnarray*} F_j(z_j^*,1) &=& \frac{(r-|z_j^*|)^{\beta+\alpha}(1+|f_j(z_j^*)|^2)f_j^\sharp(z_j^*)}{(r-|z_j^*|)^{2\alpha}+|f_j(z_j^*)|^2}\\ &=& \frac{(r-|z_j^*|)^{\beta-\alpha}(1+|f_j(z_j^*)|^2)f_j^\sharp(z_j^*)}{1+\frac{|f_j(z_j^*)|^2}{(r-|z_j^*|)^{2\alpha}}}\\ &>& \frac{(r-r^*)^{\beta-\alpha}(1+|f_j(z_j^*)|^2)f_j^\sharp(z_j^*)}{1+\frac{|f_j(z_j^*)|^2}{(r-r^*)^{2\alpha}}} \to \infty ~\mbox{as}~ j\to\infty. \end{eqnarray*} Thus for large $j$, we have $$F_j(z_j^*,1)>1$$ and therefore, by Lemma {\ref{lemma1}}, there exist ${z_j} ~\mbox{and}~ {t_j}$ satisfying $|z_j|< r, ~0<t_j<1$ such that $$\sup_{|z|< r}F_j(z,t_j)= F_j(z_j,t_j)=1.$$ Now rest of the proof goes on the same lines as that of the proof of Theorem\ref{thm:1}. 
$\Box$ \textbf{Proof of Theorem \ref{thm2}:} By Marty's theorem in $\ensuremath{\mathbb{C}}^n$ (see \cite{Dov1}, Theorem${2.1}$) we find that \eqref{eq3} is necessary with $E= \ensuremath{\mathbb{C}}.$ To prove the sufficiency, suppose \eqref{eq3} holds but $\mathcal{F}$ is not normal. Then by Theorem\ref{ZLCN} there exist sequences $\{f_j\}\subset \mathcal{F};$ ~$\{w_j\}\subset D : w_j\rightarrow w_0$ and $\{\rho_j\}\subset (0,~1):\rho_j \rightarrow 0,$ such that the sequence $\{g_j\}$ defined as $g_j(\zeta )=f_j(w_j+\rho_j\zeta)$ converges locally uniformly on $\ensuremath{\mathbb{C}}^n$ to a nonconstant entire function $g.$ Let $K$ be a compact set containing $w_0$ and suppose $g(\zeta_0)\in E$. By Hurwitz's theorem, there exists $\zeta_j\rightarrow\zeta_0$ such that $f_j(w_j+\rho_j\zeta_j)=g_j(\zeta_j )=g(\zeta_0)~\mbox{for large}~j.$ Since $f^{\sharp}_j(w_j+\rho_j\zeta_j)\leq M$ for $j$ sufficiently large, we have $$g^{\sharp}(\zeta_0)= \lim\limits_{j\to\infty} g_j^{\sharp}(\zeta_j)= \lim\limits_{j\to\infty} \rho_j f^{\sharp}_j(w_j+\rho _j\zeta_j)\leq \lim\limits_{j\to\infty}\rho_j M= 0.$$ Thus $g^{\sharp}(\zeta_0)=0$ whenever $g(\zeta_0)\in E$ implying that $$\sup\limits_{|v|=1}\left[\frac{\left|g_{z_1}(\zeta_0)\cdot v_1+g_{z_2}(\zeta_0)\cdot v_2+ \ldots +g_{z_n}(\zeta_0)\cdot v_n\right|}{1+ |g(\zeta_0)|^2}\right] =0 $$ whenever $g(\zeta_0)\in E$ which further implies that $$g_{z_1}(\zeta_0)\cdot v_1 + g_{z_2}(\zeta_0)\cdot v_2 + \ldots + g_{z_n}(\zeta_0)\cdot v_n=0$$ whenever $g(\zeta_0)\in E,$ for all $(v_1,~v_2, \ldots,~v_n)$ such that $$\sqrt{|v_1|^2+|v_2|^2+ \ldots +|v_n|^2}=1.$$ Taking $v_k=1$ and $v_m=0$ for all $m\neq k.$ Then $g_{z_k}(\zeta_0)=0 \mbox{ whenever } g(\zeta_0)\in E.$ Now, let $w= (a_1,~a_2,\ldots,~a_n), ~ w'=(b_1,~b_2,\ldots,~b_n)\in\ensuremath{\mathbb{C}}^n$ and define $$h_j(z_j):= g(b_1,\ldots, ~b_{j-1},~z_j,~a_{j+1}, \ldots ,~a_n), \ \ j=1,2, \ldots, n.$$ Suppose $h_j(a)\in E.$ Then $ g(b_1,\ldots,~b_{j-1},~a,~a_{j+1}, \ldots,~a_n)\in E$ and hence $$g_{z_j}(b_1,\ldots,b_{j-1},~a,~a_{j+1},\ldots,~a_n)=0.$$ That is, $$\frac{dh_j}{dz_j}(a)=0, \ j=1,2, \ldots, n.$$ This, by Theorem \ref{thm1}, implies that each $h_j(z_j)$ is constant. Thus for $j=1,2,\ldots, n, $ we have $$h_j(z_j)= g(b_1,\ldots,b_{j-1},~z_j,~a_{j+1},\ldots,~a_n)= \mbox{ a constant }$$ which implies that $g(w)=g(w')$ for all $w,~w'\in\ensuremath{\mathbb{C}}^n$ showing that $g$ is constant, a contradiction. $\Box$ \textbf{Proof of Theorem \ref{thm3}:} Let $K$ be a compact subset of $D$ and let $\zeta_1, \zeta_2, \zeta_3 \in \ensuremath{\mathbb{C}}$ be such that $h_{K}(\zeta_j)< \infty.$ Put $$E(K)=\{\zeta_1,\ \zeta_2, \ \zeta_3\} \mbox{ and } M(K)= \max_{\zeta\in E}|h_K(\zeta)|.$$ Then, for each $f\in \mathcal{F},$ we have \begin{eqnarray*} f^{\sharp}(z)&=&\sup\limits_{\left|v\right|=1}\frac{\left|Df(z)v\right|}{1+\left|f(z)\right|^{2}}\\ &\leq& \sup\limits_{\left|v\right|=1}\left|Df(z)\right|\left|v\right|\\ &=&\left|Df(z)\right| \leq h_{K}(f(z))\leq M(K) \end{eqnarray*} whenever $ z\in K$ and $f(z)\in E(K).$\\ By Theorem \ref{thm2} the normality of $\mathcal{F}$ on $D$ follows. $\Box$ \textbf{Proof of Theorem \ref{thm4}:} Suppose $\mathcal{F}$ is not normal. Then, by Theorem \ref{ZLCN}, there exist sequences $f_{j}\in \mathcal{F},~w_{j}\to w_{0},~\rho_{j} \to 0,$ such that the sequence $g_{j}(\zeta)=f_{j}(w_{j}+\rho_{j}\zeta)$ converges locally uniformly in $\ensuremath{\mathbb{C}}^{n}$ to a non-constant entire function $g.$ Let $K$ be a compact set containing $w_{0}$. 
Then there exists a set $E$ containing two points and $M>0$ such that $f^{\sharp}(z)\leq M, ~(f_{z_{k}})^{\sharp}(z)\leq M$ whenever $z\in K,~f(z)\in E.$ Let $w =(a_{1},a_{2},\ldots, a_{n}),~w' =(b_{1},b_{2}, \ldots,b_{n}) \in \ensuremath{\mathbb{C}}^{n}$ and define $$h_{i}(z_{i}):=g(b_{1}, \ldots, b_{i-1},z_{i},a_{i+1}, \ldots,a_{n}), ~i= 1,2, \ldots, n.$$ First, we shall show that for any $a\in E$, all zeros of $h_{i}(z_{i})-a$ have multiplicity at least $3.$ Let $c$ be zero of $h_{i}(z_{i})-a$. Then $\zeta_{0}=(b_{1},\ldots, b_{i-1},c,a_{i+1}, \ldots, a_n)$ is a zero of $g(z)-a.$ By Hurwitz's theorem, there exists a sequence $\zeta_{j}\to \zeta_{0}$ such that $f_{j}(w_{j}+\rho_{j}\zeta_{j}) \to a$ and therefore, $w_{j}+\rho_{j}\zeta_{j}\in K$ and $f_{j}(w_{j}+\rho_{j}\zeta_{j})\in E \mbox{ for large } j. $ Hence $$f_{j}^{\sharp}(w_{j}+\rho_{j}\zeta_{j}) \leq M, ~(f_{j_{z_{k}}})^{\sharp}(w_{j}+\rho_{j}\zeta_{j})\leq M,~k=1,2, \ldots,n.$$ Now, \begin{eqnarray*} g_{j}^{\sharp}(\zeta_{j}) &=& \sup_{|v|=1} \frac{|{g_j}_{z_1}(\zeta_{j}).v_{1} + \ldots +{g_j}_{z_n}(\zeta_{j}).v_{n}|}{1+|g_{j}(\zeta_{j})|^{2}} \\ &=& \sup_{|v|=1} \frac{\rho_{j}|f_{j_{z_{1}}}(w_{j}+\rho_{j}\zeta_{j}).v_{1}+ \ldots +f_{j_{z_{n}}}(w_{j}+\rho_{j}\zeta_{j}).v_{n}|}{1+|f_{j}(w_{j}+\rho_{j}\zeta_{j})|^{2}} \\ &=& \rho_{j}f_{j}^{\sharp}(w_{j}+\rho_{j}\zeta_{j}) \\ &\leq& \rho_{j}M \to 0 ~as ~j \to \infty. \end{eqnarray*} Thus $g^{\sharp}(\zeta_{0}) = 0 $ which implies that $$\sup_{|v|=1} \frac{|g_{z_1}(\zeta_{0}).v_{1} + \ldots +g_{z_i}(\zeta_{0}).v_{i}+ \ldots +g_{z_n}(\zeta_{0}).v_{n}|}{1+|g(\zeta_{0})|^{2}} = 0$$ and hence $g_{z_i}(\zeta_{0}) = 0$ for each $i=1,2,\ldots, n$. That is, $ g_{z_i}(b_{1}, \ldots, b_{i-1},c,a_{i+1}, \ldots ,a_{n}) = 0$ implying that $$ \frac{dh_i}{dz_i}(c)= 0, \ i=1,2, \ldots, n.$$ This shows that $c$ is an $a-$point of $h_{i}$ with multiplicity at least $2.$\\ Next, \begin{eqnarray*} (g_{j_{z_i}})^{\sharp}(\zeta_{j}) &=& \sup_{|v|=1} \frac{|g_{j_{z_iz_1}}(\zeta_{j}).v_{1} + \ldots +g_{j_{z_iz_n}}(\zeta_{j}).v_{n}|}{1+|g_{j_{z_{i}}}(\zeta_{j})|^{2}} \\ &=& \sup_{|v|=1} \frac{\rho_{j}^{2}|f_{j_{z_{i}z_{1}}}(w_{j}+\rho_{j}\zeta_{j}).v_{1}+ \ldots +f_{j_{z_{i}z_{n}}}(w_{j}+\rho_{j}\zeta_{j}).v_{n}|}{1+\rho_{j^{2}}|f_{j_{z_{i}}}(w_{j}+\rho_{j}\zeta_{j})|^{2}} \\ &=& \sup_{|v|=1} \frac{\rho_{j}^{2}(f_{j_{z_{i}}})^{\sharp}(w_{j}+\rho_{j}\zeta_{j})[1+|f_{j_{z_{i}}}(w_{j}+\rho_{j}\zeta_{j})|^{2}]}{1+\rho_{j}^{2}|f_{j_{z_{i}}}(w_{j}+\rho_{j}\zeta_{j})|^{2}} \end{eqnarray*} Since\\ $\sup_{|v|=1} |f_{j_{z_{1}}}(w_{j}+\rho_{j}\zeta_{j}).v_{1}+ \ldots +f_{j_{z_{n}}}(w_{j}+\rho_{j}\zeta_{j}).v_{n}| = f_{j}^{\sharp}(w_{j}+\rho_{j}\zeta_{j})[1+|f_{j}(w_{j}+\rho_{j}\zeta_{j})|^{2}], $ therefore, $|f_{j_{z_{i}}}(w_{j}+\rho_{j}\zeta_{j})| < M[1+\max_{d\in E}|d|^{2}].$ Thus, \begin{eqnarray*} (g_{j_{z_{i}}})^{\sharp}(\zeta_{j}) &\leq& \frac{\rho_{j}^{2}.M[1+\{M(1+\max_{d \in E}|d|^{2})\}^{2}]}{1+\rho_{j}^{2}|f_{j_{z_{i}}}(w_{j}+\rho_{j}\zeta_{j})|^{2}}\\ &\leq& M[1+\{M(1+\max\limits_{d \in E}|d|^{2})\}^{2}]\rho_{j}^{2}\\ &\to & 0 ~as ~j \to\infty \end{eqnarray*} and hence $(g_{z_{i}})^{\sharp}(\zeta_{0})=0.$ That is, $$\sup_{|v|=1} |g_{z_iz_1}(\zeta_{0}).v_{1}+ \ldots +g_{z_iz_n}(\zeta_{0}).v_{n}| = 0.$$ That is, $g_{z_iz_i}(\zeta_{0}) = 0 $ implying that $g_{z_iz_i}(b_{1},\ldots,b_{i-1},c,a_{i+1},\ldots, a_{n})= 0 $. Hence $$\frac{d^{2}}{dz_{i}^{2}}h_{i}(c)= 0 $$ showing that $c$ is an $a-$point of $h_{i}(z_{i})$ with multiplicity at least $3$. 
Now by Theorem~\ref{thm1}, we conclude that each $h_{i}(z_{i})$ is constant and hence $g$ is constant, a contradiction. $\Box$ \textbf{Proof of Theorem \ref{thm5}:} Suppose $\mathcal{F}$ is not normal. Then, by Theorem \ref{ZLCN}, there exist sequences $f_{j}\in \mathcal{F},~w_{j}\to w_{0},~\rho_{j} \to 0,$ such that the sequence $g_{j}(\zeta)=f_{j}(w_{j}+\rho_{j}\zeta)$ converges locally uniformly in $\ensuremath{\mathbb{C}}^{n}$ to a non-constant entire function $g.$ \\ Let $K$ be a compact set containing $w_0.$ Since $\mathcal{G}$ is locally uniformly bounded in $D,$ there exists a constant $M=M(K)>0$ such that $$\frac{|Df(z)|}{1+|f(z)|^s}\leq M, ~z\in K, ~f\in\mathcal{F}.$$ Now \begin{eqnarray*} |Dg_j(\zeta)| &=& \rho_j|Df_j(w_j+\rho_j\zeta)|\\ &\leq& \rho_j M(1+|f_j(w_j+\rho_j\zeta)|^s)\\ &=& \rho_j M(1+|g_j(\zeta)|^s) \to 0 ~\mbox{as}~ j \to \infty \end{eqnarray*} implies that $|Dg(\zeta)|\equiv 0.$ That is, $$\frac{\partial g(\zeta)}{\partial z_i}= 0, ~i= 1, 2, \ldots, n,$$ which shows that $g$ is constant, a contradiction. $\Box$\\ \end{document}
arXiv
\begin{document} \title{Global lower mass-bound for critical configuration models in the heavy-tailed regime} \author{Shankar Bhamidi$^1$, Souvik Dhara$^{2,3}$, Remco van der Hofstad$^{4}$, Sanchayan Sen$^5$} \maketitle \blfootnote{\emph{Emails:} \href{mailto:[email protected]}{[email protected]}, \href{mailto:[email protected]}{[email protected]}, \href{mailto:[email protected]}{[email protected]}, \href{mailto:[email protected]}{[email protected]}} \blfootnote{$^1$Department of Statistics and Operations Research, University of North Carolina} \blfootnote{$^2$ Department of Mathematics, Massachusetts Institute of Technology} \blfootnote{$^3$ Microsoft Research Lab -- New England} \blfootnote{$^4$Department of Mathematics and Computer Science, Eindhoven University of Technology} \blfootnote{$^5$Department of Mathematics, Indian Institute of Science} \blfootnote{2010 \emph{Mathematics Subject Classification.} Primary: 60C05, 05C80.} \blfootnote{\emph{Keywords and phrases}. Global lower mass-bound, critical configuration model, heavy-tailed degrees} \blfootnote{\emph{Acknowledgment}. The authors are grateful to two anonymous referees for their careful reading and many comments and suggestions on an earlier version of the paper. SB was partially supported by NSF grants DMS-1613072, DMS-1606839, and ARO grant W911NF-17-1-0010. SD and RvdH were supported by the Netherlands Organisation for Scientific Research (NWO) through Gravitation Networks grant 024.002.003. In addition, RvdH was supported by VICI grant 639.033.806. SS has been supported in part by the Infosys foundation, Bangalore, and by MATRICS grant MTR/2019/000745 from SERB. SD thanks Eindhoven University of Technology, where a major part of this work was done. } \maketitle \begin{abstract} We establish the global lower mass-bound property for the largest connected components in the critical window for the configuration model when the degree distribution has an infinite third moment. The scaling limit of the critical percolation clusters, viewed as measured metric spaces, was established in \cite{BDHS17} with respect to the Gromov-weak topology. Our result extends those scaling limit results to the stronger Gromov-Hausdorff-Prokhorov topology under slightly stronger assumptions on the degree distribution. This implies the distributional convergence of global functionals such as the diameter of the largest critical components. Further, our result gives a sufficient condition for compactness of the random metric spaces that arise as scaling limits of critical clusters in the heavy-tailed regime. \end{abstract} \section{Introduction} Any finite, connected graph $\mathscr{C}$ can be viewed as a metric space with the distance between points given by $a \ensuremath{\mathrm{d}} (\cdot,\cdot)$ for some constant $a>0$, where $\ensuremath{\mathrm{d}}(\cdot,\cdot)$ is used as a generic notation to denote the graph-distance (i.e., number of edges in the shortest path between vertices). There is a natural probability measure $\mu$ associated to the metric space $(\mathscr{C},a \ensuremath{\mathrm{d}})$ given by $\mu (A) = |A|/|\mathscr{C}| $ for any $A\subset \mathscr{C}$, where $|A|$ denotes the number of vertices in $A$. We denote this metric measure space by $(\mathscr{C},\red{a})$. Fix any $\delta>0$ and define the $\delta$-lower mass of $(\mathscr{C},\red{a})$ by \begin{eq}\label{eq:defn:GLM} \fm(\delta):= \frac{\inf_{u\in \mathscr{C}}\big|\{v\in \mathscr{C}: a \ensuremath{\mathrm{d}}(v,u) \leq \delta\}\big|}{|\mathscr{C}|}. 
\end{eq} Thus, $\fm (\delta)$ is the least \emph{mass} in any $\delta$-neighborhood of a vertex in $(\mathscr{C},\red{a})$. For a sequence $(\mathscr{C}_n,a_n)_{n\geq 1}$ of graphs viewed as metric measure spaces, the global lower mass-bound property is defined as follows: \begin{defn}[Global lower mass-bound property \cite{ALW16-2}]\normalfont For $\delta>0$, let $\fm_n(\delta)$ denote the $\delta$-lower mass of $(\mathscr{C}_n,a_n)$. Then $(\mathscr{C}_n,a_n)_{n\geq 1}$ is said to satisfy the global lower mass-bound property if and only if $\sup_{n\geq 1} \fm_n(\delta)^{-1}<\infty$ for any $\delta >0$. When $(\mathscr{C}_n)_{n\geq 1}$ is a collection of random graphs, $(\mathscr{C}_n,a_n)_{n\geq 1}$ is said to satisfy the global lower mass-bound property if and only if $(\fm_n(\delta)^{-1})_{n\geq 1}$ is a tight sequence of random variables for any $\delta >0$. \end{defn} The aim of this paper is to prove the global lower mass-bound property for largest connected components of random graphs with given degrees (configuration model) at criticality, when the third moment of the empirical degree distribution tends to infinity (Theorem~\ref{thm:gml-bound}). Informally speaking, the global lower mass-bound property ensures that all the small neighborhoods of vertices in the `large' critical component have mass bounded away from zero, so that the component does not have any \emph{light spots} and the total mass is well-distributed over the whole component. This has several interesting consequences in the theory of critical random graphs. Our main motivation comes from the work of Athreya, L\"ohr, and Winter~\cite{ALW16-2}, who have shown that the global lower mass-bound property can be used to prove Gromov-Hausdorff-Prokhorov (GHP) convergence of random metric spaces. In a previous paper \cite{BDHS17}, we have studied the critical percolation clusters for the configuration model in the heavy-tailed universality class. We have proved that the ordered vector of components converges in distribution to suitable random objects in the Gromov-weak topology. The global lower mass-bound in this paper shows that the result of \cite{BDHS17} in fact holds with respect to the stronger GHP-topology. One motivating reason for proving the GHP-convergence is that it yields the scaling limit of global functionals like the diameter of large critical components. Finding the scaling limit for the diameter of critical components is a daunting task even for the Erd\H{o}s-R\'enyi random graph. Nachmias and Peres~\cite{NP08} estimated the tail probabilities of the diameter, but showing a distributional convergence result was a difficult question, until the seminal paper by Addario-Berry, Broutin and Goldschmidt~\cite{ABG09} that proved the GHP-convergence for critical Erd\H{o}s-R\'enyi random graphs. As a corollary of Theorem~\ref{thm:gml-bound}, we also get distributional convergence of the suitably rescaled diameter of the critical percolation clusters in the heavy-tailed regime (Theorem~\ref{thm:GHP}), where the scaling limit and exponents turn out to be different than those for the Erd\H{o}s-R\'enyi case. We will further discuss the applications and the scope of this work as well as its technical contributions after stating our results in Section~\ref{sec:discussion}. We start by defining the configuration model and state the precise assumptions. \subsection{The configuration model} Consider a non-increasing sequence of degrees $\boldsymbol{d} = ( d_i )_{i \in [n]}$ such that $\ell_n = \sum_{i \in [n]}d_i$ is even. 
For notational convenience, we suppress the dependence of the degree sequence on $n$. The configuration model on $n$ vertices having degree sequence $\boldsymbol{d}$ is constructed as follows \cite{B80,BC78}: \begin{itemize} \item[] Equip vertex $j$ with $d_{j}$ stubs, or \emph{half-edges}. Two half-edges create an edge once they are paired. Therefore, initially we have $\ell_n=\sum_{i \in [n]}d_i$ half-edges. Pick any one half-edge and pair it with another uniformly chosen half-edge from the remaining unpaired half-edges and keep repeating the above procedure until all the unpaired half-edges are exhausted. \end{itemize} Let $\mathrm{CM}_n(\bld{d})$ denote the graph constructed by the above procedure. Note that $\mathrm{CM}_n(\bld{d})$ may contain self-loops or multiple edges. Given any degree sequence, let $\mathrm{UM}_n(\bld{d})$ denote the graph chosen uniformly at random from the collection of all simple graphs with degree sequence $\boldsymbol{d}$. It can be shown that the conditional law of $\mathrm{CM}_{n}(\boldsymbol{d})$, conditioned on it being simple, is the same as $\mathrm{UM}_n(\bld{d})$ (see e.g. \cite[Proposition 7.13]{RGCN1}). \subsection{Main results} Fix a constant $\tau\in (3,4)$, which will denote the power-law exponent of the asymptotic degree distribution of $\mathrm{CM}_n(\bld{d})$. Throughout this paper we will use the shorthand notation \begin{equation}\label{eqn:notation-const} \alpha= 1/(\tau-1),\quad \rho=(\tau-2)/(\tau-1),\quad \eta=(\tau-3)/(\tau-1). \end{equation} We use the standard notation of $\xrightarrow{\scriptscriptstyle\ensuremath{\mathbbm{P}}}$ and $\xrightarrow{\scriptscriptstyle d}$ to denote convergence in probability and in distribution, respectively. Also, we use a generic notation $C$ to denote a positive universal constant whose exact value may change from line to line. We use Bachmann–Landau asymptotic notation $o(\cdot)$, $O(\cdot)$, $\Theta (\cdot)$, $\omega (\cdot)$, $\Omega(\cdot)$. A sequence of events $(\mathcal{E}_n)_{n\geq 1}$ is said to occur with high probability~(whp) with respect to the probability measures $(\mathbbm{P}_n)_{n\geq 1}$ when $\mathbbm{P}_n\big( \mathcal{E}_n \big) \to 1$. For (random) variables $X_n$ and $Y_n$, define $X_n = O_{\scriptscriptstyle\mathbbm{P}}(Y_n)$ when $ ( |X_n|/|Y_n| )_{n \geq 1} $ is a tight sequence; $X_n =o_{\scriptscriptstyle\mathbbm{P}}(Y_n)$ when $X_n/Y_n \xrightarrow{\scriptscriptstyle\ensuremath{\mathbbm{P}}} 0 $; $X_n =\ensuremath{\Theta_{\scriptscriptstyle\PR}}(Y_n)$ if both $X_n=O_{\scriptscriptstyle \PR}(Y_n) $ and $Y_n=O_{\scriptscriptstyle \PR}(X_n)$. We first state the general assumptions that are used to prove scaling limits for critical configuration models with heavy-tailed degree distributions as identified previously in~\cite{DHLS16,BDHS17}: \begin{assumption}[General assumptions]\label{assumption1} \normalfont For each $n\geq 1$, let $\bld{d}=\boldsymbol{d}_n=(d_1,\dots,d_n)$ be a degree sequence satisfying $d_1\geq d_2\geq\ldots\geq d_n$. 
We assume the following about $(\boldsymbol{d}_n)_{n\geq 1}$ as $n\to\infty$: \begin{enumerate}[(i)] \item \label{assumption1-1} (\emph{High-degree vertices}) For each fixed $i\geq 1$, \begin{equation}\label{defn::degree} n^{-\alpha}d_i\to \theta_i, \end{equation} where $\boldsymbol{\theta}=(\theta_1,\theta_2,\dots) \in \ell^3_{{\scriptscriptstyle \downarrow}} \setminus \ell^2_{{\scriptscriptstyle \downarrow}}$, where $\red{\ell^p_{{\scriptscriptstyle \downarrow}}}:=\{(x_i)_{i\geq 1}: x_1\geq x_2\geq \dots \text{ and }\sum_{i}x_i^p<\infty\}$. \item \label{assumption1-2} (\emph{Moment assumptions}) Let $D_n$ denote the degree of a typical vertex, i.e., a vertex chosen uniformly at random from the vertex set $[n]$, independently of $\mathrm{CM}_n(\boldsymbol{d})$. Then, $D_n$ converges in distribution to some discrete random variable $D$ and \begin{gather} \ensuremath{\mathbbm{E}}[D_n] = \frac{1}{n}\sum_{i\in [n]}d_i\to \mu := \ensuremath{\mathbbm{E}}[D], \qquad \ensuremath{\mathbbm{E}}[D_n^2] = \frac{1}{n}\sum_{i\in [n]}d_i^2 \to \mu_2:=\ensuremath{\mathbbm{E}}[D^2],\label{eqn:669}\\ \lim_{K\to\infty}\limsup_{n\to\infty}n^{-3\alpha} \sum_{i=K+1}^{n} d_i^3=0.\label{eqn:670} \end{gather} \item Let $n_1$ be the number of degree-one vertices. Then $n_1=\Theta(n)$, which is equivalent to assuming that $\prob{D=1}>0$. \end{enumerate} \end{assumption} \begin{remark} \normalfont \label{rem:assumption-1} As important examples, Assumption~\ref{assumption1} was shown to hold when the degree distribution is power-law with exponent $\tau\in (3,4)$ \cite[Section 2]{DHLS16}. More precisely, if $F$ is a distribution function on the nonnegative integers satisfying $[1-F](x) = (1+o(1))C x^{-(\tau-1)}$ as $x\to\infty$, then Assumptions~\ref{assumption1}(i),~\ref{assumption1}(ii) are satisfied when (a) $d_i= [1-F]^{-1}(i/n)$, and when (b) $d_i$ are the order statistics of an i.i.d.~sample from $F$ (we add a dummy half-edge to vertex 1 if $\sum_{i\in [n]} d_i$ is odd). Assumptions~\ref{assumption1}(iii) is also satisfied in these examples if $F$ has non-zero mass at 1. \end{remark} We further assume that the configuration model lies within the critical window of the phase transition, i.e., for some $\lambda\in \ensuremath{\mathbb{R}}$, \begin{equation}\label{defn:criticality} \nu_n=\frac{\sum_{i\in [n]}d_i(d_i-1)}{\sum_{i\in [n]}d_i} = 1 + \lambda n^{-\eta} + o(n^{-\eta}). \end{equation} Denote the $i$-th largest connected component of $\mathrm{CM}_n(\bld{d})$ by $\mathscr{C}_{\scriptscriptstyle (i)}$, breaking ties arbitrarily. For each $v\in [n]$ and $\delta>0$, let $\mathcal{N}_v(\delta)$ denote the $\delta n^{\eta}$ neighborhood of $v$ in $\mathrm{CM}_n(\bld{d})$ in the graph distance. For each $i\geq 1$, define \begin{equation} \label{eq:m-i-defn} \mathfrak{m}_i^n(\delta) = \inf_{v\in\mathscr{C}_{\scriptscriptstyle (i)}}n^{-\rho} |\mathcal{N}_v(\delta)|. \end{equation} Our goal is to prove the global lower mass-bound property for the critical components~$\mathscr{C}_{\scriptscriptstyle (i)}$. For $\mathrm{CM}_n(\bld{d})$ satisfying Assumption~\ref{assumption1} and \eqref{defn:criticality}, it was shown in \cite[Theorem 1]{DHLS16} that \begin{eq}\label{eq:comp-size-conv} (n^{-\rho} |\mathscr{C}_{\scriptscriptstyle (i)}|)_{i\geq 1} \ensuremath{\xrightarrow{d}} (\xi_i)_{i\geq 1}, \end{eq} with respect to the $\ell^2_{{\scriptscriptstyle \downarrow}}$-topology, where the $\xi_i$'s are non-degenerate random variables with support $(0,\infty)$. 
In view of \eqref{eq:comp-size-conv}, it is enough to rescale by $n^{\rho}$ in \eqref{eq:m-i-defn} instead of dividing by the component sizes as in \eqref{eq:defn:GLM}. In order to prove tightness of $\mathfrak{m}_i^n(\delta)$, we will need a further technical assumption on the degrees. \red{ \begin{assumption}\label{assumption-extra} \normalfont Let $V_n^*$ be a vertex chosen in a size-biased manner with sizes being $(d_i/\ell_n)_{i\in [n]}$, i.e., $\ensuremath{\mathbbm{P}}(V_n^* = i) = d_i/\ell_n$, and let $D_n^*$ be the degree of $V_n^*$. There exist constants $c_0>0$ and $c_1 >1$ such that for all $n\geq 1$, \begin{eq}\label{eq:defn-D-n-lb} \ensuremath{\mathbbm{P}}(l <D_n^*\leq c_1l) \geq \frac{c_0}{l^{\tau-2}} \ \ \text{ for }\ \ 1\leq l< d_1\, . \end{eq} \end{assumption} } \begin{remark} \label{rem:assumption-2}\normalfont \red{ Assumption~\ref{assumption-extra} says that the mass distribution in the tail of $D_n^*$ is well-behaved in the sense that we have a uniform (over $n$) lower bound of the form \eqref{eq:defn-D-n-lb}. Such lower bounds can be used to obtain tail-bounds on the heights of branching processes; see Proposition~\ref{prop:RW-hitting-estimate} below. (See also \cite[Theorem 1.3]{A17}.) It can be easily shown that Assumption~\ref{assumption-extra} holds in the examples discussed in Remark~\ref{rem:assumption-1} by observing that the size-biased distribution is a power-law with exponent $\tau -1$. } \end{remark} \noindent The following theorem is the main result of this paper: \begin{theorem}[Global lower mass-bound for $\mathrm{CM}_n(\bld{d})$] \label{thm:gml-bound} Suppose that {\rm Assumptions~\ref{assumption1},~\ref{assumption-extra}} and the criticality condition \eqref{defn:criticality} hold. Then, for each fixed $i\geq 1$, $(\mathscr{C}_{\scriptscriptstyle (i)},n^{-\eta})_{n\geq 1}$ satisfies the global lower mass-bound, i.e., for any $\delta>0$, the sequence $(\mathfrak{m}_i^n(\delta)^{-1})_{n \geq 1}$ is tight. \end{theorem} \noindent By \cite[Theorem 1.1]{J09c}, under the condition \eqref{eqn:669} in Assumption~\ref{assumption1}, \begin{equation} \liminf_{n\to\infty} \ensuremath{\mathbbm{P}}(\mathrm{CM}_n(\bld{d}) \text{ is simple})>0. \end{equation} This immediately implies the following: \begin{theorem}[Global lower mass-bound for $\mathrm{UM}_n(\bld{d})$]\label{cor:GLM-uniform} Under {\rm Assumptions~\ref{assumption1},~\ref{assumption-extra}} and \eqref{defn:criticality}, the largest components of $\mathrm{UM}_n(\bld{d})$ also satisfy the global lower mass-bound property. \end{theorem} Next we state another important corollary, which says that the global lower mass-bound property is also satisfied by critical percolation clusters in $\mathrm{CM}_n(\bld{d})$ and $\mathrm{UM}_n(\bld{d})$. To this end, let us assume that \begin{equation} \label{eq:defn-super-crit} \lim_{n\to\infty}\frac{\sum_{i\in [n]}d_i(d_i-1)}{\sum_{i\in [n]}d_i} = \nu >1. \end{equation} In this regime, $\mathrm{CM}_n(\bld{d})$ is supercritical in the sense that it contains a unique \emph{giant} component whp, whereas for $\nu<1$ all components have size $o_{\scriptscriptstyle \PR}(n)$ \cite{JL09,MR95}. Percolation refers to retaining each edge of a graph independently with probability $p$ and deleting it otherwise, i.e., with probability $1-p$. The critical window for percolation on $\mathrm{CM}_n(\bld{d})$ in the heavy-tailed setting was studied in \cite{DHLS16,BDHS17}, and is defined by the values of $p$ given by \begin{equation}\label{eq:critical-window-defn} p_c(\lambda) = \frac{1}{\nu_n}+\frac{\lambda}{n^{\eta}}+o(n^{-\eta}).
\end{equation} Let $\mathscr{C}_{\scriptscriptstyle (i)}(p_c(\lambda))$ denote the $i$-th largest component of the graph obtained by percolation with probability $p_c(\lambda)$ on the graph $\mathrm{CM}_n(\bld{d})$. Then the following result holds: \begin{theorem}[Global lower mass-bound for critical percolation]\label{cor:GLM-percoltion} Under {\rm Assumptions~\ref{assumption1}(i), \ref{assumption1}(ii), \ref{assumption-extra},} \eqref{eq:defn-super-crit} and \eqref{eq:critical-window-defn}, $(\mathscr{C}_{\scriptscriptstyle (i)}(p_c(\lambda)),n^{-\eta})_{n\geq 1}$ satisfies the global lower mass-bound property, for each fixed $i\geq 1$. This result also holds for percolation on $\mathrm{UM}_n(\bld{d})$. \end{theorem} Let $\cG_n$ denote the graph obtained by performing percolation with edge retention probability $p_c(\lambda)$ (defined in \eqref{eq:critical-window-defn}) on $\mathrm{CM}_n(\bld{d})$. Let $\bld{d}^p=(d_i^p)_{i\in [n]}$ denote the degree sequence of $\cG_n$. By \cite[Lemma 3.2]{F07}, the conditional law of $\cG_n$, conditionally on $\bld{d}^p$, is the same as the law of $\mathrm{CM}_n(\bld{d}^p)$. Thus, Theorem~\ref{cor:GLM-percoltion} follows from Theorem~\ref{thm:gml-bound} if we can show that the percolated degree sequence~$\bld{d}^p$ satisfies (with possibly different parameters) Assumptions~\ref{assumption1}~and~\ref{assumption-extra} with high probability when the original degree sequence $(d_i)_{i\in [n]}$ satisfies Assumptions~\ref{assumption1}(i),~\ref{assumption1}(ii),~\ref{assumption-extra}, and also \eqref{defn:criticality} holds \blue{for~$\bld{d}^p$ if further the percolation probability is given by \eqref{eq:critical-window-defn}}. The verification of these assumptions is provided in Section~\ref{sec:perc-degrees}. \begin{remark}\normalfont It is worthwhile to point out that Theorem~\ref{thm:gml-bound} can be proved when the $\mathscr{C}_{\scriptscriptstyle (i)}$'s are endowed with a more general measure than the counting measure. To be precise, for any sequence of vertex weights $(w_v)_{v\in [n]}$, the component $\mathscr{C}_{\scriptscriptstyle (i)}$ can be equipped with the measure $\mu_{\scriptscriptstyle (i)} (A) = \sum_{v\in A} w_v / \sum_{v\in \mathscr{C}_{\scriptscriptstyle (i)}} w_v$, for any $A \subset \mathscr{C}_{\scriptscriptstyle (i)}$. Then Theorem~\ref{thm:gml-bound} can also be proved using methods identical to those in this paper, with the additional assumptions that $$\lim_{n\to\infty}\frac{1}{\ell_n}\sum_{i\in [n]} d_i w_i = \mu_{w}, \quad \max\bigg\{\sum_{i\in [n]}d_iw_i^2,\sum_{i\in [n]}d_i^2w_i\bigg\} = O(n^{3\alpha}). $$ These additional assumptions are required when we apply the results from \cite{DHLS16} (see \cite[Theorem 21]{DHLS16}). We adopted the simpler version of the counting measure here because it relates directly to \cite[Theorem 2.1]{BDHS17}. \end{remark} \subsection{Discussion} \label{sec:discussion} \paragraph*{Scaling limit of critical percolation clusters.} We write $n^{-\eta} \mathscr{C}_{\scriptscriptstyle (i)} (p_c(\lambda))$ to denote the $i$-th largest component of $\mathrm{CM}_n(\bld{d},p_c(\lambda))$, viewed as a measured metric space with the metric being the graph distance re-scaled by $n^{\eta}$, and the measure being proportional to the counting measure.
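For illustration (and not used in any proof), the percolated graph and its largest components are easy to generate numerically from an edge list such as the one produced by the configuration-model sketch in the previous subsection: the following Python fragment retains each edge independently with probability $p$ and extracts the component sizes by union--find; in the critical window one would take $p=p_c(\lambda)$ as in \eqref{eq:critical-window-defn}.
\begin{verbatim}
import random

def percolate(edges, p):
    # keep each edge independently with probability p (delete with prob. 1 - p)
    return [e for e in edges if random.random() < p]

def component_sizes(n, edges):
    # union-find over the n vertices; returns component sizes, largest first
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values(), reverse=True)
\end{verbatim}
The $i$-th entry of the returned list corresponds to $|\mathscr{C}_{\scriptscriptstyle (i)}(p_c(\lambda))|$ when $p=p_c(\lambda)$.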
Athreya, L\"ohr, and Winter~\cite{ALW16-2} showed that the global lower mass-bound property forms a crucial ingredient to prove convergence of random metric spaces such as $n^{-\eta} \mathscr{C}_{\scriptscriptstyle (i)} (p_c(\lambda))$ with respect to the Gromov-Hausdorff-Prokhorov (GHP) topology on the space of compact metric spaces. The other key ingredient is the scaling limit for $n^{-\eta}\mathscr{C}_{\scriptscriptstyle (i)}(p_c(\lambda))$ with respect to the Gromov-weak topology, which was established in \cite[Theorem 2.1]{BDHS17}. The Gromov-weak topology is an analogue of finite-dimensional convergence, since it considers distances between a finite number of sampled points from the underlying metric space. Thus, global functionals such as the diameter are not continuous with respect to this topology. Indeed, it may be the case that there is a long path of growing length, that has asymptotically negligible mass. In our context, the problem could arise due to paths of length much larger than $ n^{\eta}$. The global lower mass-bound property ensures that the components have sufficient mass everywhere. This forbids the existence of long thin paths, when the total mass of the component converges. For this reason, Gromov-weak convergence and global lower mass-bound together imply GHP-convergence when the support of the limiting measure is the entire limiting space \cite[Theorem 6.1]{ALW16-2}. For formal definitions of the Gromov-weak topology, and the GHP-topology on the space of compact measured metric spcaes, we refer the reader to \cite{BHS15,GPW09,ALW16-2}. Following the above discussion, the next theorem is a direct consequence of Theorem~\ref{cor:GLM-percoltion}, \cite[\red{Theorem 2.3}]{BDHS17} and \cite[Theorem 6.1]{ALW16-2}: Let $\mathbb{M}$ denote the space of measured compact metric spaces equipped with the GHP-topology, and let $\mathbb{M}^\ensuremath{\mathbb{N}}$ denote the product space with the associated product topology. \begin{theorem}[GHP convergence of critical percolation clusters]\label{thm:GHP} There exists a sequence of measured metric spaces $(\blue{\mathscr{M}_i})_{i\geq 1} = ((M_i,\ensuremath{\mathrm{d}}_i,\mu_i))_{i\geq 1} \in \mathbb{M}^\ensuremath{\mathbb{N}}$ such that, under {\rm Assumptions~\blue{\ref{assumption1}(i),~\ref{assumption1}(ii),}~\ref{assumption-extra}}\blue{, \eqref{eq:defn-super-crit}} and \eqref{eq:critical-window-defn}, as $n\to\infty$, \begin{eq} (n^{-\eta} \mathscr{C}_{\scriptscriptstyle (i)}(p_c(\lambda)))_{\blue{i\geq 1}} \ensuremath{\xrightarrow{d}} (\blue{\mathscr{M}_i})_{i\geq 1} \quad \text{ in } \ \mathbb{M}^\ensuremath{\mathbb{N}}. \end{eq} Moreover, the results also hold for $\mathrm{UM}_n(\bld{d},p_c(\lambda))$. \end{theorem} The exact description of the space $\mathscr{M}_i$ can be found in \cite{BDHS17}. It is worthwhile mentioning a recent work by Conchon-Kerjan and Goldschmidt~\cite{CG20} which is closely related to Theorem~\ref{thm:GHP}. Conchon-Kerjan and Goldschmidt~\cite{CG20} deduce scaling limits for the vector of components in GHP-topology for critical configuration models having i.i.d power law degrees with exponent $\tau\in (3,4)$. \blue{In Remarks~\ref{rem:assumption-1} and \ref{rem:assumption-2}, we noted that Assumptions~\ref{assumption1}(i),~\ref{assumption1}(ii),~and~\ref{assumption-extra} hold when the degrees are i.i.d samples from a power-law distribution with exponent $\tau \in (3,4)$. 
Therefore, Theorem~\ref{thm:GHP} implies that the conditional law of $(n^{-\eta} \mathscr{C}_{\scriptscriptstyle (i)}(p_c(\lambda)))_{i\geq 1}$, conditioned on the i.i.d degree sequence, converges to the law of $(\blue{\mathscr{M}_i})_{i\geq 1}$ in $\mathbb{M}^\ensuremath{\mathbb{N}}$ for almost every realization of the i.i.d degree sequence. Hence, Theorem~\ref{thm:GHP} gives a quenched result whereas \cite{CG20} proves an annealed result.} The method of \cite{CG20} relies on an alternative approach showing convergence of the height processes corresponding to the components. The associated limiting object was studied in \cite{GHS18}, which interestingly turns out to have a quite different description than those in \cite{BDHS17,BHS15}. \paragraph*{Scaling limit of maximal distances.} For any metric space $(X,\ensuremath{\mathrm{d}})$ and a point $x\in X$, define the radius of $x$ in $X$ and the diameter of $X$ by \begin{eq} \mathrm{Rad}(x,X) = \sup_{y\in X} \ensuremath{\mathrm{d}}(x,y)\quad \text{and} \quad \mathrm{diam}(X) = \sup_{x\in X} \mathrm{Rad}(x,X) = \sup_{x,y\in X} \ensuremath{\mathrm{d}}(x,y). \end{eq} An important corollary of Theorem~\ref{thm:GHP} is the convergence of the radius and the diameter of the critical components: Let $V_{n,i}$ be a uniformly chosen vertex in $\mathscr{C}_{\scriptscriptstyle (i)}(p_c(\lambda))$, where $(V_{n,i})_{i\geq 1}$ is an independent collection conditionally on $(\mathscr{C}_{\scriptscriptstyle (i)}(p_c(\lambda)))_{i\geq 1}$. Similarly, using the notation of the scaling limits in Theorem~\ref{thm:GHP}, let $V_i$ be chosen from $M_i$ according to the measure $\mu_i$ and let $(V_i)_{i\geq 1}$ be an independent collection conditionally on $(\blue{\mathscr{M}_i})_{i\geq 1}$. \begin{corollary}[Convergence of radius and diameter] \label{cor:diameter} Under {\rm Assumptions~\blue{\ref{assumption1}(i),~\ref{assumption1}(ii),}~\ref{assumption-extra}}\blue{, \eqref{eq:defn-super-crit}} and \eqref{eq:critical-window-defn}, as $n\to\infty$, \begin{eq} \big(n^{-\eta}\mathrm{Rad} (V_{n,i},\mathscr{C}_{\scriptscriptstyle (i)}(p_c(\lambda)))\big)_{i\geq 1} &\ensuremath{\xrightarrow{d}} (\mathrm{Rad}(V_i,\blue{\mathscr{M}_i}))_{i\geq 1}, \\ \big(n^{-\eta}\ensuremath{\mathrm{diam}}(\mathscr{C}_{\scriptscriptstyle (i)}(p_c(\lambda)))\big)_{i\geq 1} &\ensuremath{\xrightarrow{d}} (\mathrm{diam}(\blue{\mathscr{M}_i}))_{i\geq 1}, \end{eq} with respect to the product topology, where $(\blue{\mathscr{M}_i})_{i\geq 1}$ is given by {\rm Theorem~\ref{thm:GHP}}. Moreover, the result also holds for $\mathrm{UM}_n(\bld{d})$. \end{corollary} Proving scaling limits for the diameter of the critical tree-like objects is often a difficult task. In \cite{Sze83}, Szekeres proved that, for the uniform random rooted labelled tree on $m$ vertices, the diameter, rescaled by $\sqrt{m}$, converges in distribution. Szekeres also provided an explicit formula for the density of the limiting distribution in \cite[Page 395, (12)]{Sze83}. Szekeres' method was based on generating functions. {\L}uczak~\cite{Luc95} also considered enumeration of trees with diameter $\gg \sqrt{m}$. On the other hand, Aldous~\cite{Ald91} (see \cite[Section 3.4]{Ald91}) noted that the GHP-convergence can be used as an effective tool to prove scaling limit results for the diameter. This is the motivating idea behind Corollary~\ref{cor:diameter}. Aldous~\cite{Ald91} also raised a natural question whether it is possible to obtain an explicit formula from a result such as Corollary~\ref{cor:diameter}. 
In a recent paper, Wang~\cite{Wang15} showed that it is indeed possible to get such a formula for the Brownian tree. In the context of Corollary~\ref{cor:diameter}, the difficulty is two-fold: First, the critical components have surplus edges. For the scaling limits of critical Erd\H{o}s-R\'enyi random graphs, Miermont and Sen \cite{MS19} recently gave a breadth-first construction, which yields an alternative description of the scaling limit of the radius function from a fixed point (rescaled by $n^{1/3}$). However, a corresponding description of the diameter, and an explicit formula such as the one by Wang~\cite{Wang15}, are still open questions. Second, the scaling limit in Corollary~\ref{cor:diameter} is in the heavy-tailed universality class. Even for $\mathbf{p}$-trees (see \cite{CP99}) that satisfy $p_i/(\sum_i p_i^2)^{1/2} \to \beta_i>0$, with $(\beta_i)_{i\geq 1}\in \ell^2_{{\scriptscriptstyle \downarrow}}\setminus\ell^1_{{\scriptscriptstyle \downarrow}}$, obtaining an explicit description for the \blue{limiting} distribution of the diameter is an interesting question. \paragraph*{Compactness of the limiting metric space.} The \red{limiting spaces $\mathscr{M}_i$ are constructed by tilting the distribution of an inhomogeneous continuum random tree (ICRT), and then identifying a Poisson number of vertices to create cycles. This object is well-defined as a metric measure space for $\bld{\theta} \in \ell^{3}_{{\scriptscriptstyle \downarrow}} \setminus \ell^2_{{\scriptscriptstyle \downarrow}}$. However, it may not be compact for all $\bld{\theta} \in \ell^{3}_{{\scriptscriptstyle \downarrow}} \setminus \ell^2_{{\scriptscriptstyle \downarrow}}$.} It is interesting to find an explicit criterion for the compactness of the limiting objects $\mathscr{M}_i$ in terms of the underlying parameters. Indeed, in the context of compactness of \blue{ICRTs}, Aldous, Miermont, and Pitman \cite[Section~7]{AMP04} \blue{state} an additional condition, which was conjectured to be necessary and sufficient for the compactness of \blue{ICRTs}. This conjecture was recently proved in \cite{blanc2022}. In the context of critical random graphs, a recent paper by Broutin, Duquesne, and Wang \cite{BDW18} shows that the following criterion, analogous to \cite{AMP04}, is sufficient for the almost sure compactness of $\mathscr{M}_i$\footnote{\red{\textbf{Note: }The condition \eqref{eq:iff-compactness} does not hold for all $\bld{\theta} \in \ell^3_{{\scriptscriptstyle \downarrow}} \setminus \ell^2_{{\scriptscriptstyle \downarrow}}$. Indeed, take $\theta_i = i^{-1/2} $. For $u\in (\theta_2^{-1},\infty )$, let $i_0 = i_0(u)$ be such that $\theta_{i_0}^{-1}<u\leq \theta_{i_0+1}^{-1}$, i.e., $\sqrt{i_0} < u\leq \sqrt{i_0+1}$. Then, \begin{eq} \Psi_{\theta}(u) \leq C\bigg[\sum_{i\leq i_0}\theta_i (u\theta_i) +\sum_{i>i_0} \theta_i (u\theta_i)^2\bigg] =C\bigg[u\sum_{i< u^2}\frac{1}{i} + u^2\sum_{i\geq u^2} \frac{1}{i^{3/2}}\bigg] \leq C[u \log u+u], \end{eq} and thus \eqref{eq:iff-compactness} cannot hold.}}: \begin{eq}\label{eq:iff-compactness} \int_1^\infty \frac{\mathrm{d} u}{\Psi_{\theta} (u)} <\infty, \quad \text{where} \quad \Psi_{\theta} (u) = \sum_{i\geq 1} \theta_i (\mathrm{e}^{-u\theta_i} -1+u\theta_i).
\end{eq} Our GHP convergence \blue{from Theorem~\ref{thm:GHP}} indirectly yields a sufficient condition for the compactness of the limiting metric space almost surely, \red{by considering an asymptotic version of Assumption~\ref{assumption-extra}: Suppose $\bld{\theta} \in \ell^3_{{\scriptscriptstyle \downarrow}} \setminus \ell^2_{{\scriptscriptstyle \downarrow}}$ and there exist constants $c_0>0$ and $c_1>1$ such that} \begin{eq}\label{eq:compactness} \red{x^{\tau-2} \times \sum_{i=1}^\infty \theta_i \ind{x < \theta_i \leq c_1x} \geq c_0 \ \ \text{ for all } x\in (0,\theta_1). } \end{eq} \red{The fact that \eqref{eq:compactness} is a sufficient condition for the compactness of $\mathscr{M}_i$ follows immediately from Theorem~\ref{thm:GHP} and the following proposition: \begin{proposition}\label{prop:deg-compact} Consider any $\bld{\theta} \in \ell^3_{{\scriptscriptstyle \downarrow}} \setminus \ell^2_{{\scriptscriptstyle \downarrow}}$ such that \eqref{eq:compactness} holds. Then there exists a sequence of degree sequences satisfying {\rm Assumptions~\ref{assumption1}(i), \ref{assumption1}(ii), \ref{assumption-extra}}, and \eqref{eq:defn-super-crit}. \end{proposition} \noindent We will prove Proposition~\ref{prop:deg-compact} in Appendix~\ref{sec:appendix-comapctness}. A natural question is how the conditions in \eqref{eq:iff-compactness} and \eqref{eq:compactness} compare. We argue below that, in fact, \eqref{eq:compactness} is strictly stronger than \eqref{eq:iff-compactness}. } \red{Recall that $C>0$ is a generic notation for a constant whose value can be different in different expressions. We first show that \eqref{eq:compactness} implies \eqref{eq:iff-compactness}. Suppose $\theta_i >\theta_{i+1}$. Then \begin{eq}\label{cond:simplified} \theta_{i+1}^{\tau-2} \sum_{j=1}^i \theta_j \geq \theta_{i+1}^{\tau-2} \sum_{j=1}^\infty \theta_j\cdot\ind{\theta_{i+1}<\theta_j\leq c_1\theta_{i+1}} \geq c_0, \end{eq} where the last step uses~\eqref{eq:compactness}. } Now, for $u\in (\frac{1}{\theta_i}, \frac{1}{\theta_{i+1}}]$, \begin{eq}\label{eq:psi-theta-lb} \Psi_{\theta} (u) &\geq C \bigg[\sum_{k=1}^i u\theta_k^2 + \sum_{k=i+1}^\infty u^2 \theta_k^\blue{3}\bigg] \geq Cu\theta_{i+1}\sum_{k=1}^i \theta_k \\ &\red{= \frac{Cu\theta_{i+1}^{\tau -2 }\sum_{k=1}^i \theta_k}{\theta_{i+1}^{\tau -3 }} } \geq \frac{Cu c_0}{\theta_{i+1}^{\tau -3 }} \geq C c_0 u^{\tau-2}. \end{eq} Thus, $\int_{\theta_1^{-1}}^\infty \frac{\mathrm{d} u}{\Psi_{\theta} (u)} \leq C \int_{\theta_1^{-1}}^\infty u^{-(\tau-2)} \mathrm{d} u < \infty$, since $\tau >3$. This yields~\eqref{eq:iff-compactness}. \red{To see that the implication is strict, take $\theta_i = (i^{\alpha} \log (i+2))^{-1} $. Then \begin{eq}\label{eqn:1} \theta_{i+1}^{\tau -2} \sum_{j=1}^i \theta_j \leq \big((i+1)^\alpha\log (i+3)\big)^{-(\tau-2)} \sum_{j=1}^i j^{-\alpha}\leq \frac{C}{\log ^{\tau-2} i}, \end{eq} which tends to zero as $i\to\infty$. However, as we have seen in \eqref{cond:simplified}, \eqref{eq:compactness} would imply that the left side of~\eqref{eqn:1} is bounded away from zero. Thus, \eqref{eq:compactness} does not hold in this case. To see that \eqref{eq:iff-compactness} does hold, note that $\theta_{i} \geq \theta_i':= i^{-\alpha'}$ for all large enough $i$, where $\alpha'=\frac{1}{\tau'-1}$ and $3<\tau'<\tau$. Then $(\theta_i')_{i\geq 1}$ satisfies \eqref{eq:compactness}. 
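Here \eqref{eq:compactness} is to be read with $\tau'$ in place of $\tau$; for completeness, we include a brief verification with one convenient choice of constants. Choose $c_1>1$ with $c_1^{\tau'-1}\geq 3$. For $x\in (0,1)$, the condition $x<\theta_i'\leq c_1x$ is equivalent to $(c_1x)^{-(\tau'-1)}\leq i< x^{-(\tau'-1)}$. If $x\in [1/c_1,1)$, then $i=1$ satisfies this (as $x<\theta_1'=1\leq c_1x$), so that $x^{\tau'-2}\sum_{i}\theta_i'\ind{x<\theta_i'\leq c_1x}\geq x^{\tau'-2}\geq c_1^{-(\tau'-2)}$. If $x\in (0,1/c_1)$, the interval $[(c_1x)^{-(\tau'-1)},x^{-(\tau'-1)})$ has length $x^{-(\tau'-1)}(1-c_1^{-(\tau'-1)})\geq c_1^{\tau'-1}-1\geq 2$ and hence contains at least $\tfrac{1}{2}x^{-(\tau'-1)}(1-c_1^{-(\tau'-1)})$ integers; since each corresponding $\theta_i'$ exceeds $x$, we get $x^{\tau'-2}\sum_{i}\theta_i'\ind{x<\theta_i'\leq c_1x}\geq \tfrac{1}{2}(1-c_1^{-(\tau'-1)})\geq \tfrac{1}{3}$. Thus \eqref{eq:compactness} holds for $(\theta_i')_{i\geq 1}$ with $c_0=\min\{c_1^{-(\tau'-2)},\tfrac{1}{3}\}$.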
Therefore, a computation similar to \eqref{eq:psi-theta-lb} yields, for $u\in (\frac{1}{\theta_i}, \frac{1}{\theta_{i+1}}]$, \begin{eq}\label{eq:psi-theta-lb-2} \Psi_{\theta} (u) & \geq Cu\theta_{i+1}\sum_{k=1}^i \theta_k \geq Cu\theta_{i+1}'\sum_{k=1}^i \theta_k'\geq \frac{Cu}{(\theta_{i}')^{\tau' -3 }}. \end{eq} Since $u\leq (i+1)^{\alpha} \log (i+3)$, we can choose $\delta>0$ such that $u^{1+\delta} \leq C i^{\alpha'} = C/\theta_i'$. Therefore, $\Psi_\theta (u) \geq C u^{1+(\tau'-3) (1+\delta)}$, and since $\tau'>3$, the exponent exceeds $1$. Thus, \eqref{eq:iff-compactness} follows. } \paragraph*{Proof ideas and technical motivation for this work.} The proof of Theorem~\ref{thm:gml-bound} consists of two main steps that form the key ideas in the argument. The first step is to show that the neighborhoods of the high-degree vertices, called \emph{hubs}, have mass $\ensuremath{\Theta_{\scriptscriptstyle\PR}}(n^{\rho})$. The second is that any small~$\varepsilon n^{\eta}$ neighborhood \blue{contains a hub with high probability}. These two facts, summarized in Propositions~\ref{prop:size-nbd bound} and~\ref{prop:diamter-small-comp} below, together ensure that the total mass of any neighborhood of $\mathscr{C}_{\scriptscriptstyle(i)}$ of radius $\varepsilon n^{\eta}$ is bounded away from zero. These two facts were proved in \cite{BHS15} in the context of rank-one inhomogeneous random graphs. However, the proof techniques are completely different here. The main advantage in \cite{BHS15} was that the breadth-first exploration of components could be dominated by a branching process with \emph{mixed Poisson} progeny distribution that is \emph{independent of $n$}. This allows one to use existing literature to estimate the probabilities that a long path exists in the branching process. However, such a technique is specific to rank-one inhomogeneous random graphs and does not work in the cases where the above stochastic domination does not hold. This was one of the technical motivations for this work. Moreover, Section~\ref{sec:diameter-after-removal} contains results about exponential tail-bounds for the number of edges in large critical components (Proposition~\ref{lem:volume-large-deviation}), as well as a coupling of the neighborhood exploration with a branching process with a stochastically larger progeny distribution (Section~\ref{sec:BP-approximation}), which are both interesting in their own right. \paragraph*{Organization of this paper.} The rest of this paper is organized as follows: In Section~\ref{sec:proof-main-thm}, we state two key propositions, the first involving the total mass of small neighborhoods, and the second involving a bound on the diameter of a \emph{slightly subcritical} $\mathrm{CM}_n(\bld{d})$. The proof of Theorem~\ref{thm:gml-bound} is completed in Section~\ref{sec:proof-main-thm}. In Section~\ref{sec:total-mass-hubs}, we derive the required bounds on the total mass of small neighborhoods. In Section~\ref{sec:diameter-after-removal}, we obtain bounds on the diameter of the connected components after removing the high-degree vertices. \blue{In Section~\ref{sec:perc-degrees}, we prove Assumptions~\ref{assumption1},~\ref{assumption-extra} for the percolated degree sequence, which allows us to conclude Theorem~\ref{cor:GLM-percoltion}.} \section{Proof of the global lower mass-bound} \label{sec:proof-main-thm} In this section, we first state the two key ingredients in Propositions~\ref{prop:size-nbd bound}~and~\ref{prop:diamter-small-comp}, and then complete the proof of Theorem~\ref{thm:gml-bound}.
The proofs of Propositions~\ref{prop:size-nbd bound}~and~\ref{prop:diamter-small-comp} are given in the subsequent sections. The first ingredient shows that hub $i$ has sufficient mass close to it with high probability: \begin{proposition} \label{prop:size-nbd bound} Assume that {\rm Assumptions~\ref{assumption1}}~and \eqref{defn:criticality} hold. Recall that $\mathcal{N}_v(\delta)$ denotes the $\delta n^{\eta}$ neighborhood of $v$. For each fixed $i\geq 1$ and $\varepsilon_2>0$, there exists $\delta_{i, \varepsilon_2}>0$ and $n_{i,\varepsilon_2}\geq 1$ such that, for any $\delta\in (0,\delta_{i,\varepsilon_2}]$ and $n\geq n_{i,\varepsilon_2}$, \begin{equation}\label{eq:size-nbd bound} \ensuremath{\mathbbm{P}}\big(|\mathcal{N}_i(\delta)|\leq \theta_i \delta n^{\rho}\big)\leq \frac{\varepsilon_2}{2^{i+1}}. \end{equation} \end{proposition} Next, we need some control on the diameter of the graph after removing the hubs. Denote by $\mathcal{G}^{\scriptscriptstyle >K}_n$ the graph obtained by removing the vertices $[K] = \{1,\dots,K\}$ having the largest degrees and the edges incident to them from $\mathrm{CM}_n(\bld{d})$. Note that $\mathcal{G}^{\scriptscriptstyle >K}_n$ is a configuration model conditionally on its degree sequence. Let $\Delta^{\scriptscriptstyle >K}$ denote the maximum of the diameters of the connected components of $\mathcal{G}^{\scriptscriptstyle >K}_n$. The following proposition shows that, for large $K$, $\Delta^{\scriptscriptstyle >K}$ is small with high probability: \begin{proposition}\label{prop:diamter-small-comp} Assume that {\rm Assumptions~\ref{assumption1},~\ref{assumption-extra}} and \eqref{defn:criticality} hold. Then, for any $\varepsilon_1,\varepsilon_2 > 0$, there exists $K =K(\varepsilon_1,\varepsilon_2)$ and $n_0=n_{0}(\varepsilon_1,\varepsilon_2)$ such that for all $n\geq n_0$, \begin{equation}\label{eq:diamter-small-comp} \prob{\Delta^{\scriptscriptstyle >K}>\varepsilon_1 n^{\eta}}\leq \frac{\varepsilon_2}{4}. \end{equation} \end{proposition} \noindent We now prove Theorem~\ref{thm:gml-bound} assuming Propositions~\ref{prop:size-nbd bound} and \ref{prop:diamter-small-comp}: \begin{proof}[Proof of Theorem~\ref{thm:gml-bound}] Fix $i\geq 1$ and $\varepsilon_1,\varepsilon_2>0$. For a component $\mathscr{C} \subset \mathrm{CM}_n(\bld{d})$, we write $\Delta(\mathscr{C})$ to denote its diameter. Let us choose $K$ and $n_0$ so that~\eqref{eq:diamter-small-comp} holds for all $n\geq n_0$. In view of Proposition~\ref{prop:size-nbd bound}, let $\delta_0 = \min\{\varepsilon_1,\delta_{1,\varepsilon_2},\dots,\delta_{K,\varepsilon_2}\}/2$, and $n_0' = \max\{n_0,n_{1,\varepsilon_2},\dots,n_{K,\varepsilon_2}\}$. Thus, for all $n\geq n_0'$, \eqref{eq:size-nbd bound} is satisfied for all $i\in [K]$. Define \begin{equation} F_1 := \{\Delta^{\scriptscriptstyle >K}< \varepsilon_1 n^{\eta}/2\}, \quad F_2 := \{\Delta(\mathscr{C}_{\scriptscriptstyle (i)} )>\varepsilon_1 n^{\eta}/2\}. \end{equation} Notice that, on the event $F_1\cap F_2$, it must be the case that one of the vertices in $[K]$ belongs to~$\mathscr{C}_{\scriptscriptstyle (i)}$, and that the union of the neighborhoods of $[K]$ of radius $\lceil\varepsilon_1 n^{\eta}/2\rceil+1 \approx \varepsilon_1 n^{\eta}/2$ covers~$\mathscr{C}_{\scriptscriptstyle (i)} $. Therefore, given any vertex $v\in \mathscr{C}_{\scriptscriptstyle (i)}$, $\cN_v(\varepsilon_1)$ contains at least one of the neighborhoods~$(\cN_j(\varepsilon_1/2))_{j\in [K]}$. 
This observation yields that \begin{equation} \inf_{v\in\mathscr{C}_{\scriptscriptstyle (i)}}n^{-\rho} |\mathcal{N}_v(\varepsilon_1)|\geq \min_{j\in [K]} n^{-\rho}| \mathcal{N}_j(\varepsilon_1/2)| \geq \min_{j\in [K]} n^{-\rho}|\mathcal{N}_j(\delta_0)|. \end{equation} Thus, for all $n\geq n_0'$, \begin{equation}\label{eq:f1-f2} \begin{split} &\ensuremath{\mathbbm{P}}\Big(F_1\cap F_2 \cap \Big\{\inf_{v\in\mathscr{C}_{\scriptscriptstyle (i)}}n^{-\rho}| \mathcal{N}_v(\varepsilon_1)| \leq \theta_K \delta_0 \Big\}\Big)\\ &\qquad \leq \sum_{j\in [K]} \ensuremath{\mathbbm{P}}\big(|\mathcal{N}_j(\delta_0)|\leq \theta_j \delta_0 n^{\rho}\big) \leq \sum_{j=1}^K \frac{\varepsilon_2}{2^{j+1}}\leq \frac{\varepsilon_2}{2} , \end{split} \end{equation}where the penultimate step follows from Proposition~\ref{prop:size-nbd bound}. Further, on the event $F_2^c$, $|\mathcal{N}_v(\varepsilon_1)| = |\mathscr{C}_{\scriptscriptstyle (i)}|$ for all $v\in \mathscr{C}_{\scriptscriptstyle (i)}$. Moreover, using \eqref{eq:comp-size-conv}, it follows that $n^{-\rho}|\mathscr{C}_{\scriptscriptstyle (i)}|$ converges in distribution to a random variable with strictly positive support. Using the Portmanteau theorem, the above implies that for any $\delta_0'>0$, there exists $\tilde{n}_0 = \tilde{n}_0(\varepsilon_2,\delta_0')$ such that, for all $n\geq \tilde{n}_0$, \begin{equation} \ensuremath{\mathbbm{P}}\big(n^{-\rho}|\mathscr{C}_{\scriptscriptstyle (i)}|\leq \delta_0'\big)\leq \frac{\varepsilon_2}{4}. \end{equation} Therefore, \begin{equation}\label{eq:f1-f2c} \ensuremath{\mathbbm{P}}\bigg( F_2^c\cap \bigg\{\inf_{v\in\mathscr{C}_{\scriptscriptstyle (i)}}n^{-\rho}| \mathcal{N}_v(\varepsilon_1)| \leq \delta_0' \bigg\}\bigg)\leq \frac{\varepsilon_2}{4}. \end{equation} Now, using \eqref{eq:f1-f2} \blue{and} \eqref{eq:f1-f2c}, together with Proposition~\ref{prop:diamter-small-comp}, it follows that, for any $n\geq \max\{n_0',\tilde{n}_0\}$, and $K$ chosen as above, \begin{eq} \ensuremath{\mathbbm{P}}\bigg(\inf_{v\in\mathscr{C}_{\scriptscriptstyle (i)}}n^{-\rho}| \mathcal{N}_v(\varepsilon_1)| \leq \min\{\delta_0',\theta_K\delta_0\} \bigg)\leq \varepsilon_2. \end{eq} This completes the proof of Theorem~\ref{thm:gml-bound}. \end{proof} \section{Lower bound on the total mass of neighborhoods of hubs} \label{sec:total-mass-hubs} In this section, we prove Proposition~\ref{prop:size-nbd bound}. \begin{proof}[Proof of Proposition~\ref{prop:size-nbd bound}] Let us denote the component of $\mathrm{CM}_n(\bld{d})$ containing vertex~$i$ by $\mathscr{C}(i)$. Consider the breadth-first exploration of $\mathscr{C}(i)$ starting from vertex~$i$, given by the following exploration algorithm \cite{DHLS16}: \begin{algo}[Exploring the graph]\label{algo-expl}\normalfont The algorithm carries along vertices that can be alive, active, exploring and killed, and half-edges that can be alive, active or killed. We sequentially explore the graph as follows: \begin{itemize} \item[(S0)] At stage $l=0$, all the vertices and the half-edges are \emph{alive}, and only the half-edges associated to vertex $i$ are \emph{active}. Also, there are no \emph{exploring} vertices except $i$. \item[(S1)] At each stage $l$, \blue{if there is an exploring vertex,} take an active half-edge $e$ of an exploring vertex $v$ and pair it uniformly to another alive half-edge $f$. Kill $e,f$. If $f$ is incident to a vertex $v'$ that has not been discovered before, then declare all the half-edges incident to $v'$ (if any) active, except $f$. If $\mathrm{degree}(v')=1$ (i.e.
the only half-edge incident to $v'$ is $f$) then kill~$v'$. Otherwise, declare $v'$ to be active and larger than all other vertices that are alive. After killing $e$, if $v$ does not have another active half-edge, then kill $v$ also. \blue{If there is no exploring vertex at the beginning of stage $l$, we pick the oldest active half-edge, declare the corresponding vertex to be exploring, and then execute the same process as above.} \item[(S2)] Repeat (S1) until there are no active half-edges left. \end{itemize} \end{algo} \noindent Call a vertex \emph{discovered} if it is either active or killed. Let $\mathscr{V}_l$ denote the set of vertices discovered up to time $l$ and $\mathcal{I}_j^n(l):=\ind{j\in\mathscr{V}_l}$. Define the exploration process by \begin{equation}\label{def:exploration-process} S_n(l)= d_i+\sum_{j\neq i} d_j \mathcal{I}_j^n(l)-2l=d_i+\sum_{j\neq i} d_j \left( \mathcal{I}_j^n(l)-\frac{d_j}{\ell_n}l\right)+\bigg( \frac{1}{\ell_n} \sum_{j\neq i}d_j^2-2\bigg)l. \end{equation} Note that the exploration process keeps track of the number of active half-edges. Thus, $\mathscr{C}(i)$ is explored when $\bld{S}_n$ hits zero. Moreover, since one edge is explored at each step, the hitting time of zero is the total number of edges in $\mathscr{C}(i)$. Define the re-scaled version $\bar{\bld{S}}_n$ of~$\bld{S}_n$ by $\bar{S}_n(t)= n^{-\alpha}S_n(\lfloor tn^{\rho} \rfloor)$. Then, by Assumption~\ref{assumption1} and \eqref{defn:criticality}, \begin{equation} \label{eqn::scaled_process} \bar{S}_n(t) = \theta_i-\frac{\theta_i^2 t}{\mu}+ n^{-\alpha} \sum_{j\neq i} d_j\left(\mathcal{I}_j^n(tn^\rho)-\frac{d_j}{\ell_n}tn^{\rho} \right)+\lambda t +o(1). \end{equation} The convergence of this exploration process was considered in \cite[Theorem 8]{DHLS16}, except that the exploration process there started at zero. However, using identical arguments to \cite[Theorem 8]{DHLS16}, it can be shown that \begin{eq}\label{eq:dist-conv-S} \bar{\bld{S}}_n\xrightarrow{\scriptscriptstyle d} \bld{S}_\infty, \end{eq} with respect to the Skorohod $J_1$-topology, where \begin{equation} S_\infty(t) = \theta_i - \frac{\theta_i^2 t}{\mu} +\sum_{j\neq i}\theta_j\Big(\mathcal{I}_j(t)- \frac{\theta_jt}{\mu}\Big)+\lambda t, \end{equation}with $\mathcal{I}_j(s):=\ind{\xi_j\leq s }$ and $\xi_j\sim \mathrm{Exponential}(\theta_j/\mu)$ independently of each other. Let $h_n(u)$ (respectively $h_\infty(u)$) denote the first hitting time of $\bar{\bld{S}}_n$ (respectively $\bld{S}_\infty$) of~$u$. More precisely, \begin{eq} h_n(u):= \inf\Big\{t: \bar{S}_n(t) \leq u \text{ or } \lim_{t'\nearrow t}\bar{S}_n( t') \leq u\Big\}, \end{eq} and define $h_\infty(u)$ similarly by replacing $\bar{S}_n(t)$ by $S_\infty(t)$. Note that, by \cite[Lemma 36]{DHLS16}, the distribution of $h_\infty(u)$ does not have any atoms and therefore, for any $\varepsilon_2>0$, there exists $\beta_{\varepsilon_2,i}>0$ such that \[ \ensuremath{\mathbbm{P}}\big(h_{\infty}(\theta_i/2)\leq \beta_{\varepsilon_2,i}\big)\leq \frac{\varepsilon_2}{2^{i+1}}. \] Now we use the following fact: \begin{fact}\label{fact} Let $(X_n(t))_{t\geq 0}\ensuremath{\xrightarrow{d}} (X(t))_{t\geq 0}$ in the Skorohod $J_1$-topology and let $h(X_n)$ (respectively $h(X)$) denote the hitting time to zero of $X_n$ (respectively $X$). Then, $\liminf_{n\to\infty} \ensuremath{\mathbbm{P}}(h(X_n) > a) \geq \ensuremath{\mathbbm{P}}(h(X) >a) $, for all $a>0$.
\end{fact} \begin{proof} Let $(f_n)_{n\geq 1}$ be such that $h(f_n) \leq a$ for all $n\geq 1$ and $f_n\to f$ in the Skorohod $J_1$-topology as $n\to\infty$. Now, $h(f_n) \leq a$ implies that $\inf_{t\in [0,a]} f_n(t) \red{\leq} 0$. Using \cite[Theorem 13.4.1]{W02}, it follows that $\inf_{t\in [0,a]} f(t) \red{\leq} 0$ and thus $h(f) \leq a$. Therefore, we have shown that $\{f\colon h(f)\leq a\}$ is a closed set in the Skorohod $J_1$-topology, and therefore $\{f: h(f)> a\}$ is an open set. The proof follows using the Portmanteau theorem \cite[Theorem 2.1~(iv)]{Bil99}. \end{proof} \noindent Using \eqref{eq:dist-conv-S} and Fact~\ref{fact}, there exists $n_{i,\varepsilon_2}\geq 1$ such that, for all $n\geq n_{i,\varepsilon_2}$, \begin{equation} \ensuremath{\mathbbm{P}}(h_n(\theta_i/2)\leq \red{ \beta_{\varepsilon_2,i}})\leq \frac{\varepsilon_2}{2^{i}}. \end{equation} Our first goal is to show that there exists a $\delta_{i,\varepsilon_2}>0$ such that for any $\delta\in (0,\delta_{i,\varepsilon_2}]$, \begin{eq}\label{degree-implies-height} \sum_{k\in \cN_i(\delta)}d_k \leq \theta_i\delta n^{\rho}\quad \implies \quad h_n(\theta_i/2)\leq \red{\beta_{\varepsilon_2,i}}. \end{eq} Recall that $\mathcal{N}_v(\delta)$ denotes the $\delta n^{\eta}$ neighborhood of $v$ in $\mathrm{CM}_n(\bld{d})$. To prove \eqref{degree-implies-height}, let $\partial(j)$ denote the set of vertices at distance $j$ from $i$. Let $E_{j1}$ denote the total number of edges between vertices in $\partial(j)$ and $\partial(j-1)$, and let $E_{j2}$ denote the number of edges within the vertices in $\partial(j-1)$. Define $E_j = E_{j1}+E_{j2}$. Fix any $\delta<2\beta_{\varepsilon_2,i}/\theta_i$. Note that if $\sum_{k\in\mathcal{N}_i(\delta)}d_k\leq \theta_i \delta n^{\rho}$, then the total number of edges in $\mathcal{N}_i(\delta)$ is at most $\theta_i \delta n^{\rho}/2$. Thus there exists $j\leq \delta n^{\eta}$ such that $E_j\leq\frac{\theta_i\delta n^{\rho}/2}{\delta n^{\eta}} = \frac{\theta_in^{\alpha}}{2}$. This implies that $\bld{S}_n$ must go below $\theta_i n^\alpha/2$ before exploring all the vertices in $\cN_i(\delta)$. This is because we are exploring the components in a breadth-first manner and $\bar{\bld{S}}_n$ keeps track of the number of active half-edges, which in turn are the potential connections to vertices at the next level. Since one edge is explored in each time step, and we rescale time by $n^{\rho}$, this implies that \begin{equation} h_n(\theta_i/2)\leq \frac{1}{2} n^{-\rho} \sum_{k\in\mathcal{N}_i(\delta)}d_k\leq \theta_i\delta/2 \leq \beta_{\varepsilon_2,i}. \end{equation} Therefore, for all $n\geq n_{i,\varepsilon_2}$, \begin{equation} \label{eq:dk-bound} \ensuremath{\mathbbm{P}}\bigg(\sum_{k\in\mathcal{N}_i(\delta)}d_k\leq \theta_i\delta n^{\rho}\bigg)\leq \ensuremath{\mathbbm{P}}(h_n(\theta_i/2)\leq \beta_{\varepsilon_2,i})\leq \frac{\varepsilon_2}{2^{i}}. \end{equation} Finally, to conclude Proposition~\ref{prop:size-nbd bound} from \eqref{eq:dk-bound}, we use the \blue{result from \cite[Lemma 22]{DHLS16} that, for any $T>0$,} \begin{equation}\label{weight-expl-prop} \sup_{u\leq T}\bigg| \sum_{i\in [n]}\mathcal{I}_i^n(un^{\rho})-un^{\rho}\bigg|=o_{\scriptscriptstyle \PR}(n^{\rho}). \end{equation} This implies that the difference between the number of edges and the number of vertices explored up to time $un^{\rho}$ is $o_{\scriptscriptstyle \PR}(n^{\rho})$ uniformly over $u\leq T$. The proof of Proposition~\ref{prop:size-nbd bound} now follows.
\end{proof} \section{Diameter after removing hubs} \label{sec:diameter-after-removal} Throughout the remainder of the paper, we fix the convention that $C,C',C''>0$ etc.~denote constants whose value can change from line to line. Recall the definition of the graph $\cG_n^{\scriptscriptstyle >K}$ from Proposition~\ref{prop:diamter-small-comp}. If we keep on exploring $\cG_n^{\scriptscriptstyle >K}$ in a breadth-first manner using Algorithm~\ref{algo-expl} and ignore the cycles created, then we get a random tree. The idea is to couple neighborhoods in $\cG_n^{\scriptscriptstyle >K}$ with a suitable branching process such that the progeny distribution of the branching process dominates the number of children of each vertex in the breadth-first tree. Therefore, when there is a long path in $\cG_n^{\scriptscriptstyle >K}$ that makes the diameter large, that long path must be present in the branching process as well under the above coupling. In this way, the question about the diameter of $\cG_n^{\scriptscriptstyle >K}$ reduces to the question about the height of a branching process. To estimate the height suitably, we use a recent beautiful proof technique by Addario-Berry~\cite{A17} which allows one to relate the height of a branching process to the sum of inverses of the associated breadth-first random walk. In Section~\ref{sec:asymp-edges}, we establish \blue{tail bounds} for the number of edges within components. This allows us to formulate the desired coupling in Section~\ref{sec:BP-approximation}. In Section~\ref{sec:height-vs-rw}, we analyze the breadth-first random walk to show that it is unlikely that the height of the branching process is larger than $\varepsilon n^{\eta}$. These bounds are different from those derived in~\cite{A17} since our branching process depends on $n$ and there is a joint scaling involved between the distances and the law of the branching process. \subsection{Asymptotics for the number of edges} \label{sec:asymp-edges} For a graph $G$, let $\rE(G)$ denote the number of edges in $G$. \begin{proposition}\label{lem:volume-large-deviation} Suppose that {\rm Assumption~\ref{assumption1}} and \eqref{defn:criticality} hold. \blue{For all $\varepsilon \in (0,\frac{4-\tau}{\tau-1})$}, and sufficiently large~$n$, \begin{eq} \ensuremath{\mathbbm{P}}(\rE(\mathscr{C} (i))> n^{\rho+\varepsilon}) \leq C\mathrm{e}^{-C' n^{\varepsilon/2}}, \end{eq}for some absolute constants $C,C'>0$ and all $i\in [n]$. \end{proposition} The proof of Proposition~\ref{lem:volume-large-deviation} relies on concentration techniques for martingales. We start by defining the relevant notation. Consider exploring $\mathrm{CM}_n(\bld{d})$ with Algorithm~\ref{algo-expl}, and let the associated exploration process be defined in \eqref{def:exploration-process}. Let us denote the degree of the vertex found at step $l$ by $d_{\scriptscriptstyle (l)}$. If no new vertex is found at step $l$, then $d_{\scriptscriptstyle (l)} = 0$. Also, let $\mathscr{F}_l$ denote the sigma-algebra containing all the information revealed by the exploration process up to time $l$. Thus, \begin{eq} S_n(0) = d_i, \quad \text{and}\quad S_n(l) = S_n(l-1) + (d_{\scriptscriptstyle (l)}-2). \end{eq} Using the Doob-Meyer decomposition, one can write \begin{equation} S_n(l) = S_n(0)+M_n(l) + A_n(l), \end{equation}where $M_n$ is a martingale with respect to $(\mathscr{F}_l)_{l\geq 1}$. 
The drift $A_n$ and the quadratic variation $\langle M_n \rangle$ of $M_n$ are given by \begin{equation} A_{n}(l)= \sum_{j=1}^{l} \mathbbm{E}\big[d_{\scriptscriptstyle(j)}-2 \vert \mathscr{F}_{j-1} \big], \quad \langle M_n \rangle(l)= \sum_{j=1}^{l} \var{d_{\scriptscriptstyle(j)}\vert \mathscr{F}_{j-1}} . \end{equation} We will show that, for any $\varepsilon \in (0,\frac{4-\tau}{\tau-1})$, the following two lemmas hold: \begin{lemma}\label{lem:small-martingale} Suppose that {\rm Assumption~\ref{assumption1}} and \eqref{defn:criticality} hold. \blue{For all $\varepsilon \in (0,\frac{4-\tau}{\tau-1})$}, and sufficiently large~$n$, \begin{eq} \ensuremath{\mathbbm{P}}(n^{-(\alpha+\varepsilon)} M_n(n^{\rho +\varepsilon}) > 1) \leq C\mathrm{e}^{-C' n^{\varepsilon}}, \end{eq} for some absolute constants $C,C'>0$. \end{lemma} \begin{lemma}\label{lem:drift-superlinear} Suppose that {\rm Assumption~\ref{assumption1}} and \eqref{defn:criticality} hold. For all fixed $K\geq 1$, \blue{$\varepsilon \in (0,\frac{4-\tau}{\tau-1})$}, and sufficiently large~$n$, \begin{eq} \ensuremath{\mathbbm{P}}\bigg(n^{-(\alpha+\varepsilon)} A_n(n^{\rho +\varepsilon}) \geq -C\sum_{i=1}^K\theta_i^2\bigg) \leq C\mathrm{e}^{-C' n^{\varepsilon/2}}, \end{eq}for some absolute constants $C,C'>0$. \end{lemma} \begin{proof}[Proof of Proposition~\ref{lem:volume-large-deviation} subject to Lemmas~\ref{lem:small-martingale},~\ref{lem:drift-superlinear}] Throughout, we write $t_n := n^{\rho +\varepsilon}$. Note that we can choose $K\geq 1$ such that $\sum_{i=1}^K\theta_i^2$ is arbitrarily large since $\bld{\theta}\notin \ell^2_{{\scriptscriptstyle \downarrow}}$. Thus, if $n^{-(\alpha+\varepsilon)} M_n(t_n) \leq 1$ and $n^{-(\alpha+\varepsilon)}A_n(t_n) \leq -C\sum_{i=1}^K\theta_i^2$, then $n^{-(\alpha + \varepsilon)}S_n(t_n) <0$, and therefore $\mathscr{C}(i)$ must be explored before time $t_n$, and thus $\rE(\mathscr{C}(i))\leq t_n$. As a result, Lemmas~\ref{lem:small-martingale} and~\ref{lem:drift-superlinear} together complete the proof of Proposition~\ref{lem:volume-large-deviation}. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:small-martingale}] First note that $\blue{\frac{4-\tau}{\tau-1} +\rho = 2\alpha} < 1$ and therefore $t_n = o(n)$. Thus, uniformly over $j\leq t_n$, \begin{equation} \var{d_{\scriptscriptstyle (j)} \vert \mathscr{F}_{j-1}} \leq \ensuremath{\mathbbm{E}}[d_{\scriptscriptstyle (j)}^2 \vert \mathscr{F}_{j-1}] = \frac{\sum_{k\notin \mathscr{V}_{j-1}} d_k^3}{\ell_n - 2j +2} \leq \frac{\sum_{k\in [n]} d_k^3}{\ell_n - 2t_n+2}\leq Cn^{3\alpha - 1}, \end{equation}so that, almost surely, \begin{equation}\label{eq:bound-QV} \langle M_n\rangle (t_n) \leq t_n Cn^{3\alpha-1} = C n^{2\alpha + \varepsilon}. \end{equation} Also, $d_{\scriptscriptstyle (j)} \leq C n^{\alpha}$ almost surely. We can now use Freedman's inequality \cite[Proposition 2.1]{Fre75}, which says that if $Y(k) = \sum_{j\leq k} X_j$ with $\ensuremath{\mathbbm{E}}[X_j\vert \mathcal{F}_{j-1}] =0$ (for some filtration $(\mathcal{F}_j)_{j\geq 1}$) and $\ensuremath{\mathbbm{P}}(|X_j|\leq R, \ \forall j\geq 1)=1$, then, for any $a,b >0$, \begin{eq}\label{eq:freedman} \ensuremath{\mathbbm{P}}(Y(k) \geq a, \text{ and } \langle Y\rangle(k) \leq b ) \leq \exp\bigg(\frac{-a^2}{2(Ra+b)}\bigg). \end{eq} We apply \eqref{eq:freedman} with $a= n^{\alpha+\varepsilon}$, $b=Cn^{2\alpha+\varepsilon}$ and $R=Cn^{\alpha}$. Note that $\langle M_n \rangle (t_n) \leq b$ \blue{almost surely} by \eqref{eq:bound-QV}.
It follows that \begin{eq} \ensuremath{\mathbbm{P}}(M_n(t_n) > n^{\alpha + \varepsilon}) \leq \exp \bigg(- \frac{n^{2\alpha + 2\varepsilon}}{2 \blue{C} (n^{\alpha} n^{\alpha+ \varepsilon} + n^{2\alpha+ \varepsilon})}\bigg) \leq C\mathrm{e}^{-C'n^{\varepsilon}}, \end{eq} and the proof follows. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:drift-superlinear}] Note that \begin{eq}\label{the-split-up} &\mathbbm{E} \big[ d_{\scriptscriptstyle(i)} -2 \vert \mathscr{F}_{i-1} \big] = \frac{\sum_{j \notin \mathscr{V}_{i-1}} d_{j}^2}{\ell_n-2i+1}-2\\ &\hspace{2cm}= \frac{1}{\ell_n}\sum_{j \in [n]} d_{j}(d_{j}-2)- \frac{1}{\ell_n} \sum_{j \in \mathscr{V}_{i-1}} d_{j}^2 + \frac{(2i-1)\sum_{j \notin \mathscr{V}_{i-1}} d_{j}^2 }{\ell_n (\ell_n-2i+1)} \\ &\hspace{2cm}\leq \lambda n^{-\eta} - \frac{1}{\ell_n} \sum_{j \in \mathscr{V}_{i-1}} d_{j}^2 + \frac{(2i-1)}{(\ell_n-2i+1)^2} \sum_{j \in [n]} d_{j}^{2} +o(n^{-\eta}) \end{eq}uniformly over $i\leq t_n$. Therefore, for all sufficiently large $n$, \begin{eq}\label{upper-bound-drift-large} A_n(t_n) = \sum_{j=1}^{t_n} \mathbbm{E}\big[d_{\scriptscriptstyle(j)}-2 \vert \mathscr{F}_{j-1} \big]&\leq \lambda t_n n^{-\eta} - \frac{1}{\ell_n}\sum_{i=1}^{t_n} \sum_{j \in \mathscr{V}_{i-1}} d_{j}^{2} + \frac{Ct_n^2}{\ell_n}+ o(n^{\alpha+\varepsilon}) \\ &= \lambda n^{\alpha+\varepsilon} - \frac{1}{\ell_n}\sum_{i=1}^{t_n} \sum_{j \in \mathscr{V}_{i-1}} d_{j}^{2} + o(n^{\alpha+\varepsilon}), \end{eq}where in the second step we have used $\sum_{i\in [n]} d_i^2 / \ell_n = O(1)$, and in the last step we have used that $t_n^2/\ell_n = O(n^{2\rho+2\varepsilon-1}) = o(n^{\alpha+\varepsilon})$ for $\varepsilon< 1+ \alpha - 2\rho= \frac{4-\tau}{\tau-1} $. Let us denote the second term in \eqref{upper-bound-drift-large} by (A). To analyze~(A), define the event \begin{eq} \cA_n:= \big\{\exists j: d_j>n^{\alpha - \varepsilon/2}, j\notin \mathscr{V}_{t_n/2}\big\}. \end{eq} Then, for all sufficiently large $n$, \begin{eq}\label{eq:estimate-bad-event-exponential} \ensuremath{\mathbbm{P}}(\cA_n) \leq \sum_{j: d_j>n^{\alpha-\varepsilon/2}} \bigg(1-\frac{d_j}{\ell_n-2t_n}\bigg)^{t_n} \leq n \mathrm{e}^{-Cn^{\varepsilon/2}}. \end{eq} On the event $\cA_n^c$, \begin{eq}\label{eq:expression-A} \mathrm{(A)} = \frac{1}{\ell_n} \sum_{i=1}^{t_n} \sum_{j\in [n]}d_j^2\mathbbm{1}\{j\in \mathscr{V}_{i-1}\} \geq \frac{1}{\ell_n} \sum_{i = \frac{t_n}{2}+1}^{t_n} \sum_{j=1}^K d_j^2 \geq Cn^{\alpha+\varepsilon} \sum_{j=1}^K \theta_j^2. \end{eq} Combining \eqref{upper-bound-drift-large}, \eqref{eq:estimate-bad-event-exponential} and \eqref{eq:expression-A} now completes the proof. \end{proof} \subsection{Coupling with branching processes} \label{sec:BP-approximation} Recall that $\mathscr{C}(i)$ is the connected component in $\mathrm{CM}_n(\bld{d})$ containing vertex $i$. Define the event $\cK_n:= \{\rE(\mathscr{C}(i))>n^{\rho+\varepsilon}\}$. Proposition~\ref{lem:volume-large-deviation} implies that the probability of $\cK_n$ happening is exponentially small in $n$. On the event~$\mathcal{K}_n^c$, we can couple the breadth-first exploration starting from vertex $i$ with a suitable branching process. 
Let $n_k$ denote the number of vertices of degree $k$ and consider the branching process $\mathcal{X}_n(i)$ starting with $d_i$ individuals, and the progeny distribution $\bar{\xi}_n$ given by \begin{equation}\label{upperbounding-BP} \begin{split}\prob{\bar{\xi}_n=k}=\bar{p}_k= \begin{cases} \frac{(k+1)n_{k+1}}{\ubar{\ell}_n} \quad &\text{for } k\geq 1,\\ \frac{n_1-2n^{\rho+\varepsilon}}{\ubar{\ell}_n} \quad &\text{for } k=0, \end{cases} \end{split} \end{equation}where $\ubar{\ell}_n=\ell_n-2n^{\rho+\varepsilon}$. Note that, at each step of the exploration, we have at most $(k+1)n_{k+1}$ half-edges that are incident to vertices having $k$ further unpaired half-edges. Further, on the event $\mathcal{K}_n^c$, we have at least $\ubar{\ell}_n$ choices for pairing. Therefore, the number of active half-edges discovered at each step in the breadth-first exploration of the neighborhoods of $i$ is stochastically dominated by $\bar{\xi}_n$. This proves the next proposition, which we state after setting up some further notation. Recall that $\cG_n^{\scriptscriptstyle >i-1}$ denotes the graph obtained by deleting vertices in $[i-1]$ and the associated edges from $\mathrm{CM}_n(\bld{d})$. Let $\partial_i(r)$ denote the number of vertices at distance $r$ from vertex $i$ in the graph $\cG_n^{\scriptscriptstyle >i-1}$. Let $\bar{\xi}_n(i)$ denote the random variable with the distribution in \eqref{upperbounding-BP} truncated in such a way that $\{d_1,\dots,d_{i-1}\}$ are excluded from the support. More precisely, \begin{eq}\label{eqn:668} \ensuremath{\mathbbm{P}}(\bar{\xi}_n(i)=k) = \begin{cases} 0\quad &\text{for } k> d_{i}, \\ \frac{(k+1)}{L}\#\{j\geq i:\ d_j=k+1\} \quad &\text{for } 1\leq k\leq d_{i},\\ \frac{n_1-2n^{\rho+\varepsilon}}{L} \quad &\text{for } k=0, \end{cases} \end{eq}where $L=\underline{\ell}_n-\sum_{j=1}^{i-1}d_j$ is the appropriate normalizing constant. Let $\cX_{n,\mathrm{res}}(i)$ denote the branching process starting with~$d_i$ individuals and progeny distribution $\bar{\xi}_n(i)$ and let $\bar{\partial}_i(r)$ denote the number of individuals in generation $r$ of $\mathcal{X}_{n,\mathrm{res}}(i)$. Then the above stochastic domination argument immediately yields the next proposition: \begin{proposition}\label{prop:coupling-uppperbound} Suppose that {\rm Assumption~\ref{assumption1}} and \eqref{defn:criticality} hold. Let $K_n$ be as described \blue{in {\rm Lemma~\ref{lem:technical}} below}. For all $r\geq 1$, $1\leq i\leq K_n$, \blue{$\varepsilon \in (0,\frac{4-\tau}{\tau-1})$} and $n\geq 1$, \begin{eq} \ensuremath{\mathbbm{P}}(\partial_i(r)\neq \varnothing) \leq \ensuremath{\mathbbm{P}}(\bar{\partial}_i(r)\neq \varnothing)+ \ensuremath{\mathbbm{P}}(\rE(\mathscr{C}(i))>n^{\rho+\varepsilon}). \end{eq} \end{proposition} \noindent Before proceeding with the next section in which we investigate $\ensuremath{\mathbbm{P}}(\bar{\partial}_i(r)\neq \varnothing)$, we estimate the expectation and variance of the progeny distribution in the branching process $\mathcal{X}_{\scriptscriptstyle n,\mathrm{res}}(i)$ using Assumptions~\ref{assumption1},~\ref{assumption-extra}, and \eqref{defn:criticality}. Using $\sum_i \theta_i^2 = \infty$, we can choose $i_2(\lambda)$ (depending only on $\lambda$) such that \begin{align}\label{eqn:666} \frac{1}{\mu}\sum_{i=1}^{i_2(\lambda)}\theta_i^2\geq 5\lambda. \end{align} Also the normalizing constant in \eqref{eqn:668} satisfies \blue{ \begin{eq}\label{eq:L-asymp} L = \ell_n(1+o(n^{-\eta})) \end{eq} uniformly over $1\leq i\leq K_n$. 
To see this, first observe that $\ubar{\ell}_n=\ell_n-2n^{\rho+\varepsilon} = \ell_n(1+o(n^{-\eta}))$, since $n^{\rho+\varepsilon}/\ell_n = o(n^{-\eta})$ as $\varepsilon < 1-\rho -\eta = \frac{4-\tau}{\tau-1}$. Also, $\frac{1}{\ell_n}\sum_{j\leq i} d_j = O(d_1K_nn^{-1})= o(n^{2\alpha-1})=o(n^{-\eta})$, \blue{as $K_n = o(n^{\alpha})$} by Assumption~\ref{assumption-extra} \blue{and Lemma~\ref{lem:technical}} and $2\alpha -1 = -\eta$. Hence, \eqref{eq:L-asymp} follows.} Now using Assumption~\ref{assumption1} \blue{and \eqref{defn:criticality}}, note that there exists $N_\lambda\geq 1$ such that for all $n\geq N_\lambda$ and $i_2(\lambda)\leq i\leq K_n$, \begin{eq} \label{eq:computation-mean-BP} \expt{\bar{\xi}_n(i)}&=\frac{1}{L}\sum_{j\geq i} d_j(d_j-1)= \frac{1}{\ell_n}\sum_{j \geq i} d_j(d_j-1) +o(n^{-\eta})\\ &=1+\lambda n^{-\eta}-\frac{1}{\ell_n}\sum_{j<i}d_j(d_j-1)+o(n^{-\eta})\\ & \leq 1+\lambda n^{-\eta}-\frac{1}{2\ell_n}\sum_{j<i}d_j^2+o(n^{-\eta})\\ &\leq 1 - \bigg(Cn^{-2\alpha}\sum_{j< i} d_j^2\bigg)n^{-\eta} +o(n^{-\eta }), \end{eq} where the third step uses \eqref{defn:criticality}, the penultimate step uses the fact that $d_i\geq 2$ so that $\sum_{j<i} d_j \leq \sum_{j<i}d_j^2 /2$ for $i\leq K_n$, and the last step uses \eqref{eqn:666}. Thus, for $n\geq N_\lambda$ and $i_2(\lambda)\leq i\leq K_n$, \begin{eq} \label{eq:BP-expt} \expt{\bar{\xi}_n(i)}\leq 1 - \beta_i^n n^{-\eta} \quad \text{where}\quad \beta_i^n=Cn^{-2\alpha}\sum_{j< i}d_j^2. \end{eq} The estimate in \eqref{eq:BP-expt} will be crucial in the next section. \subsection{Estimating heights of trees via random walks} \label{sec:height-vs-rw} We will prove the following theorem in this section: \begin{theorem}\label{lem:boundary-small-prob} Suppose that {\rm Assumptions~\ref{assumption1}, \ref{assumption-extra}}, and \eqref{defn:criticality} hold. Fix $\varepsilon>0$. Then, for all $i_2(\lambda)\leq i \leq K_n$ (where $i_2(\lambda)$ \blue{and $K_n$ are} given by \eqref{eqn:666} \blue{and {\rm Lemma~\ref{lem:technical}} respectively}) and $n\geq N_\lambda$, \begin{equation} \ensuremath{\mathbbm{P}}(\bar{\partial}_i( \varepsilon n^\eta)\neq \varnothing)\leq \frac{Cd_i}{n^{\alpha}} \mathrm{e}^{- \frac{\varepsilon \beta_i^n}{2}}, \end{equation}for some constant $C = C(\varepsilon,\lambda)>0$. \end{theorem} Define $\cX_n^1 (i)$ to be the Galton-Watson tree starting with a single individual and progeny distribution $\bar{\xi}_n(i)$, and let $\bar{\partial}_i^1(r)$ denote the number of individuals in generation $r$ of $\mathcal{X}_{n}^1(i)$. The crucial ingredient for the proof of Theorem~\ref{lem:boundary-small-prob} is the following: \begin{proposition}\label{prop-height-bound-one-progeny} Under identical conditions as in {\rm Theorem~\ref{lem:boundary-small-prob}}, for all $n\geq N_\lambda$, \begin{eq} \ensuremath{\mathbbm{P}}(\bar{\partial}_{i_2(\lambda)}^1(\varepsilon n^{\eta}) \neq \varnothing) \leq \frac{C}{n^{\alpha}}, \end{eq}for some constant $C = C(\varepsilon,\lambda)>0$. \end{proposition} \begin{proof}[Proof of Theorem~\ref{lem:boundary-small-prob} using Proposition~\ref{prop-height-bound-one-progeny}] Let $M_r$ denote the number of individuals in generation $r$ of $\cX_{n,\mathrm{res}}(i)$, and note that \begin{eq} \ensuremath{\mathbbm{P}}(\bar{\partial}_i( \varepsilon n^\eta)\neq \varnothing) \leq \ensuremath{\mathbbm{E}}[M_{\varepsilon n^{\eta}/2}] \times \ensuremath{\mathbbm{P}}(\bar{\partial}_i^1(\varepsilon n^{\eta}/2) \neq \varnothing).
\end{eq} Now, using \eqref{eq:BP-expt}, \begin{eq} \ensuremath{\mathbbm{E}}[M_{\varepsilon n^{\eta}/2}] \leq d_i (1-\beta_i^n n^{-\eta})^{\varepsilon n^{\eta}/2} \leq d_i \mathrm{e}^{- \frac{\varepsilon \beta_i^n}{2}}, \end{eq} and $\bar{\xi}_n(i)\preceq \bar{\xi}_n(i-1) \preceq \dots \preceq \bar{\xi}_n(i_2(\lambda))$, where $\preceq$ denotes stochastic domination. Thus, \begin{eq} \ensuremath{\mathbbm{P}}(\bar{\partial}_i( \varepsilon n^\eta)\neq \varnothing) \leq d_i \mathrm{e}^{- \frac{\varepsilon \beta_i^n}{2}}\times \ensuremath{\mathbbm{P}}(\bar{\partial}_{i_2(\lambda)}^1(\varepsilon n^{\eta}/2) \neq \varnothing), \end{eq} and the proof of Theorem~\ref{lem:boundary-small-prob} follows using Proposition~\ref{prop-height-bound-one-progeny}. \end{proof} The rest of this section is devoted to the proof of Proposition~\ref{prop-height-bound-one-progeny}. We leverage some key ideas from~\cite{A17}. Define the breadth-first random walk $\bld{s}_n$ by $s_n(0) = 1$ and \begin{equation}\label{eq:random-walk-tree} s_n(u) = s_n(u-1) + \zeta_u -1, \end{equation} where $(\zeta_u)_{u\geq 1}$ are i.i.d.~observations from the distribution of $\bar{\xi}_n(i_2(\lambda))$. Let $\sigma := \inf\{u:s_n(u) = 0\}$ and for $t=0, 1,\ldots, \sigma$, define the function \begin{equation} H_n(t) := \red{\sum_{u=0}^{t-1}}\frac{1}{s_n(u)}\, . \end{equation} A remarkable fact observed in \cite[Proposition 1.7]{A17} states that the height of a tree with breadth-first exploration process $\bld{s}_n$ is at most $3H_n(\sigma)$. Thus Proposition~\ref{prop-height-bound-one-progeny} can be concluded directly from the following estimate: \begin{proposition}\label{prop:RW-hitting-estimate} Under identical conditions as in {\rm Theorem~\ref{lem:boundary-small-prob}}, for all $n\geq N_\lambda$, \begin{eq}\label{eqn:2} \ensuremath{\mathbbm{P}}(H_n(\sigma) > \varepsilon n^{\eta}) \leq \frac{C}{n^{\alpha}}, \end{eq} for some constant $C = C(\varepsilon,\lambda)>0$. \end{proposition} In what follows, we fix $\delta>0$ such that $\delta n^{\alpha} +2 < d_{i_2(\lambda)}/100$ for all $n\geq N_{\lambda}$. Define $I_l:=[2^{l-1} ,2^{l+1} )$ for $l\geq 1$. Let $\ensuremath{\mathbbm{P}}_x$ denote the law of the random walk $\bld{s}_n$, starting from $x$ and satisfying the recurrence relation in \eqref{eq:random-walk-tree}. Let $\sigma_{nl} : = \min\{t\geq 1: s_n(t) \notin I_l\}$ and $r_{nl}: = \min\{t\geq 1:\sup_{x\in I_l}\ensuremath{\mathbbm{P}}_x(\sigma_{nl}>t)\leq 1/2\}$. We first obtain the following bound on $r_{nl}$: \begin{lemma}\label{lem:r-nl-ub} Under identical conditions as in {\rm Theorem~\ref{lem:boundary-small-prob}}, there exists $n_{\star}\geq 1$ depending only on $(d_i\, ;\, i\in [n], n\geq 1)$ such that for all $n\geq n_{\star}$ and all $l\geq 1$ satisfying $2^{l+1}\leq\delta n^{\alpha}$, we have $r_{nl} \leq C 2^{(\tau-2)l}$ for some (sufficiently large) constant $C>0$. \end{lemma} \begin{proof} \blue{By \eqref{eqn:668}, $\ensuremath{\mathbbm{P}}(\bar{\xi}_n(i_2(\lambda))= j ) = (1+o(1)) \ensuremath{\mathbbm{P}}(D_n^* = j+1)$ uniformly over $1\leq j\leq d_{i_2(\lambda)}$. 
Thus, by Assumption~\ref{assumption-extra}, \begin{eq}\label{eq:lower-bound-tail-xi} \ensuremath{\mathbbm{P}}\Big(\frac{u}{c_1}<\bar{\xi}_n(i_2(\lambda)) \leq u \Big) \geq Cu^{- (\tau-2)}, \end{eq}for all $c_1\leq u\leq \delta n^\alpha$.} Next, in order to estimate $\sigma_{nl}$, we bound $\sup_{x\in I_l} \ensuremath{\mathbbm{P}}_x (s_n(t) \in I_l) $ using an upper bound on L\'evy's concentration function due to Esseen~\cite{Ess86}, that we describe now. For a random variable $Z$, define L\'evy's concentration function \begin{eq} Q(Z,L) := \sup_{x\in \ensuremath{\mathbb{R}}} \ensuremath{\mathbbm{P}}(Z\in [x,x+L)). \end{eq} By \cite[Theorem~3.1]{Ess86}, for any $u>0$, \begin{eq}\label{eq:esseen-inequality} Q(s_n(t), u ) \leq \frac{Cu}{\big(t \times \ensuremath{\mathbbm{E}}[|\zeta_1-\zeta_2|^2 \ind{|\zeta_1-\zeta_2| \leq u}]\big)^{1/2}}\, , \end{eq} where $\zeta_1$ and $\zeta_2$ are i.i.d.~realizations from the distribution of $\bar{\xi}_n(i_2(\lambda))$. \red{To get an upper bound on the right side of} \eqref{eq:esseen-inequality}, we first observe that for any random variable $Y $ supported on $\bZ_{\geq 0}$, \begin{eq} \ensuremath{\mathbbm{E}}[Y^2 \ensuremath{\mathbbm{1}}\{Y\leq u\}] &= \sum_{1\leq y\leq u} y^2 \ensuremath{\mathbbm{P}}(Y = y) = \sum_{1\leq y\leq u} \sum_{1\leq x\leq y}y \ensuremath{\mathbbm{P}}(Y = y) \\ &= \sum_{1\leq x\leq u} \sum_{ \blue{x\leq y\leq u}} y \ensuremath{\mathbbm{P}}(Y = y) \geq \sum_{1\leq x\leq u} x \ensuremath{\mathbbm{P}}(\blue{x\leq Y\leq u}). \end{eq} Now, it follows from \eqref{eqn:668} and Assumption~\ref{assumption1}~(iii) that $\liminf_{n\to\infty}\ensuremath{\mathbbm{P}}(\bar{\xi}_n(i_2(\lambda)) =0)>\nobreak0$. Similarly, using \eqref{eqn:668} and Assumption~\ref{assumption1}~(ii), we can choose an integer $j_{\star}>c_1$ such that $\liminf_{n\to\infty}\ensuremath{\mathbbm{P}}\big(\bar{\xi}_n(i_2(\lambda))-1 =j_{\star}\big)>0$. Let $n_{\star}$ be such that \begin{align}\label{eqn:11} \inf_{n\geq n_{\star}}\ensuremath{\mathbbm{P}}\big(\bar{\xi}_n(i_2(\lambda)) =0\big)>0\ \text{ and }\ \inf_{n\geq n_{\star}}\ensuremath{\mathbbm{P}}\big(\bar{\xi}_n(i_2(\lambda)) -1 =j_{\star}\big)>0\, . \end{align} Then for any $n\geq n_{\star}$ and $c_1\leq u\leq \delta n^{\alpha}$, \begin{eq} &\ensuremath{\mathbbm{E}}[|\zeta_1 - \zeta_2|^2\ind{|\zeta_1-\zeta_2| \leq u}] \geq \sum_{1\leq x\leq u} x\ensuremath{\mathbbm{P}}(\blue{x\leq |\zeta_1 - \zeta_2| \leq u}) \\ &\hskip30pt \geq \sum_{1\leq x\leq u/c_1} x\ensuremath{\mathbbm{P}}(\blue{x\leq \zeta_1\leq u})\ensuremath{\mathbbm{P}}(\zeta_2 =0) \geq C u^{4-\tau}, \end{eq} where \blue{the penultimate step uses the fact that if $ x\leq \zeta_1 \leq u$ and $\zeta_2 =0$, then $x\leq |\zeta_1 - \zeta_2| \leq u$}, and the final step follows using \eqref{eq:lower-bound-tail-xi} and the first inequality in \eqref{eqn:11}. Thus, \eqref{eq:esseen-inequality} yields, for $n\geq n_{\star}$ and any $l\geq 1$ satisfying $c_1\leq 2^{l+1}\leq\delta n^{\alpha}$, \begin{eq} \sup_{x\in I_l} \ensuremath{\mathbbm{P}}_x (\sigma_{nl} >t) \leq \sup_{x\in I_l} \ensuremath{\mathbbm{P}}_x (s_n(t) \in I_l) \leq Q(s_n(t),2^{l+1}) \leq \frac{C2^l}{(t2^{l(4-\tau)})^{\blue{1/2}}}, \end{eq} which is at most $1/2$ by choosing $t = C 2^{l(\tau-2)}$ for some large constant $C>0$. 
Finally, for all $n\geq n_{\star}$ and $l\geq 1$ satisfying $2^{l+1}< c_1$, \[ \sup_{x\in I_l} \ensuremath{\mathbbm{P}}_x (\sigma_{nl} >t) \leq \ensuremath{\mathbbm{P}}\big(\bar{\xi}_n(i_2(\lambda)) -1 \neq j_{\star}\big)^t \leq \exp(-Ct)\, , \] where the last step uses the second inequality in \eqref{eqn:11}. This in particular implies that $r_{nl}\leq C$ for all $n\geq n_{\star}$ and $l\geq 1$ satisfying $2^{l+1}< c_1$. This completes the proof. \end{proof} We now decompose the possible values of the random walk~\eqref{eq:random-walk-tree} \blue{starting from $s_n(0) =1$} into different scales. Recall that $I_l:=[2^{l-1} ,2^{l+1} )$. At each time $t$, the scale of $s_n(t)$, denoted by $\mathrm{scl}(s_n(t))$, is an integer. Let $\mathrm{scl}(s_n(0)) =1$. Suppose that $\mathrm{scl}(s_n(u)) = l$ for some $u>0$. A change of scale occurs when $\bld{s}_n$ leaves $I_l$, i.e., at time $T:= \inf\{t>u: s_n(t)\notin I_l\}$, and the new scale is given by $\mathrm{scl}(s_n(T)) = l'$, where $l'$ is such that $s_n(T)\in [2^{l'-1},2^{l'})$. Now, the next change of scale occurs at time $T':= \inf\{t>T: s_n(t)\notin I_{l'}\}$, and the scale remains the same until $T'$, i.e., $\mathrm{scl}(s_n(t)) = l'$ for all $T\leq t<T'$. Define \begin{eq} H_{nl}(t):= \sum_{u \in [0,t), \ \mathrm{scl}(s_n(u))=l}\frac{1}{s_n(u)}, \quad \text{so that} \quad H_n(t) = \sum_{l\geq 1} H_{nl}(t). \end{eq} \noindent Let $T_{nl}(t):= \#\{u\in [0,t): \mathrm{scl}(s_n(u))=l\}$, and note that \begin{eq} 2^{l-1} H_{nl}(t)\leq T_{nl} (t)\leq 2^{l+1} H_{nl}(t). \end{eq} Therefore, for any $x>0$, \begin{equation}\label{eq:T-H-relation} \ensuremath{\mathbbm{P}}\Big(H_{nl}(\sigma)\geq \frac{xr_{nl}}{2^{l-1}}\Big)\leq \ensuremath{\mathbbm{P}}(T_{nl}(\sigma)\geq xr_{nl}). \end{equation} The next lemma estimates $ \ensuremath{\mathbbm{P}}(T_{nl}(\sigma)\geq xr_{nl})$: \begin{lemma}\label{lem:time-spent-l-ub} For all $n\geq 1$, $l\geq 1$, and $x>0$, \begin{equation} \ensuremath{\mathbbm{P}}(T_{nl}(\sigma)\geq x r_{nl})\leq C2^{-l -C' x }, \end{equation}for some absolute constants $C, C'>0$. \end{lemma} \begin{proof} Let us first show that for any $l\geq 2$, \begin{eq}\label{eq:T-nl-bound} \ensuremath{\mathbbm{P}}(T_{nl}(\sigma) \neq 0)\leq 2^{-(l-1)}. \end{eq} For any $t\geq 0$, let $\cF_t$ denote the sigma-field generated by $(\zeta_u)_{u =0}^t$, where we take $\zeta_0 =1$. Note that if $T_{nl}(\sigma)\neq 0$, then $s_n(u)$ hits $2^{l-1}$ before hitting zero. For $H>1$, let $\gamma_H:= \min\{t: s_n(t) \geq H, \text{ or }s_n(t) =0\}$. Since $\ensuremath{\mathbbm{E}}[\zeta_u-1]<0$ by \eqref{eq:BP-expt}, $(s_n(t))_{t\geq 0}$ is a supermartingale with respect to the filtration $(\cF_t)_{t\geq 0}$. Consequently, an application of the optional stopping theorem yields \begin{eq} H \ensuremath{\mathbbm{P}}(s_n(\gamma_H) \geq H) \leq \ensuremath{\mathbbm{E}}[s_n(\gamma_H)] \leq \ensuremath{\mathbbm{E}}[s_n(0)] = 1, \end{eq} and therefore, \begin{eq}\label{super-mg-hitting-time-bound} \ensuremath{\mathbbm{P}}(s_n(\gamma_H) \geq H) \leq \frac{1}{H}. \end{eq} Thus, \eqref{eq:T-nl-bound} follows by taking $H = 2^{l-1}$ together with the fact that $T_{nl}(\sigma)\neq 0$ implies that $s_n(\gamma_H)\geq H$. Next, we define $U_n(t,[a,b))$--the number of upcrossings of an interval $[a,b)$ by $\bld{s}_n$ up to time $t$--to be the supremum of all integers $k$ such that there exist times $(u_j,t_j)_{j=1}^k$ satisfying $0\leq u_1<t_1<u_2<\dots<t_k\leq t$, and $s_n(u_j)<a<b\leq s_n(t_j)$ for all $j\in [k]$. 
We will use the following simple fact (see \cite[Proposition 3.2]{A17}): for any positive integers $k, z, a, b$ with $ 0<z<a<b$, \begin{eq}\label{eq:upcrossing-inequality} \ensuremath{\mathbbm{P}}_z\big(U_n(\sigma,[a,b)) \geq k\big) \leq \Big(\frac{ a-1}{b}\Big)^k. \end{eq} Next, define $\mathrm{visit}(l,t)$ to be the number of visits to scale $l$ up to time $t$, i.e., this is the supremum over $k\in \ensuremath{\mathbb{N}}$ such that one can find $(u_j,t_j)_{j=1}^k$ with $u_1<t_1< \dots <u_k<t_k \leq t$ satisfying $\mathrm{scl}(s_n(u_j))\neq l$ but $\mathrm{scl}(s_n(t_j)) = l$. For the random walk $\bld{s}_n$ started at $s_n(0)=1$, we set $\mathrm{visit}(1,0) =1$ and $\mathrm{visit}(l,t) = 0$ \blue{if $\bld{s}_n$ does not enter scale $l$ before time $t$.} Further, define $M_{nl} = \mathrm{visit}(l,\sigma)$ (the total number of visits to scale $l$) and $t_{jl} = \#\{t<\sigma:\mathrm{scl}(s_n(t))=l, \mathrm{visit}(l,t)=j \}$ (the time spent at scale $l$ during the $j$-th visit). \blue{Note that, if $T_{nl} (\sigma) \neq 0$ occurs, then $M_{nl} \geq 1$, and} $T_{nl} (\sigma) = \sum_{j=1}^{M_{nl}} t_{jl}$. Thus, for any $m\geq 2$ and $x\in\bZ_{\geq 2}$, \begin{eq}\label{eq:Tnl-split-up} \ensuremath{\mathbbm{P}}\big(T_{nl} (\sigma)> 5x r_{nl} \big)= \ensuremath{\mathbbm{P}}\big(\sum_{j=1}^{M_{nl}} t_{jl}>\blue{5x} r_{nl}\big) \leq \ensuremath{\mathbbm{P}}(M_{nl}>m)+\ensuremath{\mathbbm{P}}\big(\sum_{j=1}^{m} t_{jl}>\blue{5x}r_{nl}\big). \end{eq} Now, $M_{nl}>m$ implies that $T_{nl}(\sigma)\neq 0$, and after the first visit to scale $l$, the walk comes back to scale $l$ at least $m$ times before hitting zero. In any of the subsequent visits, if $\bld{s}_n$ enters scale $l$ from below (this can only happen for $l\geq 3$), then that would imply an upcrossing of the interval $[2^{l-2},2^{l-1})$ has taken place. Otherwise, if $\bld{s}_n$ enters scale $l$ from above in any of the subsequent visits, then it must be the case that while leaving the scale $l$ during the previous visit, the walk went from scale $l$ to a higher scale. This yields an upcrossing of $[2^{l},2^{l+1})$. Therefore, for any $l\geq 3$, $M_{nl}>m$ implies that $T_{nl}(\sigma)\neq 0$, and after the first visit to scale $l$ and before hitting zero, either at least $m/2$ many upcrossings of $[2^{l-2},2^{l-1})$ have taken place, or at least $m/2$ many upcrossings of $[2^{l},2^{l+1})$ have taken place. Thus, using \eqref{eq:T-nl-bound}, \eqref{eq:upcrossing-inequality}, and the strong Markov property, for any $l\geq 3$, \begin{eq}\label{Mnl-tail-bound} \ensuremath{\mathbbm{P}}(M_{nl}>m) \leq \frac{C}{2^{l+m/2}}. \end{eq} Next, by the definition of $r_{nl}$ given right above Lemma~\ref{lem:r-nl-ub}, $\ensuremath{\mathbbm{P}}_z(t_{jl} > k r_{nl}) \leq 2^{-k}$ for any $z>0$, which implies that $\lfloor t_{jl}/r_{nl}\rfloor$ can be stochastically dominated by a Geometric$(1/2)$ random variable. Using the strong Markov property, it follows that for any $z>0$, under $\ensuremath{\mathbbm{P}}_z$, $\sum_{j=1}^m \lfloor t_{jl}/r_{nl}\rfloor$ is stochastically dominated by $\sum_{i=1}^m g_i$, where $(g_i)_{i\geq 1}$ is an i.i.d.~collection of Geometric$(1/2)$ random variables.
Thus, for any $z>0$, \begin{align*} \ensuremath{\mathbbm{P}}_z\bigg(\sum_{j=1}^mt_{jl}\geq (k+m)r_{nl}\bigg) &\leq \ensuremath{\mathbbm{P}}_z\bigg(\sum_{j=1}^m\Big\lfloor\frac{t_{jl}}{r_{nl}}\Big\rfloor > k\bigg) \leq \ensuremath{\mathbbm{P}}\bigg(\sum_{i=1}^m g_i>k\bigg) \\ &= \ensuremath{\mathbbm{P}}(\mathrm{Bin}(k,1/2)<m) \leq \mathrm{e}^{-(k-2m)^2/2k}, \end{align*} for $2m\leq k$, where the last step follows using standard concentration inequalities such as \cite[Theorem~2.1]{JLR00}. Consequently, using \eqref{eq:T-nl-bound} and the strong Markov property, $ \ensuremath{\mathbbm{P}}\big(\sum_{j=1}^mt_{jl}\geq (k+m)r_{nl}\big) \leq 2^{-(l-1)}\cdot\mathrm{e}^{-(k-2m)^2/2k} $ for $2m\leq k$. \blue{Combining this with \eqref{eq:Tnl-split-up} and \eqref{Mnl-tail-bound}, and taking $k = 4x$ and $m = x$ yields \begin{eq} \ensuremath{\mathbbm{P}}(T_{nl} (\sigma)> 5x r_{nl} ) \leq C 2^{-l-C'x} \end{eq} for any $l\geq 3$. The proofs for $l=1$ and $l=2$ follow similar steps. This completes the proof of Lemma~\ref{lem:time-spent-l-ub}.} \end{proof} We are now ready to prove Proposition~\ref{prop:RW-hitting-estimate}. \begin{proof}[Proof of Proposition~\ref{prop:RW-hitting-estimate}] Recall the definition of $\bld{s}_n$ from \eqref{eq:random-walk-tree} starting from one, so that $s_n(0)=1$. Fix $\delta>0$. We first estimate the probability of the event $\cB_n$ that $\bld{s}_n$ hits $\delta n^\alpha/2$ before hitting zero. Let $\gamma:= \min\{t\colon s_n(t) \geq \delta n^{\alpha}/2, \text{ or }s_n(t) =0\}$. By \eqref{super-mg-hitting-time-bound}, \begin{eq} \ensuremath{\mathbbm{P}}(\cB_n) = \ensuremath{\mathbbm{P}}\Big(s_n(\gamma) \geq \frac{\delta n^{\alpha}}{2}\Big) \leq \frac{2}{\delta n^{\alpha}}. \end{eq} \blue{Let $m := \max\{l\geq 1: 2^{l+1} \leq \delta n^{\alpha}\}$. On $\cB_n^c$, $H_{nl}(\sigma) = 0$ for $l>m$. } Thus, for any sequence of positive numbers $(b_l)_{l\geq 1}$, \begin{eq}\label{eq:H-n-sigma-decompose} \ensuremath{\mathbbm{P}}\bigg(H_n(\sigma) \geq \sum_{l=1}^m \frac{b_l r_{nl}}{ 2^{l-1}}\bigg) &\leq \frac{2}{\delta n^{\alpha}} + \ensuremath{\mathbbm{P}}\bigg(H_n(\sigma) \geq \sum_{l=1}^m \frac{b_l r_{nl}}{ 2^{l-1}}, \text{ and } \cB_n^c \text{ occurs}\bigg)\\ &\leq \frac{2}{\delta n^{\alpha}} + \ensuremath{\mathbbm{P}}\bigg(H_{nl}(\sigma) \geq \frac{b_l r_{nl}}{ 2^{l-1}} \text{ for some }1\leq l\leq m \bigg). \end{eq} Using \eqref{eq:T-H-relation} and Lemma~\ref{lem:time-spent-l-ub}, \eqref{eq:H-n-sigma-decompose} yields \begin{eq} \label{calc-simple-H-n-sigma} \ensuremath{\mathbbm{P}}\bigg(H_n(\sigma) \geq \sum_{l=1}^m \frac{b_l r_{nl}}{ 2^l}\bigg) \leq \frac{2}{\delta n^{\alpha}} + \sum_{l=1}^m \ensuremath{\mathbbm{P}}(T_{nl} (\sigma) \geq b_l r_{nl})\leq \frac{2}{\delta n^{\alpha}} + C\sum_{l=1}^m 2^{- l - C' b_l}. \end{eq} Letting $b_l = \frac{1}{C'} (m-l+1 +2 \log_2 (m-l+1))$ for $1\leq l\leq m$, and using Lemma~\ref{lem:r-nl-ub}, \begin{eq}\label{calc-sum-blrnl} \sum_{l=1}^m \frac{b_l r_{nl}}{2^{l-1}} &\leq C \sum_{l=1}^m \big(m-l+1 +2\log_2(m-l+1)\big) 2^{l(\tau-3)} \\ &= C \sum_{j=1}^m \frac{(j+2\log_2 j) 2^{(m+1)(\tau-3)} }{2^{j(\tau-3)}}\leq C (\delta n^{\alpha}) ^{\tau - 3} \sum_{j=1}^m \frac{(j+2\log_2 j) }{2^{j(\tau-3)}}\\ & \leq C n^{\eta} \delta^{\tau -3}, \end{eq}where we have used $\sum_{j=1}^\infty \frac{(j+2\log_2 j) }{2^{j(\tau-3)}} <\infty$ in the last step; the bound in \eqref{calc-sum-blrnl} holds for all $n\geq n_{\star}$, where $n_{\star}$ is as in Lemma~\ref{lem:r-nl-ub}. 
Also, \begin{eq}\label{calc-prob-ub} \sum_{l=1}^m 2^{-l- C'b_l} = \sum_{l=1}^m 2^{-(m+1)} (m-l+1)^{-2} \leq \frac{4}{\delta n^{\alpha}} \sum_{l=1}^\infty \frac{1}{l^2}. \end{eq} Thus, the claim in Proposition~\ref{prop:RW-hitting-estimate} follows for $n\geq n_{\star}$ by combining \eqref{calc-sum-blrnl} and \eqref{calc-prob-ub} with \eqref{calc-simple-H-n-sigma}. We conclude that the claimed bound holds for $n\geq N_{\lambda}$ by choosing a larger constant $C$ on the right side of \eqref{eqn:2}. \end{proof} \subsection{Proof of Proposition~\ref{prop:diamter-small-comp}} Let us now complete the proof of Proposition~\ref{prop:diamter-small-comp} using Proposition~\ref{prop:coupling-uppperbound} and Theorem~\ref{lem:boundary-small-prob}. \blue{We take $K_n$ as in Lemma~\ref{lem:technical} so that the results in Section~\ref{sec:height-vs-rw} hold. } Note that these bounds work for $i_2(\lambda)\leq i\leq K_n$, and we will use path counting arguments from \cite{J09b,BDHS17} to bound the diameter for $i>K_n$. Define $\mathscr{C}_{ \mathrm{res}} (i) $ to be the connected component containing vertex $i$ in the graph $\cG_n^{\scriptscriptstyle > i-1}=\mathrm{CM}_n(\bld{d}) \setminus [i-1]$. Note that if $\Delta^{\scriptscriptstyle >K}>\varepsilon n^{\eta}$, then there exists a path of length $\varepsilon n^\eta$ in $\mathrm{CM}_n(\bld{d})$ avoiding all the vertices in $[K]$. Suppose that the minimum index among vertices on that path is $i_0$. Then $\ensuremath{\mathrm{diam}} (\mathscr{C}_{ \mathrm{res}} (i_0) )>\varepsilon n^{\eta}$. Therefore, $\Delta^{\scriptscriptstyle >K}>\varepsilon n^{\eta}$ implies that either there exists $i\in (K,K_n)$ satisfying $\ensuremath{\mathrm{diam}} (\mathscr{C}_{ \mathrm{res}} (i) )>\varepsilon n^{\eta}$, or $\mathrm{diam}(\mathrm{CM}_n(\bld{d})\setminus [K_n]) > \varepsilon n^{\eta}$. We will use the following lemma first to complete the proof of Proposition~\ref{prop:diamter-small-comp} and prove the lemma subsequently: \begin{lemma}\label{lem:diam-bound-subcritical} Under \rm Assumptions~\ref{assumption1} and \ref{assumption-extra}, for any $\varepsilon>0$, $\lim_{n\to\infty}\ensuremath{\mathbbm{P}}(\mathrm{diam}(\mathrm{CM}_n(\bld{d})\setminus [K_n]) > \varepsilon n^{\eta}) = 0$, \blue{where $K_n$ as in Lemma~\ref{lem:technical}.} \end{lemma} \begin{proof}[Proof of Proposition~\ref{prop:diamter-small-comp}] As defined earlier around \eqref{eqn:668}, $\partial_i(r)$ denotes the number of vertices at distance $r$ from the vertex $i$ in the graph $\cG_n^{\scriptscriptstyle > i-1}$. Recall the definition of $\bar{\partial}$ in Proposition~\ref{prop:coupling-uppperbound}. Thus, Proposition~\ref{prop:coupling-uppperbound} and Theorem~\ref{lem:boundary-small-prob} together with Lemma~\ref{lem:diam-bound-subcritical}, yield that \begin{eq}\label{eq:union-bound-diameter} \prob{ \Delta^{\scriptscriptstyle >K}>\varepsilon n^{\eta} } &\leq \sum_{i\in (K,K_n)}\ensuremath{\mathbbm{P}}(\bar{\partial}_i(\varepsilon n^{\eta}\red{/2})\neq \varnothing) + \ensuremath{\mathbbm{P}}(\mathrm{diam}(\mathrm{CM}_n(\bld{d})\setminus [K_n]) > \varepsilon n^{\eta})\\ &\leq C\sum_{i\in (K,K_n)} \Big(\frac{d_i}{n^{\alpha}}\Big)\mathrm{e}^{-\varepsilon\beta_i^n/4} +o(1), \end{eq} where the last line tends to zero if we first take $n\to\infty$ and then take $K\to\infty$ using Assumption~\ref{assumption-extra} \blue{and Lemma~\ref{lem:technical} below}. Thus the proof of Proposition~\ref{prop:diamter-small-comp} follows. 
\end{proof} \begin{proof}[Proof of Lemma~\ref{lem:diam-bound-subcritical}] Let $\bld{d}':=(d_i'\, ;\, i\in [n]\setminus [K_n])$, where $d_i'$ denotes the degree of $i$ in $\mathrm{CM}_n(\bld{d})\setminus [K_n]$. Note that $\mathrm{CM}_n(\bld{d})\setminus [K_n]$ is again distributed as a configuration model conditionally on its degree sequence $\bld{d}'$, with the criticality parameter \begin{eq} \nu'_n = \frac{\sum_{i>K_n}d_i'(d_i'-1)}{\sum_{i>K_n} d_i'} \leq \frac{\sum_{i>K_n}d_i(d_i-1)}{\ell_n - 2 \sum_{i=1}^{K_n} d_i} \leq 1 - R_n n^{-\eta}, \quad R_n = \omega( \log n), \end{eq} where the penultimate step follows using $d_i'\leq d_i$ and $\ell_n' := \sum_{i>K_n} d_i' = \ell_n - 2 \sum_{i=1}^{K_n} d_i$, and the last bound follows from \blue{the definition of $K_n$ given in Lemma~\ref{lem:technical}~(ii)} and an argument identical to that in \eqref{eq:computation-mean-BP}. Let $\ensuremath{\mathbbm{P}}'(\cdot)$ denote the probability measure conditionally on $\bld{d}'$. We will use path-counting techniques for subcritical configuration models. An argument similar to the one given in \cite[Lemma 6.1]{J09b} shows that for any $l\geq 1$, conditional on $\bld{d}'$, the expected number of paths of length $l$ starting from vertex $i$ is at most \[ \frac{\ell_n'd_i' (\nu_n')^{l-1}}{\ell_n'-2l+3} \leq \ell_n'^2 (\nu_n')^{l-1}. \] Thus, for any $i>K_n$, \begin{eq} \label{eq:path-counting-tail} \ensuremath{\mathbbm{P}}'(\exists \text{ path of length at least }\varepsilon n^\eta \text{ from } i \text{ in }\mathrm{CM}_n(\bld{d})\setminus [K_n]) \leq C \ell_n'^2\sum_{l>\varepsilon n^{\eta}} (\nu_n')^l, \end{eq} Thus, for $i>K_n$, the probability in \eqref{eq:path-counting-tail} is at most \begin{eq} Cn^2(1-R_n n^{-\eta})^{\varepsilon n^{\eta}}/(R_n n^{-\eta}) \leq Cn^{2+\eta} \mathrm{e}^{-\varepsilon R_n} = o(1/n), \end{eq} \blue{since $R_n\gg \log{n}$.} Therefore, \begin{eq} \ensuremath{\mathbbm{P}}'(\exists i>K_n: \exists \text{ path of length at least }\varepsilon n^\eta \text{ from } i \text{ in }\mathrm{CM}_n(\bld{d})\setminus [K_n]) = o(1), \end{eq}and the proof of Lemma~\ref{lem:diam-bound-subcritical} follows. \end{proof} \section{Verification of the assumptions for percolated degrees: Proof of Theorem~\ref{cor:GLM-percoltion}} \label{sec:perc-degrees} Let $\cG_n$ denote the graph obtained by performing percolation with edge retention probability $p_c(\lambda)$ (defined in \eqref{eq:critical-window-defn}) on $\mathrm{CM}_n(\bld{d})$. Let $\bld{d}^p=(d_i^p)_{i\in [n]}$ denote the degree sequence of $\cG_n$. By \cite[Lemma 3.2]{F07}, the law of $\cG_n$, conditionally on $\bld{d}^p$, is the same as the law of $\mathrm{CM}_n(\bld{d}^p)$. Thus, it is enough to show that if the original degree sequence $(\bld{d}_n \, ,\, n\geq 1)$ satisfies Assumptions~\ref{assumption1}(i),~\ref{assumption1}(ii)~and~\ref{assumption-extra}, then we can construct $(\bld{d}^p\, ,\, n\geq 1)$ on the same probability space so that Assumption~\ref{assumption1}, \eqref{defn:criticality}, and Assumption~\ref{assumption-extra} are satisfied almost surely (with possibly different parameters), since then the claim in Theorem~\ref{cor:GLM-percoltion} will follow from Theorem~\ref{thm:gml-bound}. \blue{First, note that $\ensuremath{\mathbbm{E}}[d_i^p] = d_ip_c(\lambda) (1+o(1))$. Also, given $\mathrm{CM}_n(\bld{d})$, changing the status of an edge (deleted or retained) can change $d_i^p$ by at most $2$ when the edge is incident to $i$. There are at most $d_i$ choices for such an edge. 
Thus, the bounded difference inequality \cite[Corollary 2.27]{JLR00} implies}, for each fixed $i\geq 1$, and for any $\varepsilon>0$, \begin{gather} \ensuremath{\mathbbm{P}}\big( |d_i^p - d_i p_c(\lambda)| > \varepsilon d_i p_c(\lambda)\big) \leq 2\mathrm{e}^{-\frac{\varepsilon^2}{4} d_i p_c^2(\lambda)}\, . \label{hub-perc} \end{gather} In particular, for each $i\geq 1$, $n^{-\alpha} d_i^p \weakc \theta_i/\nu$ as $n\to\infty$, which verifies Assumption~\ref{assumption1}~(i). Next, let $M_r^p = \sum_{i\in [n]} (d_i^p)_{r}$ and $M_r = \sum_{i\in [n]} (d_i)_{r}$, where $(x)_r:= x(x-1)\cdots (x-r+1)$. To verify the moment conditions in Assumption~\ref{assumption1}~(ii), note that \eqref{eqn:670} holds for $\bld{d}^p$ since \blue{$\sum_{i>K} (d_i^p)_3 \leq \sum_{i>K} (d_i)_3 $.} We will show that \begin{eq}\label{moment-convergence} M_1^p =(1+O_{\scriptscriptstyle \PR}(n^{-1/2}))p_c(\lambda)M_1\ \ \text{ and }\ \ M_2^p =(1+O_{\scriptscriptstyle \PR}(n^{\frac{3\alpha}{2}-1}))p_c(\lambda)^2M_2. \end{eq} Using \eqref{moment-convergence}, the first and second moment assumptions in Assumption~\ref{assumption1}~(ii) holds for the percolated degree sequence. The estimate \eqref{moment-convergence} also shows that \eqref{defn:criticality} holds. Indeed, \begin{eq} \frac{M_2^p}{M_1^p} = \frac{p_c(\lambda)M_2}{M_1} (1+ O_{\scriptscriptstyle \PR}(n^{\frac{3\alpha}{2}-1})) = 1+\nu\lambda n^{-\eta} +o(n^{-\eta}), \end{eq} where the last step follows using \eqref{eq:defn-super-crit}, \eqref{eq:critical-window-defn}, and the fact that $-1+3\alpha/2< -1+2\alpha = -\eta$. It remains to prove \eqref{moment-convergence}. Since $\frac{1}{2}\sum_{i\in [n]} d_i^p $ has a binomial distribution with parameter $\ell_n/2$ and $p_c(\lambda)$, the first asymptotics follows from Chebyshev's inequality. For the asymptotics of $M_2^p $, we use the following construction of $\cG_n$ from~\cite{F07}. \begin{algo}\label{algo:perc-degrees}\normalfont $\bld{d}^p = (d_i^p)_{i\in [n]}$ can be generated as follows: \begin{enumerate} \item[(S0)] Sample $R_n \sim \mathrm{Bin}(\ell_n/2, p_c(\lambda))$. \item[(S1)] Conditionally on $R_n$, sample a uniform subset of $2R_n$ half-edges from the set of $\ell_n$ half-edges. Let $I_j^{(i)}$ denote the indicator that $j$-th half-edge of $i$ is selected. Then $d_i^p = \sum_{j=1}^{d_i} I_j^{(i)}$ for all $i\in [n]$. \end{enumerate} \end{algo} \noindent Using the above construction, note that \begin{eq} M_2^p = \sum_{i\in [n]} \sum_{\substack{1\leq j_1\neq j_2\leq d_i}}I_{j_1}^{(i)}I_{j_2}^{(i)}. \end{eq} Let $\ensuremath{\mathbbm{P}}_1(\cdot) = \ensuremath{\mathbbm{P}}(\cdot \vert R_n)$ and similarly define $\ensuremath{\mathbbm{E}}_1[\cdot]$, $\mathrm{Var}_1(\cdot)$, and $\mathrm{Cov}_1(\cdot,\cdot)$. Then, \begin{eq}\label{M-2-p-cond-exp} \ensuremath{\mathbbm{E}}_1[M_2^p] &= \sum_{i\in [n]} \sum_{\substack{1\leq j_1\neq j_2\leq d_i}} \ensuremath{\mathbbm{P}}_1 (I_{j_1}^{(i)}=1, I_{j_2}^{(i)} = 1) = \sum_{i\in [n]} \sum_{\substack{1\leq j_1\neq j_2\leq d_i}} \frac{{\ell_n-2 \choose 2R_n-2}}{{\ell_n \choose 2R_n}}\\ &=\sum_{i\in [n]} \sum_{\substack{1\leq j_1\neq j_2\leq d_i}} \frac{2R_n(2R_n-1)}{\ell_n(\ell_n-1)} = (1+O_{\scriptscriptstyle \PR}(n^{-1/2}))p_c(\lambda)^2 M_2, \end{eq} where the last step follows using $R_n = (1+O_{\scriptscriptstyle \PR}(n^{-1/2})) p_c(\lambda)\ell_n/2$. 
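For concreteness, the two-step sampling in Algorithm~\ref{algo:perc-degrees} is straightforward to implement. The following Python sketch is an illustration only and is not used in any argument; the function name \texttt{percolated\_degrees} and the use of Python's standard \texttt{random} module are our own choices, and vertices are indexed from $0$ rather than from $1$.
\begin{verbatim}
import random

def percolated_degrees(d, p):
    # d: degree sequence (d_1, ..., d_n) with even sum
    # p: edge retention probability p_c(lambda)
    ell = sum(d)                       # number of half-edges
    # (S0): sample R_n ~ Bin(ell/2, p), the number of retained edges
    R = sum(random.random() < p for _ in range(ell // 2))
    # (S1): retain a uniform subset of 2*R_n half-edges
    half_edges = [i for i, di in enumerate(d) for _ in range(di)]
    kept = random.sample(half_edges, 2 * R)
    dp = [0] * len(d)
    for j in kept:
        dp[j] += 1                     # d_i^p counts the retained half-edges of i
    return dp
\end{verbatim}
The only feature of this construction used above is that, conditionally on $R_n$, the retained half-edges form a uniform subset of size $2R_n$ of the $\ell_n$ half-edges.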
\blue{Next, recall that a collection of random variables $(X_1, \dots, X_t)$ is called negatively associated if for every index set $I\subset [k]$, \begin{eq}\label{defn:neg-association} \mathrm{Cov}\big(f(X_i,i\in I), g(X_i,i\in I^c)\big) \leq 0, \end{eq} for all functions $f: \ensuremath{\mathbb{R}}^{|I|} \mapsto \ensuremath{\mathbb{R}}$ and $g: \ensuremath{\mathbb{R}}^{t-|I|} \mapsto \ensuremath{\mathbb{R}}$ that are component-wise non-decreasing (\cite[Definition 3]{Dubhashi1996}).} Then, conditionally on $R_n$, $I_j^{(i)}$, $j = 1,\dots, d_i$, $i\in [n]$ are negatively associated (cf.~\cite[Theorem 10]{Dubhashi1996}), which yields \blue{the almost sure bound} \begin{eq} \mathrm{Var}_1(M_2^p) &\leq \sum_{i\in [n]} \sum_{\substack{1\leq j_1\neq j_2\leq d_i}} \mathrm{Var}_1(I_{j_1}^{(i)}I_{j_2}^{(i)})+\sum_{i\in [n]} \sum_{\substack{1\leq j_1\neq j_2\leq d_i\\ 1\leq j_3\neq j_4\leq d_i\\ |\{j_1,j_2\}\cap \{j_3,j_4\}| = 1}} \mathrm{Cov}_1(I_{j_1}^{(i)}I_{j_2}^{(i)},I_{j_3}^{(i)}I_{j_4}^{(i)}), \end{eq} \blue{since the contribution of $|\{j_1,j_2\}\cap \{j_3,j_4\}| = 0$ can be ignored due to negative association. Also, $\mathrm{Var}_1(I_{j_1}^{(i)}I_{j_2}^{(i)})\leq 1$ and $|\mathrm{Cov}_1(I_{j_1}^{(i)}I_{j_2}^{(i)},I_{j_3}^{(i)}I_{j_4}^{(i)})| \leq (\mathrm{Var}_1(I_{j_1}^{(i)}I_{j_2}^{(i)})\mathrm{Var}_1(I_{j_3}^{(i)}I_{j_4}^{(i)}))^{1/2} \leq 1$. Therefore, \begin{eq} \mathrm{Var}_1(M_2^p)\leq \sum_{i\in [n]}d_i^2+4 \sum_{i\in [n]}d_i^3 = O(n^{3\alpha}). \end{eq} } Thus, for any $A>0$, \begin{eq}\label{M-2-p-chebyshev} &\ensuremath{\mathbbm{P}} \big( |M_2^p - \ensuremath{\mathbbm{E}}_1[M_2^p]| > A n^{3\alpha/2} \big) = \ensuremath{\mathbbm{E}}\big[\ensuremath{\mathbbm{P}}_1 \big( |M_2^p - \ensuremath{\mathbbm{E}}_1[M_2^p]| > A n^{3\alpha/2} \big) \big]\leq \frac{\ensuremath{\mathbbm{E}}\big[\mathrm{Var}_1(M_2^p)\big]}{A^2n^{3\alpha}} , \end{eq} which can be made arbitrarily small by choosing $A>0$ large. Thus, we conclude the asymptotics of $M_2^p$ in \eqref{moment-convergence} by using \eqref{M-2-p-cond-exp} and \eqref{M-2-p-chebyshev}. Finally, we need to show convergence \blue{in distribution} of empirical measure of $\bld{d}^p$ to finish verifying Assumptions~\ref{assumption1}~(ii)~\blue{ and~(iii)}. Let $n_k^p= \#\{i:d_i^p = k\}$, and $n_{\geq k} ^p = \sum_{r\geq k} n_{r}^p$. It suffices to show that \begin{eq}\label{empirical-perc} \frac{n_{\geq k}^p}{n} \weakc \ensuremath{\mathbbm{P}}(D^p \geq k) \ \ \text{ for all } k\geq 1, \end{eq} where $D^p$ satisfies $\big(D^p\mid D = l\big) \sim \mathrm{Bin} (l,1/\nu)$, for all $l\geq 1$. Let $V_n$ be a uniformly chosen vertex and $D_n^p= d_{V_n}^p$ and $D_n = d_{V_n}$. By the construction in Algorithm~\ref{algo:perc-degrees}, \begin{eq} \ensuremath{\mathbbm{P}}(D_n^p = k \mid D_n = l, R_n) &= \frac{{l\choose k} {\ell_n - l \choose 2R_n - k}}{{\ell_n \choose 2R_n}} = (1+o(1)) {l\choose k} \bigg(\frac{2R_n}{\ell_n}\bigg)^k\bigg(1-\frac{2R_n}{\ell_n}\bigg)^{l-k}\\ & = (1+o_{\scriptscriptstyle \PR}(1)) {l\choose k} \bigg(\frac{1}{\nu}\bigg)^k\bigg(1-\frac{1}{\nu}\bigg)^{l-k}, \end{eq} where in the final step we have used that $R_n = p_c(\lambda)\ell_n/2 (1+o_{\scriptscriptstyle \PR}(1))$ and $p_c(\lambda) = \nu^{-1} (1+o(1))$. Thus, \begin{eq}\label{exp-perc-empricial} \ensuremath{\mathbbm{E}}\bigg[\frac{n_k^p}{n}\ \Big\vert\ R_n\bigg] = \ensuremath{\mathbbm{P}}(D_n^p = k \mid R_n) \weakc \ensuremath{\mathbbm{P}}(D^p = k). 
\end{eq} \blue{Moreover, note that $d_i^p = \sum_{j=1}^{d_i} I_j^{(i)}$, and the definition of negative association in \eqref{defn:neg-association} allows us to conclude negative correlation between increasing functions of $I_j^{(i)}$ that depend on disjoint sets of indices. Therefore,} \begin{eq} \mathrm{Var}\big(n_{\geq k}^p \mid R_n\big) = \mathrm{Var}\big(\sum_{i\colon d_i\geq k} \ensuremath{\mathbbm{1}}_{\{d_i^p\geq k\} }\ \Big\vert\ R_n\big) \leq \sum_{i\colon d_i\geq k} \mathrm{Var} \big(\ensuremath{\mathbbm{1}}_{\{d_i^p\geq k\} } \mid R_n\big) \leq n, \end{eq} and thus $\ensuremath{\mathbbm{E}}[\mathrm{Var}(n_{\geq k}^p/n \mid R_n)] = O(1/n)$. This together with \eqref{exp-perc-empricial} yields \eqref{empirical-perc}. We finally verify that Assumption~\ref{assumption-extra} holds with high probability. Let the constants $c_1$ and $c_0$ be as in Assumption~\ref{assumption-extra}. Let $c_1':=9 c_1\nu$. It is enough to show that there exists a deterministic constant \blue{$c_0'>0$} such that \begin{eq}\label{tail-percolated} \pr\Big( \frac{1}{n}\sum_{i\in [n]} d_i^p\blue{\ensuremath{\mathbbm{1}}\{l<d_i^p\leq c_1' l \}} \geq \frac{c_0'}{l^{\tau-2}} \ \ \text{ for all }\ \ 1\leq l \leq d_1/c_1' \Big) \to 1 \ \ \text{ as }\ \ n\to\infty. \end{eq} \blue{ Write $\sum_{*}$ for $\sum_{i: 8 l < d_i p_c(\lambda) \leq 8c_1 l }$. Then, for $1\leq l \leq d_1/c_1'$ and all large $n$, \begin{eq}\label{eqn:23} \frac{1}{n}\sum_{i\in [n]} \ensuremath{\mathbbm{E}}\big[d_i^p\ensuremath{\mathbbm{1}}\{l<d_i^p\leq c_1' l \}\big] & \geq \frac{1}{n}\sum\displaystyle_*\ \ensuremath{\mathbbm{E}}\big[d_i^p\ensuremath{\mathbbm{1}}\{l<d_i^p\leq c_1' l \}\big] \\ &= \frac{1}{n}\sum\displaystyle_*\ \ensuremath{\mathbbm{E}}\big[d_i^p\ensuremath{\mathbbm{1}}\{l<d_i^p\} \big], \end{eq} where the last step uses the fact that when $d_i p_c(\lambda)\leq 8c_1 l $, we have $d_i^p \leq d_i \leq 8c_1 l /p_c(\lambda) \leq c_1' l$ for all large $n$. Let $X_i \sim \mathrm{Bin} (\lfloor d_i/2\rfloor , p_c(\lambda))$. Then $d_i^p$ is stochastically larger than $X_i$. Using \eqref{eqn:23}, we see that for $1\leq l \leq d_1/c_1'$ and all large $n$, \begin{eq}\label{expt-perc-tail} & \frac{1}{n}\sum_{i\in [n]} \ensuremath{\mathbbm{E}}\big[d_i^p\ensuremath{\mathbbm{1}}\{l<d_i^p\leq c_1' l \}\big] \geq \frac{1}{n}\sum\displaystyle_*\ \ensuremath{\mathbbm{E}}\big[X_i\cdot\ensuremath{\mathbbm{1}}\{l<X_i\} \big] \\ &\hskip25pt \geq \frac{1}{n}\sum\displaystyle_*\ \ensuremath{\mathbbm{E}}[X_i]\ensuremath{\mathbbm{P}}(X_i>l) \geq \frac{1}{n}\sum\displaystyle_*\ \ensuremath{\mathbbm{E}}[X_i] \ensuremath{\mathbbm{P}}\big( X_i \geq \floor{\lfloor d_i/2\rfloor p_c(\lambda)} \big) \\ &\hskip50pt \geq \frac{1}{n}\sum\displaystyle_*\ \ensuremath{\mathbbm{E}}[X_i] \cdot\frac{1}{2} \geq \frac{C}{n}\sum\displaystyle_*\ d_i \geq \frac{C'}{l^{\tau-2}} \, , \end{eq} where the third step uses $l< d_i p_c(\lambda) /8 \leq \floor{\lfloor d_i/2\rfloor p_c(\lambda)}$, and the final step follows using Assumption~\ref{assumption-extra}}. Now let $F_1 := \sum_{i\in [n]} d_i^p\ensuremath{\mathbbm{1}}\{l<d_i^p\leq c_1' l \}$ and $F_2 := \ensuremath{\mathbbm{E}}[F_1\vert \mathrm{CM}_n(\bld{d}) ]$. We will apply the bounded difference inequality from \cite[Corollary 2.27]{JLR00}. Given the graph $\mathrm{CM}_n(\bld{d})$, if we keep one extra edge in the percolated graph, then $F_1$ can change by at most $2c_1' l$. 
Thus, for any $\varepsilon>0$, \begin{eq}\label{prob-tail-conc-1} \ensuremath{\mathbbm{P}}\Big(|F_1 - F_2|> \frac{n\varepsilon }{l^{\tau-2}}\ \Big\vert\ \mathrm{CM}_n(\bld{d})\Big)\leq 2\exp\bigg(- \frac{n^2\varepsilon^2}{l^{2(\tau-2) }(2c_1' l)^2\cdot \frac{\ell_n}{2} }\bigg) \leq 2\mathrm{e}^{-C\varepsilon^2 n l^{-2(\tau-1)}} . \end{eq} Also, we can apply concentration inequalities such as \cite[Lemma 2.5]{BCDS18} to conclude that \begin{eq}\label{prob-tail-conc-2} \ensuremath{\mathbbm{P}}\Big(|F_2 - \ensuremath{\mathbbm{E}}[F_2]|> \frac{n\varepsilon }{l^{\tau-2}}\ \Big)\leq 2\mathrm{e}^{-C\varepsilon^2 n l^{-2(\tau-1)} }. \end{eq} Combining \eqref{prob-tail-conc-1} and \eqref{prob-tail-conc-2} together with \eqref{expt-perc-tail} shows that there exists an $\varepsilon_0>0 $ such that \eqref{tail-percolated} holds if we replace ``for all $1\leq l\leq d_1/c_1'$'' by ``for all $1\leq l\leq n^{\varepsilon_0}$.'' For $l\geq n^{\varepsilon_0}$, we use \eqref{hub-perc} together with a union bound to complete the proof of \eqref{tail-percolated}. \appendix \section{A technical lemma} \begin{lemma}\label{lem:technical} Let $\beta_i^n = n^{-2\alpha} \sum_{j=1}^{i-1} d_j^2$. Then {\rm Assumption~\ref{assumption1}(i), \eqref{eqn:670}, and Assumption~\ref{assumption-extra}} imply the following: \begin{enumeratei} \item For all $\varepsilon>0$, \begin{eq}\label{eq:extra-assumption-2} \lim_{K\to\infty} \limsup_{n\to\infty} \sum_{i > K} \Big(\frac{d_i}{n^\alpha}\Big) \times \mathrm{e}^{-\varepsilon \beta_i^n} = 0. \end{eq} \item There exists a sequence $(K_n)_{n\geq 1}$ with $K_n\to\infty$, and $K_n = o(n^\alpha)$ such that $\beta_{K_n}^{n} > \log^3 n $ for all large $n$. \end{enumeratei} \end{lemma} \begin{proof} We will use $C_0,C_1,\dots$ etc. as generic notation for positive constants that do not depend on $n$. Recall Assumption~\ref{assumption-extra}. Let $\theta_{i,n} := n^{-\alpha}d_i$, $i\in [n]$. We first claim that Assumption~\ref{assumption-extra} implies \begin{eq}\label{eq:deg-power-lb} \min_{2\leq i\leq n} \theta_{i,n}^{\tau -2} \sum_{j = 1}^{i-1}\theta_{j,n} \geq C_0. \end{eq} To see this, let $1=i_1<i_2<i_3<\ldots$ be the indices such that $d_{i_{k-1}}=d_{i_{k-1} +1}=\ldots=d_{i_k -1}>d_{i_k}$ for $k\geq 2$. Then for $k\geq 2$, \[ \frac{1}{\ell_n}\sum_{j=1}^{i_{k}-1}d_j = \ensuremath{\mathbbm{P}}(D_n^* > d_{i_k}) \geq \ensuremath{\mathbbm{P}}\big(d_{i_k}<D_n^*\leq c_1 d_{i_k} \big) \geq c_0 (d_{i_k})^{-(\tau -2)} , \] and consequently, \begin{eq}\label{i-k-tail} \min_{k} \theta_{i_k,n}^{\tau -2} \sum_{j = 1}^{i_k-1}\theta_{j,n} \geq C_0. \end{eq} If $i_k>i\geq i_{k-1}$, then $\theta_{i,n}^{\tau -2} \sum_{j = 1}^{i-1}\theta_{j,n} = \theta_{i_{k-1},n}^{\tau -2} \sum_{j = 1}^{i-1}\theta_{j,n} \geq \theta_{i_{k-1},n}^{\tau -2} \sum_{j = 1}^{i_{k-1}-1}\theta_{j,n}$. Thus we conclude \eqref{eq:deg-power-lb} from \eqref{i-k-tail}. Next, define \begin{eq} f_n (x):= \begin{cases} \frac{1}{\theta_{i+1,n}}\, , & \quad \text{if } \sum_{j=1}^{i-1} \theta_{j,n} \leq x < \sum_{j=1}^{i} \theta_{j,n} \text{ for some }i\in[n-1], \\ 0\, , & \quad \text{if } x\geq \sum_{j=1}^{n-1} \theta_{j,n}, \end{cases} \end{eq} and \begin{eq} g_n (x):= \begin{cases} \sum_{j=1}^i \theta_{j,n}^2\, , & \quad \text{if } \sum_{j=1}^{i-1} \theta_{j,n} \leq x < \sum_{j=1}^{i} \theta_{j,n} \text{ for some }i\in[n], \\ 0\, , & \quad \text{if } x\geq \sum_{j=1}^{n} \theta_{j,n}\, . 
\end{cases} \end{eq} Since $\sum_{j=1}^i\theta_{j,n}\leq 2\sum_{j=1}^{i-1}\theta_{j,n}$ for $2\leq i\leq n$, we have, using \eqref{eq:deg-power-lb}, $\theta_{i+1,n}^{\tau -2} \sum_{j = 1}^{i-1}\theta_{j,n} \geq C_0/2$. Therefore, $f_n(x)^{-(\tau -2) } \times 2x \geq C_0$ for any $\theta_{1,n}\leq x < \sum_{j=1}^{n-1} \theta_{j,n}$, and consequently, \begin{eq} f_n(x) \leq C_1 x^{\frac{1}{\tau-2}} \quad \text{for} \quad \theta_{1,n}\leq x < \sum_{j=1}^{n-1} \theta_{j,n}. \end{eq} Next, for $i\in[n-1]$, \begin{eq} \sum_{j=1}^i \theta_{j,n}^2 \geq \sum_{j=1}^i \theta_{j,n} \theta_{j+1,n} &= \theta_{1,n} \theta_{2,n} + \int_{\theta_{1,n}}^{\sum_{j=1}^i \theta_{j,n}} \frac{\mathrm{d} x}{f_{n}(x)}\\ &\geq C_2\int_{\theta_{1,n}}^{\sum_{j=1}^i \theta_{j,n}} \frac{\mathrm{d} x}{x^{1/(\tau-2)}}\geq C_3 \bigg(\sum_{j=1}^i \theta_{j,n}\bigg)^{\frac{\tau-3}{\tau-2}} - C_4. \end{eq} Therefore, \begin{eq}\label{lb-g-n-x} g_n(x) \geq C_3 x^{\frac{\tau-3}{\tau-2}} - C_4 \quad \text{for} \quad 0\leq x < \sum_{j=1}^{n-1} \theta_{j,n}. \end{eq} Now, \begin{eq} \sum_{i=K}^{n-1} \theta_{i,n} \mathrm{e}^{-\varepsilon \sum_{j=1}^i \theta_{j,n}^2} = \int_{\sum_{j=1}^{K-1} \theta_{j,n}}^{ \sum_{j=1}^{n-1} \theta_{j,n} } \mathrm{e}^{-\varepsilon g_n(x)} \mathrm{d} x \leq C_5\int_{\sum_{j=1}^{K-1} \theta_{j,n}}^{ \infty} \mathrm{e}^{-\varepsilon C_3 x^{\frac{\tau-3}{\tau-2}}} \mathrm{d} x, \end{eq} and the above integral is finite for each fixed $K\geq 1$. By Assumption~\ref{assumption1}(i), $\sum_{j=1}^{K-1} \theta_{j,n} \to \sum_{j=1}^{K-1} \theta_{j}$ as $n\to\infty$, which diverges if we take $K\to\infty$. Thus, the proof of \eqref{eq:extra-assumption-2} follows. We next prove Lemma~\ref{lem:technical}(ii). Let $K_n:=\lceil n^{\alpha/2}\rceil$. Suppose that $\beta^n_{K_n} \leq \log ^3 n$. Using \eqref{lb-g-n-x}, it follows that \begin{eq} \log^3 n \geq \beta^n_{K_n} \geq C_3 \bigg(\sum_{j=1}^{K_n} \theta_{j,n}\bigg)^{\frac{\tau-3}{\tau-2}} - C_4, \end{eq}and an application of \eqref{eq:deg-power-lb} yields \begin{eq} C_4 + \log ^3 n \geq C (\theta_{K_n+1,n})^{-(\tau-3)} \implies \theta_{K_n,n} \geq \frac{C'}{(\log n)^{\frac{3}{\tau-3}}}\, . \end{eq} Therefore, $\sum_{i=1}^{K_n} \theta_{i,n}^3 \geq C'^3 K_n (\log n)^{-9/(\tau-3)}$. Thus, if $\beta^n_{K_n} \leq \log ^3 n$ for infinitely many $n$, then \begin{eq} \liminf_{n\to\infty}n^{-3\alpha} \sum_{i\in [n]} d_i^3 \geq \liminf_{n\to\infty} \sum_{i=1}^{K_n} \theta_{i,n}^3 = \infty\, , \end{eq} which leads to a contradiction as Assumption~\ref{assumption1}(i) and \eqref{eqn:670} imply that $\sup_{n}n^{-3\alpha}\sum_{i\in [n]} d_i^3<\infty$. Thus the claim in Lemma~\ref{lem:technical}~(ii) also follows. \end{proof} \section{Degree sequence satisfying compactness criterion}\label{sec:appendix-comapctness} In this section, we prove Proposition~\ref{prop:deg-compact}. \vskip5pt \noindent{\bf Proof of Proposition~\ref{prop:deg-compact}.} Define $\bld{d}^{\scriptscriptstyle (1,n)}:=(d_i^{\scriptscriptstyle (1,n)})_{i\in [n]}$ with $d_i^{\scriptscriptstyle (1,n)}: = \lceil n^{\alpha} \theta_i \rceil$ for $i\in [n]$. Let $\bld{d}^{\scriptscriptstyle (2,n)} = (d_i^{\scriptscriptstyle (2,n)})_{i\in [n]}$ be such that, for some $0<K_1<K_2<\infty$, \begin{eq}\label{eq:d-2n} K_1\Big( \frac{n}{i} \Big)^{\alpha} \leq d_i^{\scriptscriptstyle (2,n)} \leq K_2\Big( \frac{n}{i} \Big)^{\alpha}, \quad \text{ for }i \in [n], \end{eq} and Assumption~\ref{assumption1}(ii) and \eqref{eq:defn-super-crit} are satisfied. 
The idea is to change the high-degree vertices of $\bld{d}^{\scriptscriptstyle (2,n)}$ by those of $\bld{d}^{\scriptscriptstyle (1,n)}$. To this end, let \[ i_{\scriptscriptstyle (1,n)}:= \max\big\{i\geq 1\colon d_i^{\scriptscriptstyle (1,n)} \geq (\frac{n}{\log n})^{\alpha}\big\} \ \ \text{ and }\ \ i_{\scriptscriptstyle (2,n)}:= \max\big\{i\geq 1\colon d_i^{\scriptscriptstyle (2,n)} \geq (\frac{n}{\log n})^{\alpha}\big\}\, . \] For two finite sequences $(x_i)$ and $(y_j)$, we write $\texttt{Sort-Merge}((x_i),(y_j))$ as the sequence obtained by concatenating $(x_i)$ and $(y_j)$ and then sorting the sequence in a nonincreasing order. We define \begin{eq}\label{defn:merged-degree} \bld{d}^{\scriptscriptstyle (n)} = (d_i^{\scriptscriptstyle (n)}) := \texttt{Sort-Merge} \Big((d_i^{\scriptscriptstyle (1,n)})_{i=1}^{i_{\scriptscriptstyle (1,n)}}, (d_i^{\scriptscriptstyle (2,n)})_{i=i_{\scriptscriptstyle (2,n)}+1}^{n}\Big). \end{eq} Note that $i_{\scriptscriptstyle (1,n)} \to \infty$. Also, \begin{eq} \infty > \sum_{i=1}^\infty \theta_i^3 \geq \sum_{i=1}^{i_{\scriptscriptstyle (1,n)}} \theta_i^3 \geq i_{\scriptscriptstyle (1,n)} \theta_{i_{\scriptscriptstyle (1,n)}}^3 \geq i_{\scriptscriptstyle (1,n)} \Big(\frac{1}{2 \log n}\Big)^{3\alpha}, \end{eq} and therefore $i_{\scriptscriptstyle (1,n)} \leq C (\log n)^{3\alpha}$. Further, it follows from \eqref{eq:d-2n} that $i_{\scriptscriptstyle (2,n)} \leq K_2^{1/\alpha} (\log n)$. Therefore, the degree sequence in \eqref{defn:merged-degree} has length $n(1+o(1))$. Since $i_{\scriptscriptstyle (1,n)} \to \infty$, Assumption~\ref{assumption1}~(i) is satisfied by $(\bld{d}^{\scriptscriptstyle (n)})_{n\geq 1}$. Also, for each fixed $K\geq 1$, \begin{eq} n^{-3\alpha}\sum_{i>K} (d_{i}^{\scriptscriptstyle (n)})^3 \leq \sum_{i>K} 8\theta_i^3+n^{-3\alpha}\sum_{i>K} (d_{i}^{\scriptscriptstyle (2,n)})^3, \end{eq} and thus \eqref{eqn:670} holds. Next, it can be easily checked that the remaining conditions in Assumption~\ref{assumption1}(ii) and \eqref{eq:defn-super-crit} hold for $(\bld{d}^{(n)})_{n\geq 1}$ by making use of the fact that $(\bld{d}^{\scriptscriptstyle (2,n)})_{n\geq 1}$ satisfies Assumption~\ref{assumption1}(ii) and \eqref{eq:defn-super-crit}. Finally we have to verify that $(\bld{d}^{\scriptscriptstyle (n)})_{n\geq 1}$ satisfies Assumption~\ref{assumption-extra}. It suffices to show that there exist $C>1$ and $C'>0$ such that for all $n\geq 1$, \[ \sum_i d_i^{\scriptscriptstyle (n)}\ind{l< d_i^{\scriptscriptstyle (n)}\leq Cl} \geq C' n/ l^{\tau-2} \ \ \text{ for }\ \ 1\leq l< d_1^{\scriptscriptstyle (n)}\, . \] This can be proved in a straightforward way by using \eqref{eq:compactness}, \eqref{eq:d-2n}, and the definition of $\bld{d}^{\scriptscriptstyle (n)}$ given in \eqref{defn:merged-degree}. We omit the details. \end{document}
\begin{document} \setcounter{page}{125} \publyear{2021} \papernumber{2084} \volume{183} \issue{1-2} \finalVersionForARXIV \title{The Complexity of Synthesis of $b$-Bounded Petri Nets} \author{Ronny Tredup\thanks{Address for correspondence: Universit\"at Rostock, Institut f\"ur Informatik, Theoretische Informatik, Albert-Einstein-Stra\ss e 22, 18059, Rostock, Germany} \\ Institut f\"ur Informatik, Theoretische Informatik\\ Universit\"at Rostock\\ Albert-Einstein-Stra\ss e 22, 18059, Rostock, Germany\\ [email protected] } \maketitle \runninghead{R. Tredup}{The Complexity of Synthesis of $b$-Bounded Petri Nets} \begin{abstract} For a fixed type of Petri nets $\tau$, \textsc{$\tau$-Synthesis} is the task of finding for a given transition system $A$ a Petri net $N$ of type $\tau$ ($\tau$-net, for short) whose reachability graph is isomorphic to $A$ if there is one. The decision version of this search problem is called \textsc{$\tau$-Solvability}. If an input $A$ allows a positive decision, then it is called $\tau$-solvable and a sought net $N$ $\tau$-solves $A$. As a well known fact, $A$ is $\tau$-solvable if and only if it has the so-called $\tau$-\emph{event state separation property} ($\tau$-ESSP, for short) and the $\tau$-\emph{state separation property} ($\tau$-SSP, for short). The question whether $A$ has the $\tau$-ESSP or the $\tau$-SSP defines also decision problems. In this paper, for all $b\in \mathbb{N}$, we completely characterize the computational complexity of \textsc{$\tau$-Solvability}, \textsc{$\tau$-ESSP} and \textsc{$\tau$-SSP} for the types of pure $b$-bounded Place/Transition-nets, the $b$-bounded Place/Transition-nets and their corresponding $\mathbb{Z}_{b+1}$-extensions. \end{abstract} \begin{keywords} Petri nets, synthesis, $b$-bounded, SSP, ESSP, solvability \end{keywords} \section{Introduction}\label{introduction} The task of system \emph{analysis} is to examine the behavior of a system and to derive its behavioral properties. Its counterpart, \emph{synthesis}, is the task of automatically finding an implementing system for a given behavioral specification. A valid synthesis procedure then computes a system that is correct by design if it exists. In this paper we investigate a certain instance of synthesis: For a fixed type of Petri nets $\tau$, \textsc{$\tau$-Synthesis} is the task to find, for a given directed labeled graph $A$, called transition system (TS, for short), a Petri net $N$ of type $\tau$ ($\tau$-net, for short) whose state graph is isomorphic to $A$ if such a net exists. The decision version of \textsc{$\tau$-Synthesis} is called $\tau$-\textsc{Solvability}. Synthesis for Petri nets has been investigated and applied for many years and in various fields: It is used to extract concurrency and distributability data from sequential specifications like transition systems or languages \cite{DBLP:journals/fac/BadouelCD02}. Synthesis has applications in the field of process discovery to reconstruct a model from its execution traces \cite{DBLP:books/daglib/0027363}. In \cite{DBLP:journals/deds/HollowayKG97}, it is employed in supervisory control for discrete event systems. It is also used for the synthesis of speed-independent circuits \cite{DBLP:journals/tcad/CortadellaKKLY97}. In this paper, we investigate the computational complexity of synthesis for certain types of \emph{bounded} Petri nets, that is, Petri nets for which there is a positive integer $b$ that restricts the number of tokens on every place in every reachable marking. 
In \cite{DBLP:conf/tapsoft/BadouelBD95,DBLP:series/txtcs/BadouelBD15}, synthesis has been shown to be solvable in polynomial time for bounded and pure bounded \emph{Place/Transition}-nets (P/T-nets, for short). The approach provided in \cite{DBLP:conf/tapsoft/BadouelBD95,DBLP:series/txtcs/BadouelBD15} guarantees that a (pure) bounded P/T-net is output if such a net exists. Unfortunately, it does not work for preselected bounds. In fact, in \cite{DBLP:journals/tcs/BadouelBD97} it has been shown that solvability is NP-complete for $1$-bounded P/T-nets (there referred to as \emph{elementary net systems}), that is, if the bound $b=1$ is chosen in advance. In \cite{DBLP:conf/stacs/Schmitt96}, the type of pure $1$-bounded P/T-nets is extended by the additive group $\mathbb{Z}_2$ of integers modulo $2$ (there referred to as \emph{flip-flop nets}). Transitions of these nets can simulate the addition of integers modulo 2. The result of \cite{DBLP:conf/stacs/Schmitt96} shows that this suffices to bring the complexity of synthesis down to polynomial time. In \cite{DBLP:conf/tamc/TredupR19,DBLP:conf/apn/TredupR19}, we advanced the approach of examining the effects of the presence and absence of different interactions on the complexity of synthesis for the broader class of \emph{Boolean} Petri nets that enable independence between places and transitions. This class also contains the type of $1$-bounded P/T-nets and its $\mathbb{Z}_2$-extension. Although~\cite{DBLP:conf/tamc/TredupR19,DBLP:conf/apn/TredupR19} show that synthesis remains hard for 75 of the 128 possible Boolean types (allowing independence), \cite{DBLP:conf/tamc/TredupR19} also identifies 36 types for which synthesis is possible in polynomial time. The latter applies in particular to the $\mathbb{Z}_2$-extension of $1$-bounded P/T-nets. As another aspect that might influence the complexity of synthesis of (pure) $1$-bounded P/T-nets, the grade $g$ of a TS $A$ has been introduced in \cite{DBLP:conf/apn/TredupRW18}: A TS $A$ is $g$-\emph{grade} if every state of $A$ has at most $g$ incoming and at most $g$ outgoing labeled edges. There we showed that synthesis of pure $1$-bounded P/T-nets remains NP-complete even for acyclic $1$-grade TS. In \cite{DBLP:journals/corr/abs-1911-05834}, for any fixed $g\in \mathbb{N}$, we completely characterize the computational complexity of synthesis from $g$-grade TS for all Boolean Petri net types that enable independence. Surprisingly enough, for many other Boolean types, synthesis remains hard for all $g\geq 1$. For example, this applies to the type of \emph{inhibitor nets} and the type of \emph{contextual nets}, which have originally been introduced in~\cite{DBLP:conf/apn/Pietkiewicz-Koutny97} and~\cite{DBLP:journals/acta/MontanariR95} and are referred to as $\{\ensuremath{\textsf{nop}}, \ensuremath{\textsf{inp}}, \ensuremath{\textsf{out}}, \ensuremath{\textsf{free}}\}$ and $\{\ensuremath{\textsf{nop}}, \ensuremath{\textsf{inp}}, \ensuremath{\textsf{out}}, \ensuremath{\textsf{used}}, \ensuremath{\textsf{free}}\}$ in~\cite{DBLP:journals/corr/abs-1911-05834}, respectively. However, there are several types for which the complexity changes when $g$ becomes small enough.
This applies in particular to the Boolean type of \emph{trace nets}, which was originally introduced in \cite{DBLP:journals/acta/BadouelD95} and is referred to as $\{\ensuremath{\textsf{nop}},\ensuremath{\textsf{inp}},\ensuremath{\textsf{out}},\ensuremath{\textsf{res}},\ensuremath{\textsf{set}},\ensuremath{\textsf{used}},\ensuremath{\textsf{free}}\}$ in \cite{DBLP:journals/corr/abs-1911-05834}. Synthesis for this type is hard if $g\geq2$, but polynomial for $g < 2$. The same is true for the type of \emph{set nets}, which was originally introduced in~\cite{DBLP:journals/acta/KleijnKPR13} and is referred to as $\{\ensuremath{\textsf{nop}}, \ensuremath{\textsf{inp}}, \ensuremath{\textsf{set}}, \ensuremath{\textsf{used}}\}$ in~\cite{DBLP:journals/corr/abs-1911-05834}. However, some questions in the area of synthesis for Petri nets are still open. Recently, the complexity status of synthesis for (pure) $b$-bounded P/T-nets, where $b\geq 2$, has been reported as unknown~\cite{DBLP:conf/concur/SchlachterW17}. Furthermore, it has not yet been analyzed whether extending (pure) $b$-bounded P/T-nets by the group $\mathbb{Z}_{b+1}$ also provides a tractable superclass if $b\geq 2$. Let $b\in\mathbb{N}^+$. In this paper, we show that solvability for (pure) $b$-bounded P/T-nets is NP-complete even if the input is an acyclic $1$-grade TS. Moreover, for $b\geq 2$, we introduce (pure) $\mathbb{Z}_{b+1}$-extended $b$-bounded P/T-nets. This type originates from (pure) $b$-bounded P/T-nets by adding interactions between places and transitions simulating the addition of integers modulo $b+1$. This extension is a natural generalization of Schmitt's approach that does this for $b=1$~\cite{DBLP:conf/stacs/Schmitt96}. In contrast to Schmitt's result~\cite{DBLP:conf/stacs/Schmitt96}, in this paper, we show that solvability for (pure) $\mathbb{Z}_{b+1}$-extended $b$-bounded P/T-nets remains NP-complete for all $b\geq 2$ even if the input is restricted to $g$-grade TS where $g\geq 2$. In particular, this makes the synthesis of all of these $b$-bounded Petri net types NP-hard. The question arises whether there are also types of $b$-bounded P/T-nets for which synthesis is tractable if $b\geq 2$. We answer this question affirmatively and propose the type of restricted $\mathbb{Z}_{b+1}$-extended $b$-bounded P/T-nets. This paper shows that synthesis is solvable in polynomial time for this type. To prove the NP-completeness of solvability, we use its well-known close connection to the so-called \emph{event state separation property} (ESSP, for short) and \emph{state separation property} (SSP, for short). In fact, a TS $A$ is solvable with respect to a Petri net type if and only if it has the type-related ESSP \emph{and} SSP \cite{DBLP:series/txtcs/BadouelBD15}. The question of whether a TS $A$ has the ESSP or the SSP also defines decision problems. The possibility of efficiently deciding whether $A$ has these properties serves as a quick-fail pre-processing mechanism for solvability. Moreover, if $A$ has the ESSP then synthesizing Petri nets up to language equivalence is possible \cite{DBLP:series/txtcs/BadouelBD15}. This makes the decision problems ESSP and SSP worth studying. In \cite{DBLP:journals/tcs/Hiraishi94}, both problems have been shown to be NP-complete for pure $1$-bounded P/T-nets. This has been confirmed for almost trivial inputs in \cite{DBLP:conf/apn/TredupRW18,DBLP:conf/concur/TredupR18}.
In this paper, for all $b\in \mathbb{N}^+$, we show that ESSP and SSP are NP-complete for (pure) $b$-bounded P/T-nets even if the input is an acyclic $1$-grade TS. Moreover, for all $b\geq 2$, the ESSP is shown to remain NP-complete for (pure) $\mathbb{Z}_{b+1}$-extended $b$-bounded P/T-nets for $g$-grade TS where $g\geq 2$. By way of contrast, in this paper, we show that SSP is decidable in polynomial time for the type of (pure) $\mathbb{Z}_{b+1}$-extended $b$-bounded P/T-nets, for all $b\in \mathbb{N}$. To the best of our knowledge, so far, this is the first net family where the provable computational complexity of SSP differs from that of solvability and ESSP. All presented NP-completeness proofs are based on a reduction from the monotone one-in-three 3-SAT problem, which is known to be NP-complete~\cite{DBLP:journals/dcg/MooreR01}. Every reduction starts from a given boolean input expression $\varphi$ and results in an accordingly restricted $g$-grade TS $A$. The expression $\varphi$ belongs to monotone one-in-three 3-SAT if and only if $A$ has the ESSP, has the SSP, or is solvable, depending on which of these properties is queried. The proofs of the announced polynomial time results are based on a generalization of Schmitt's approach~\cite{DBLP:conf/stacs/Schmitt96} that reduces ESSP and SSP to systems of linear equations modulo $b+1$. It exploits the fact that the solvability of such systems is decidable in polynomial time. This paper is organized as follows: Section~\ref{sec:preliminaries} introduces the necessary definitions and illustrates them with examples. Moreover, it also presents some basic results that are used throughout the paper. Section~\ref{sec:unions} introduces the concept of unions used in the proofs of our hardness results. Section~\ref{sec:hardness_results} provides the NP-completeness results and presents the corresponding reductions that prove their validity. Section~\ref{sec:poly_results} provides the announced tractability results. Finally, Section~\ref{sec:conclusion} closes the paper. This paper is an extended version of~\cite{DBLP:conf/apn/Tredup19,DBLP:conf/apn/Tredup19a}. \section{Preliminaries}\label{sec:preliminaries} In this section, we introduce necessary notions and provide some basic results that we use throughout the paper as well as some examples. \begin{definition}[Transition System]\label{def:ts} A (deterministic) \emph{transition system} (TS, for short) $A=(S,E, \delta)$ is a directed labeled graph with states $S$, events $E$ and partial \emph{transition function} $\delta: S\times E \longrightarrow S$, where $\delta(s,e)=s'$ is interpreted as the \emph{edge} $s\edge{e}s'$. For $s\edge{e}s'$, we say that $s$ is a source and $s'$ is a target of $e$. An event $e$ \emph{occurs} at a state $s$, denoted by $s\edge{e}$, if $\delta(s,e)$ is defined. A word $w=e_1\dots e_n\in E^*$ \emph{occurs} at a state $s$, denoted by $s\edge{w}$, if it is the empty word $\varepsilon$ or there are states $q_0,\dots, q_n$ such that $s=q_0$ and $\delta(q_i,e_{i+1})=q_{i+1}$ is defined for all $i\in \{0,\dots,n-1\}$. An \emph{initialized} TS $A=(S,E,\delta, \iota)$ is a TS with a distinguished initial state $\iota \in S$ such that every state $s\in S$ is \emph{reachable} from $\iota$ by a directed labeled path. The language of $A$ is the set $L(A)=\{w\in E^* \mid \iota \edge{w}\}$.
\end{definition} In the remainder of this paper, unless explicitly stated otherwise, we assume all TS to be initialized, and if a TS $A$ is not explicitly defined, then we refer to its components consistently by $S(A)$ (states) and $E(A)$ (events) and $\delta_A$ (transition function) and $\iota_A$ (initial state). \begin{definition}[$g$-grade, linear]\label{def:grade} Let $g\in \mathbb{N}$. A TS $A=(S,E,\delta,\iota)$ is \emph{$g$-grade} if, for every state $s\in S$, the number of incoming and outgoing labeled edges at $s$ is at most $g$: $\vert \{e\in E\mid \edge{e}s\}\vert\leq g$ and $\vert \{e\in E\mid s\edge{e}\}\vert\leq g$. If a TS is $1$-grade and cycle-free, that is, there are pairwise distinct states $s_0,\dots, s_m$ such that $A=s_0\edge{e_1}\dots\edge{e_m}s_m$, then we say $A$ is \emph{linear}; we call $s_m$ the \emph{terminal state} of $A$ and, for all $i < j\in \{1,\dots, m\}$, we say $e_j$ and $s_j$ occur after $e_i$. \end{definition} In this paper, we deal with (different kinds of Petri) nets. Nets have places, transitions, a flow and an initial marking. Places can contain \emph{tokens}. A marking of a net defines for every place $p$ how many tokens it contains; the initial marking specifies this content at the start. The firing of a transition can locally change the content of some places and thus globally change the marking of the net. The flow defines the relations between places and transitions: how many tokens a place must contain to allow the firing of a transition, and in which way the firing of a transition changes the content of a place. Nets are classified by the number of tokens that a place can maximally contain (markings) and by how places and transitions may influence each other (flow). This way of classifying nets leads to infinitely many different classes of nets. In order to deal with these classes in a uniform way, the notion of types of nets has been developed in~\cite{DBLP:series/txtcs/BadouelBD15}: \begin{definition}[Type of nets]\label{def:type_of_nets} A type of nets $\tau$ is a (non-initialized) TS $\tau=(S_\tau, E_\tau,\delta_\tau)$ with $S_\tau\subseteq \mathbb{N}$. \end{definition} Based on this notion, we are now able to define $\tau$-nets, where the states $S_\tau$ of $\tau=(S_\tau,E_\tau,\delta_\tau)$ correspond to possible contents of places, the events $E_\tau$ correspond to possible relations between places and transitions and the partial transition function $\delta_\tau$ describes how the contents of places can be changed by the firing of a transition and, moreover, which contents can inhibit such a firing: \begin{definition}[$\tau$-Nets]\label{def:tau_nets} Let $\tau=(S_\tau, E_\tau, \delta_\tau)$ be a type of nets. A Petri net $N = (P, T, M_0, f)$ of type $\tau$ ($\tau$-net, for short) is given by finite and disjoint sets $P$ of places and $T$ of transitions, an initial marking $M_0: P\longrightarrow S_\tau$, and a (total) flow function $f: P \times T \rightarrow E_\tau$. A $\tau$-net realizes a certain behavior by firing sequences of transitions: A transition $t \in T$ can fire in a marking $M: P \longrightarrow S_\tau$ if $\delta_\tau(M(p), f(p,t))$ is defined for all $p\in P$. By firing, $t$ produces the next marking $M' : P\longrightarrow S_\tau$ where $M'(p)=\delta_\tau(M(p), f(p,t))$ for all $p\in P$. This is denoted by $M \edge{t} M'$. Given a $\tau$-net $N=(P, T, M_0, f)$, its behavior is captured by a transition system $A_N$, called the reachability graph of $N$.
The state set of $A_N$ is the reachability set $RS(N)$, that is, the set of all markings that, starting from initial state $M_0$, are reachable by firing a sequence of transitions. For every reachable marking $M$ and transition $t \in T$ with $M \edge{t} M'$ the state transition function $\delta_{A_N}$ of $A_N$ is defined by $\delta_{A_N}(M,t) = M'$. \end{definition} Let $b\in \mathbb{N}^+$ be arbitrary but fixed. In this paper, the following types of ($b$-bounded Petri) nets are the subject of our investigations: \begin{definition}[$\tau_{PT}^b$] The type of \emph{$b$-bounded P/T-nets} $\tau_{PT}^b=(S_{\tau_{PT}^b}, E_{\tau_{PT}^b}, \delta_{\tau_{PT}^b})$ has the state set $S_{\tau_{PT}^b}=\{0,\dots, b\}$ and the event set $E_{\tau_{PT}^b}=\{0,\dots, b\}^2$ and, for all $s\in S_{\tau_{PT}^b}$ and all $(m,n)\in E_{\tau_{PT}^b}$, the transition function is defined by $\delta_{\tau_{PT}^b}(s,(m,n))=s-m+n$ if $s\geq m$ and $ s-m+n \leq b$, and undefined otherwise. \end{definition} \begin{definition}[$\tau_{PPT}^b$] The type $\tau_{PPT}^b=(S_{\tau_{PPT}^b}, E_{\tau_{PPT}^b}, \delta_{\tau_{PPT}^b})$ of \emph{pure $b$-bounded P/T-nets} is a restriction of $\tau_{PT}^b$ that discards all events $(m,n)$ from $E_{\tau_{PT}^b}$ where both $m$ and $n$ are positive. To be exact, $S_{\tau_{PPT}^b}=S_{\tau_{PT}^b}$ and $E_{\tau_{PPT}^b}=E_{\tau_{PT}^b} \setminus \{(m,n) \mid 1 \leq m,n \leq b\}$ and, for all $s\in S_{\tau_{PPT}^b}$ and all $e\in E_{\tau_{PPT}^b}$, we have $\delta_{\tau_{PPT}^b}(s,e)=\delta_{\tau_{PT}^b}(s,e)$. \end{definition} \begin{definition}[$\tau_{\mathbb{Z}PT}^b$] \begin{spacing}{1.06} The type $\tau_{\mathbb{Z}PT}^b=(S_{\tau_{\mathbb{Z}PT}^b}, E_{\tau_{\mathbb{Z}PT}^b}, \delta_{\tau_{\mathbb{Z}PT}^b})$ of \emph{$\mathbb{Z}_{b+1}$-extended $b$-bounded P/T-nets} originates from $\tau_{PT}^b$ by extending the event set $E_{\tau_{PT}^b}$ with the elements $0,\dots, b$. The transition function additionally simulates the addition modulo (b+1). More exactly, $S_{\tau_{\mathbb{Z}PT}^b}=S_{\tau_{PT}^b}$ and $E_{\tau_{\mathbb{Z}PT}^b}=(E_{\tau_{PT}^b}\setminus \{(0,0)\}) \cup \{0,\dots, b\}$ and, for all $s\in S_{\tau_{\mathbb{Z}PT}^b}$ and all $e\in E_{\tau_{\mathbb{Z}PT}^b}$ we have that $\delta_{\tau_{\mathbb{Z}PT}^b}(s,e)=\delta_{\tau_{PT}^b}(s,e)$ if $e\in E_{\tau_{PT}^b}$, else $\delta_{\tau_{\mathbb{Z}PT}^b}(s,e)=(s+e) \text{ mod } (b+1)$. \end{spacing}\vspace*{-2mm} \end{definition} \begin{definition}[$\tau_{\mathbb{Z}PPT}^b$] The type $\tau_{\mathbb{Z}PPT}^b=(S_{\tau_{\mathbb{Z}PPT}^b}, E_{\tau_{\mathbb{Z}PPT}^b}, \delta_{\tau_{\mathbb{Z}PPT}^b})$ of pure \emph{$\mathbb{Z}_{b+1}$-extended $b$-bounded P/T-nets} is a restriction of $\tau_{\mathbb{Z}PT}^b$ such that $S_{\tau_{\mathbb{Z}PPT}^b}=S_{\tau_{\mathbb{Z}PT}^b}$ and $E_{\tau_{\mathbb{Z}PPT}^b}=E_{\tau_{\mathbb{Z}PT}^b}\setminus \{(m,n) \mid 1 \leq m,n \leq b\}$ and, for all $s\in S_{\tau_{\mathbb{Z}PPT}^b}$ and all $e\in E_{\tau_{\mathbb{Z}PPT}^b}$, the transition function is defined by $\delta_{\tau_{\mathbb{Z}PPT}^b}(s,e)=\delta_{\tau_{\mathbb{Z}PT}^b}(s,e)$. 
\end{definition}
\begin{definition}[$\tau_{R\mathbb{Z}PT}^b$]
\begin{spacing}{1.1}
The type of \emph{restricted $\mathbb{Z}_{b+1}$-extended $b$-bounded P/T-nets} $\tau_{R\mathbb{Z}PT}^b=(S_{\tau_{R\mathbb{Z}PT}^b},E_{\tau_{R\mathbb{Z}PT}^b}, \delta_{\tau_{R\mathbb{Z}PT}^b})$ has the same state set $S_{\tau_{R\mathbb{Z}PT}^b}=S_{\tau_{\mathbb{Z}PT}^b}$ and the same event set $E_{\tau_{R\mathbb{Z}PT}^b}=E_{\tau_{\mathbb{Z}PT}^b}$ as $\tau_{\mathbb{Z}PT}^b$, but a restricted transition function $\delta_{\tau_{R\mathbb{Z}PT}^b}$.
In particular, $\delta_{\tau_{R\mathbb{Z}PT}^b}$ restricts $\delta_{\tau_{\mathbb{Z}PT}^b}$ in such a way that for $s\in S_{\tau_{R\mathbb{Z}PT}^b}$ and $(m,n)\in E_{\tau_{R\mathbb{Z}PT}^b}$ we have that $\delta_{\tau_{R\mathbb{Z}PT}^b}(s,(m,n))=\delta_{\tau_{\mathbb{Z}PT}^b}(s,(m,n))$ if $s=m$; otherwise if $s\not=m$, then $\delta_{\tau_{R\mathbb{Z}PT}^b}(s,(m,n))$ remains undefined.
Hence, every $(m,n)\in E_{\tau_{R\mathbb{Z}PT}^b}$ occurs exactly once in $\tau_{R\mathbb{Z}PT}^b$.
Furthermore, if $(s,e)\in \{0,\dots,b\}^2$ then $\delta_{\tau_{R\mathbb{Z}PT}^b}(s,e)=\delta_{\tau_{\mathbb{Z}PT}^b}(s,e)$.
\end{spacing}
\end{definition}
\begin{figure}\label{fig:types}
\end{figure}
\begin{example}
Figure~\ref{fig:types} sketches $\tau_{PT}^2$ (top) and $\tau_{\mathbb{Z}PT}^2$ (middle).
Events separated by commas label different edges.
Omitting the events $(1,1)$, $(1,2)$, $(2,1)$ and $(2,2)$ and the corresponding edges yields $\tau_{PPT}^2$ and $\tau_{\mathbb{Z}PPT}^2$, respectively.
Moreover, Figure~\ref{fig:types} sketches $\tau_{R\mathbb{Z}PT}^2$ (bottom).
\end{example}
\begin{example}\label{ex:tau_net}
Figure~\ref{fig:admissible_set} sketches the $\tau_{PPT}^2$-net $N_1$ and its reachability graph $A_{N_1}$.
$N_1$ has places $R_1$ and $R_2$, transitions $a$ and $b$ and flow $f(R_1,a)=(1,0)$, $f(R_2,b)=(1,0)$ and $f(R_1,b)=f(R_2,a)=(0,0)$ and initial marking $M_0(R_1)=M_0(R_2)=1$.
The $(0,0)$-labeled edges are omitted.
\end{example}
\begin{figure}
\caption{The TS $A_1$, the $\tau_{PPT}^1$-Net $N_1$ and the reachability graph $A_{N_1}$ of $N_1$.}
\label{fig:admissible_set}
\end{figure}
According to Definition~\ref{def:tau_nets}, for every $\tau$-net $N$, there is always a TS $A_N$ that reflects the global behavior of $N$, namely the corresponding reachability graph.
Moreover, by firing all possible sequences of transitions, the reachability graph $A_N$ can be computed effectively.
Naturally, this raises the question whether a given TS $A$ corresponds to the behavior of a $\tau$-net $N$.
Furthermore, in case of a positive decision, $N$ should be constructed.
This is the subject of the following search problem:
\noindent
\fbox{\begin{minipage}[t][1.7\height][c]{0.97\textwidth}
\begin{searchproblem}
\problemtitle{\textsc{$\tau$-Synthesis}}
\probleminput{A TS $A=(S,E,\delta, \iota)$.}
\problemquestion{Find a $\tau$-net $N$ whose reachability graph is isomorphic to $A$, if it exists.}
\end{searchproblem}
\end{minipage}}
If an input $A=(S,E,\delta,\iota)$ of $\tau$-Synthesis allows a positive decision, then we want to construct a corresponding $\tau$-net $N$ purely from $A$.
Since $A$ and the reachability graph $A_N$ of $N$ shall be isomorphic, the events $E$ of $A$ become transitions of $N$.
The places, the flow function and the initial marking of $N$ originate from so-called $\tau$-regions of $A$.
\begin{definition}[$\tau$-Regions]\label{def:region}
Let $\tau\in \{\tau_{PT}^b, \tau_{PPT}^b, \tau_{\mathbb{Z}PT}^b, \tau_{\mathbb{Z}PPT}^b, \tau_{R\mathbb{Z}PT}^b\}$ and $A=(S,E,\delta,\iota)$ be a TS.
A $\tau$-region of $A$ is a pair $(sup, sig)$ of \emph{support} $sup: S \rightarrow S_\tau $ and \emph{signature} $sig: E\rightarrow E_\tau $ such that for every edge $s \edge{e} s'$ of $A$ the image $sup(s) \ledge{sig(e)} sup(s')$ is present in $\tau$. If $sig(e)=(m,n)$, then we define $sig^-(e)=m$ and $sig^+(e)=n$ and $\vert sig(e)\vert =0$, and if $sig(e)\in \{0,\dots, b\}$, then we define $sig^-(e)=sig^+(e)=0$ and $\vert sig(e)\vert =sig(e)$. \end{definition} A region $(sup, sig)$ models a place $p$ and its initial marking $M_0(p)$ as well as the corresponding part of the flow function $f(p,\cdot)$ of a sought $\tau$-net if it exist. In particular, $sig(e)$ models $f(p,e)$ and $sup(\iota)$ models the number of tokens that $p$ contains initially and, more generally, $sup(s)$ models the number of tokens $M(p)$ in the marking $M$ that corresponds to the state $s$ according to the isomorphism $\varphi$ that justifies $A\cong A_N$. \begin{definition}[Synthesized net]\label{def:synthesized_net} Every set $\mathcal{R} $ of $\tau$-regions of $A=(S,E,\delta,\iota)$ defines the \emph{synthesized $\tau$-net} $N^{\mathcal{R}}_A=(\mathcal{R}, E, f, M_0)$ with set of places $\mathcal{R}$, set of transitions $E$, flow function $f((sup, sig),e)=sig(e)$ and initial marking $M_0((sup, sig))=sup(\iota)$ for all $(sup, sig)\in \mathcal{R}$ and all $e\in E$. \end{definition} To make sure that a synthesized net $N$ realizes the behavior of a TS exactly, distinct states $s$ and $s'$ of $A$ must correspond to different markings $M$ and $M'$ of the net. Moreover, the firing of a transition $e$ needs to be inhibited at a marking $M$, when the event $e$ does not occur at the state $s$ that corresponds to $M$ by the isomorphism $\varphi$. This is stated by so-called separation atoms and separation properties. \begin{definition}[$\tau$-State Separation]\label{def:state_separation} Let $\tau$ be a type of nets and $A=(S,E,\delta,\iota)$ a TS. A pair $(s, s')$ of distinct states of $A$ defines a \emph{state separation atom} (SSA, for short). A $\tau$-region $R=(sup, sig)$ \emph{solves} $(s,s')$ if $sup(s)\not=sup(s')$. The meaning of $R$ is to ensure that $N^{\mathcal{R}}_A$ contains at least one place $R$ such that $M(R)\not=M'(R)$ for the markings $M$ and $M'$ corresponding to $s$ and $s'$, respectively. If $s\in S$ is a state of $A$ and, for all states $s'\in S$ such that $s'\not=s$, there is a $\tau$-region that solves $(s,s')$ then $s$ is called \emph{$\tau$-solvable}. If every state of $A$ or, equivalently, every SSA of $A$ is $\tau$-solvable, then $A$ has the \emph{$\tau$-state separation property} ($\tau$-SSP, for short). \end{definition} \begin{definition}[$\tau$-Event State Separation]\label{def:event_state_separation} Let $\tau$ be a type of nets and $A=(S,E,\delta,\iota)$ a TS. A pair $(e,s)$ of event $e\in E $ and state $s\in S$ where $e$ does not occur at $s$, that is $\neg s\edge{e}$, defines an \emph{event state separation atom} (ESSA atom, for short). A $\tau$-region $R=(sup, sig)$ \emph{solves} $(e,s)$ if $sig(e)$ is not defined at $sup(s)$ in $\tau$, that is, $\neg \delta_\tau(sup(s), sig(e))$. The meaning of $R$ is to ensure that there is at least one place $R$ in $N^{\mathcal{R}}_A$ such that $\delta_\tau(M(R),f(R,e))$ is not defined for the marking $M$ that corresponds to $s$ via the isomorphism, that is, $e$ cannot fire in $M$. If, for all $s\in S$ such that $\neg s\edge{e}$, there is a $\tau$-region that solves $(e,s)$, then $e$ is called \emph{$\tau$-solvable}. 
If every event of $A$ or, equivalently, every ESSA of $A$ is $\tau$-solvable, then $A$ has the \emph{$\tau$-event state separation property} ($\tau$-ESSP, for short). \end{definition} \begin{definition}[Witness, $\tau$-admissible set]\label{def:witness} A set $\mathcal{R}$ of $\tau$-region is a ($\tau$-) \emph{witness} of the $\tau$-(E)SSP of $A$ if it contains for every (E)SSA a $\tau$-region that solves it. If $A$ has the $\tau$-SSP and the $\tau$-ESSP, then $A$ is called $\tau$-solvable. A set $\mathcal{R}$ that is a witness of both the $\tau$-SSP and the $\tau$-ESSP of $A$ is called $\tau$-\emph{admissible}. \end{definition} The following lemma, borrowed from \cite[p.163]{DBLP:series/txtcs/BadouelBD15}, summarizes the already implied connection between the existence of $\tau$-admissible sets of $A$ and (the solvability of) $\tau$-synthesis: \begin{lemma}[\cite{DBLP:series/txtcs/BadouelBD15}]\label{lem:admissible} Let $A$ be a TS and $\tau$ a type of nets. The reachability graph $A_N$ of a $\tau$-net $N$ is isomorphic to $A$ if and only if there is a $\tau$-admissible set $\mathcal{R}$ of $A$ such that $N=N^{\mathcal{R}}_A$. \end{lemma} \begin{example}\label{ex:admissible_set} Let $\tau\in \{\tau_{PPT}^b,\tau_{PT}^b \mid b\in \mathbb{N}^+ \}$. The TS $A_1$ of Figure~\ref{fig:admissible_set} has the $\tau$-ESSP and the $\tau$-SSP: The region $R_1=(sup_1,sig_1)$, which is defined by $sup_1(s_0)=sup_1(s_2)=1$ and $sup_1(s_1)=sup_1(s_3)=0$ and $sig(a)=(1,0)$ and $sig(b)=(0,0)$, solves the ESSA $(a,s_1)$ and $(a,s_3)$ as well as the SSA $(s_0,s_1)$ and $(s_0,s_3)$ and $(s_2,s_1)$ and $(s_2,s_3)$. Moreover, the region $R_2=(sup_2,sig_2)$, which is defined by $sup_2(s_0)=sup_2(s_1)=1$, $sup_2(s_2)=sup_2(s_3)=0$, $sig(a)=(0,0)$ and $sig(b)=(1,0)$ solves the remaining ESSA $(b,s_2)$ and $(b,s_3)$ as well as the SSA $(s_0,s_2)$ and $(s_1,s_3)$ of $A_1$. Since $R_1$ and $R_2$ solve all SSA and ESSA, $\mathcal{R}=\{R_1,R_2\}$ is a $\tau$-admissible set. Figure~\ref{fig:admissible_set} sketches the synthesized net $N_1=N^{\mathcal{R}}_{A_1}$, where $(0,0)$-labeled flow edges are omitted, and its reachability graph $A_{N_1}$. The isomorphism $\varphi$ between $A_1$ and $A_{N_1}$ is given by $\varphi(s_0)=11$, $\varphi(s_1)=01$, $\varphi(s_2)=10$ and $\varphi(s_3)=00$. \end{example} \begin{figure}\label{fig:admissible_set_2} \end{figure} \begin{example}\label{ex:admissible_set_2} The TS $A_2$ of Figure~\ref{fig:admissible_set_2} has no ESSA, since the only event $a$ occurs at every state of $A_2$. Consequently, $A_2$ has the $\tau$-ESSP for all types of nets. However, $A_2$ has the SSA $(s_0,s_1), (s_0,s_2)$ and $(s_1,s_2)$. If $\tau\in \{\tau_{PPT}^b,\tau_{PT}^b \mid b\in \mathbb{N}^+ \}$, then neither of these atoms is $\tau$-solvable, since every $\tau$-region $R=(sup, sig)$ of $A_2$ satisfies $sup(s_0)=sup(s_0)-2sig^-(a)+2sig^+(a)$, which implies $sig(a)=(0,0)$ and thus $sup(s_0)=sup(s_1)=sup(s_2)$. Nevertheless, if $\tau\in \{\tau_{\mathbb{Z}PPT}^b,\tau_{\mathbb{Z}PT}^b \mid b\geq 2 \}$, then $A_2$ has the $\tau$-SSP, since the following $\tau$-Region $R=(sup, sig)$ solves all SSA in one blow: $sup(s_0)=0$, $sup(s_1)=1$ and $sup(s_2)=2$ and $sig(a)=1$. Since $A_2$ has also the $\tau$-ESSP, $\mathcal{R}=\{R\}$ is a $\tau$-admissible set of $A_2$. Figure~\ref{fig:admissible_set_2} sketches the synthesized net $N_2=N^{\mathcal{R}}_{A_2}$ and its reachability graph $A_{N_2}$. 
This example also shows that the group-extended types $\tau_{\mathbb{Z}PPT}$ and $\tau_{\mathbb{Z}PT}$ are strictly more powerful than the types $\tau_{PPT}$ and $\tau_{PT}$. \end{example} A purpose of this paper is to characterize the computational complexity of $\tau$-\textsc{Synthesis} for all introduced $b$-bounded types of nets completely. Since the corresponding complexity classes are defined for decision problems, we restrict our investigations to the decision version of $\tau$-\textsc{Synthesis} that is called $\tau$-\textsc{Solvability}. By Lemma~\ref{lem:admissible}, there is a $\tau$-admissible set $\mathcal{R}$ of $A$ if and only if there is a $\tau$-net $N$ whose reachability graph is isomorphic to $A$. This allows us to formulate the solvability problem for $\tau$-nets as follows: \noindent \fbox{\begin{minipage}[t][1.7\height][c]{0.97\textwidth} \begin{decisionproblem} \problemtitle{\textsc{$\tau$-Solvability}} \probleminput{A TS $A=(S,E,\delta, \iota)$.} \problemquestion{Does there exist a $\tau$-admissible set $\mathcal{R}$ of $A$?} \end{decisionproblem} \end{minipage}} Although we are mainly interested in synthesis, the $\tau$-SSP and the $\tau$-ESSP are also interesting on their own. This is because, for example, an algorithm that decides in polynomial time whether $A$ has the $\tau$-SSP or the $\tau$-ESSP could serve as a pre-synthesis method, which rejects inputs that does not have the property in question. This leads to the following decision problems: \noindent \fbox{\begin{minipage}[t][1.7\height][c]{0.97\textwidth} \begin{decisionproblem} \problemtitle{\textsc{$\tau$-SSP}} \probleminput{A TS $A=(S,E,\delta, \iota)$.} \problemquestion{Does there exist a witness $\mathcal{R}$ for the $\tau$-SSP of $A$?} \end{decisionproblem} \end{minipage}} \noindent \fbox{\begin{minipage}[t][1.7\height][c]{0.97\textwidth} \begin{decisionproblem} \problemtitle{\textsc{$\tau$-ESSP}} \probleminput{A TS $A=(S,E,\delta, \iota)$.} \problemquestion{Does there exist a witness $\mathcal{R}$ for the $\tau$-ESSP of $A$?} \end{decisionproblem} \end{minipage}} In~\cite{DBLP:journals/tcs/BadouelBD97}, it was originally shown that \textsc{$\tau_{PPT}^1$-Solvability} (there referred to as \emph{elementary net synthesis}) is NP-complete. In~\cite{DBLP:conf/apn/TredupRW18,DBLP:conf/concur/TredupR18}, we have shown that this remains true even for strongly restricted inputs and applies also to \textsc{$\tau_{PPT}^1$-SSP} and \textsc{$\tau_{PPT}^1$-ESSP}. Moreover, the type $\tau_{\mathbb{Z}PPT}^1$ coincides with Schmitt's type (\emph{flip-flop nets}) for which the considered decision problems are tractable~\cite{DBLP:conf/stacs/Schmitt96}. In \cite[p.~619]{DBLP:conf/tamc/TredupR19}, this characterization was found to be true for $\tau_{\mathbb{Z}PT}^1$ (there referred to as the Boolean type of nets $\tau=\{\ensuremath{\textsf{nop}},\ensuremath{\textsf{inp}},\ensuremath{\textsf{out}},\ensuremath{\textsf{used}},\ensuremath{\textsf{swap}}\}$) as well. In this paper, we complete the complexity characterization of $\tau$-\textsc{Solvability}, $\tau$-\textsc{SSP} and $\tau$-\textsc{ESSP} for all introduced $b$-bounded types of nets and all $b\in \mathbb{N}^+$. (Observe that the problems are trivial if $b=0$.) Figure~\ref{fig:overview} provides an overview over our findings and shows, depending on $\tau$ and $b$, which of the problems are NP-complete (NPC) and which are solvable in polynomial time (P). 
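For illustration only, the following Python sketch makes the notions of $\tau$-regions and separation atoms concrete for the type $\tau_{PT}^b$; the encoding of a TS as a list of labeled edges and all identifiers are ad-hoc choices made for this sketch and are not part of the formal development.
It checks whether a candidate pair $(sup, sig)$ is a $\tau_{PT}^b$-region and computes the SSA and ESSA solved by it; applied to the TS $A_1$ of Figure~\ref{fig:admissible_set} (with the edges induced by $N_1$) and the region $R_1$ of Example~\ref{ex:admissible_set}, it confirms the solved atoms listed there.
\begin{verbatim}
# Illustrative sketch only: tau_PT^b-regions of a TS given as labeled edges.
def pt_image(b, value, m, n):
    # delta_{tau_PT^b}(value,(m,n)): defined iff value >= m and value-m+n <= b
    if value >= m and value - m + n <= b:
        return value - m + n
    return None

def is_region(b, edges, sup, sig):
    # (sup,sig) is a region iff every edge s --e--> s' maps to an edge of tau_PT^b
    return all(pt_image(b, sup[s], *sig[e]) == sup[t] for (s, e, t) in edges)

def solved_atoms(b, states, edges, sup, sig):
    # SSA and ESSA of the TS that are solved by the region (sup, sig)
    ssa = {(s, t) for s in states for t in states if s != t and sup[s] != sup[t]}
    enabled = {(s, e) for (s, e, _) in edges}
    events = {e for (_, e, _) in edges}
    essa = {(e, s) for e in events for s in states
            if (s, e) not in enabled and pt_image(b, sup[s], *sig[e]) is None}
    return ssa, essa

# The TS A_1 and the region R_1 of the running example (b = 1):
edges = [("s0", "a", "s1"), ("s0", "b", "s2"), ("s1", "b", "s3"), ("s2", "a", "s3")]
states = ["s0", "s1", "s2", "s3"]
sup1 = {"s0": 1, "s1": 0, "s2": 1, "s3": 0}
sig1 = {"a": (1, 0), "b": (0, 0)}
assert is_region(1, edges, sup1, sig1)
print(sorted(solved_atoms(1, states, edges, sup1, sig1)[1]))  # [('a','s1'), ('a','s3')]
\end{verbatim}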
\begin{figure}\label{fig:overview}
\end{figure}
In the following, if not explicitly stated otherwise, for all $\tau\in \{\tau_{PT}^b,\tau_{PPT}^b\}$, we let $b\in \mathbb{N}^+$ and, for all $\tau\in \{\tau_{\mathbb{Z}PT}^b,\tau_{\mathbb{Z}PPT}^b\}$, we let $2\leq b\in \mathbb{N}$ be arbitrary but fixed, since the case $b=1$ is already solved for these types.
The observations of the next lemma are used to simplify our proofs:
\begin{lemma}\label{lem:observations}
Let $\tau \in \{\tau_{PPT}^b, \tau_{PT}^b, \tau_{\mathbb{Z}PT}^b, \tau_{\mathbb{Z}PPT}^b\}$ and $A=(S,E,\delta,\iota)$ be a TS.
\begin{enumerate}
\item\label{lem:sig_summation_along_paths}
Two mappings $sup: S\longrightarrow S_\tau$ and $sig: E\longrightarrow E_\tau$ define a $\tau$-region of $A$ if and only if for every directed labeled path $q_0\edge{e_1}\dots\edge{e_m}q_m$ of $A$ it holds that $sup(q_{i})=sup(q_{i-1})-sig^-(e_i)+sig^+(e_i)+\vert sig(e_i)\vert$ for all $i\in \{1,\dots, m\}$, where this equation is to be considered modulo $b+1$.
In particular, every region $(sup, sig)$ is implicitly completely defined by $sig$ and $sup(\iota)$.
\item\label{lem:absolute_value}
If $s_{0}, s_{1},\dots, s_{b}\in S$, $e\in E$ and $s_{0}\edge{e} \dots \edge{e} s_b$, then a $\tau$-region $(sup, sig)$ of $A$ satisfies $sig(e)= (m,n)$ with $m\not=n$ if and only if $(m,n) \in \{(1,0),(0,1)\}$.
If $sig(e)=(0,1)$ then $sup(s_{0})=0$ and $sup(s_b)=b$.
If $sig(e)=(1,0)$ then $sup(s_0)=b$ and $sup(s_b)=0$.
\end{enumerate}
\end{lemma}
\begin{proof}
(\ref{lem:sig_summation_along_paths}): The first claim follows directly from the definitions of $\tau$ and $\tau$-regions.
For the second claim, we observe that every state $s\in S$ is reachable by a directed labeled path $q_0\edge{e_1}\dots\edge{e_m}q_m$, where $q_0=\iota$ and $q_m=s$.
Thus, if $sup(\iota)$ and a valid signature $sig$ are given, then, by the first claim, we get $sup(s)$ by $sup(s)=sup(\iota)+ \sum_{i=1}^{m} (-sig^-(e_i)+sig^+(e_i)+\vert sig(e_i)\vert)$, where the sum is again taken modulo $b+1$.
(\ref{lem:absolute_value}): The \textit{If}-direction is trivial.
For the \textit{Only-if}-direction we show that the assumption $(m,n)\not\in \{(1,0),(0,1)\}$ yields a contradiction.
By (\ref{lem:sig_summation_along_paths}), we have that $sup(s_b)=sup(s_0) + b\cdot(n-m)$.
If $\vert n-m\vert > 1$, then either $b\cdot(n-m) < - b$ or $b\cdot(n-m) > b$; since $0\leq sup(s_0) \leq b$, the first case contradicts $sup(s_b)\geq 0$, and the latter case contradicts $sup(s_b)\leq b$, respectively.
Hence, if $n\not=m$ then $\vert n-m\vert =1$.
For a start, we show that $ m > n$ implies $m=1$ and $n=0$.
By $n \leq m-1$ and $sup(s_0)\leq b$ we obtain the estimation
\[
sup(s_{b-1}) = sup(s_0) +(b-1)(n-m)\ \leq \ b + (b-1)(m-1-m) = 1
\]
By $n < m \leq sup(s_{b-1})\leq 1$ we have $(m,n)=(1,0)$.
Similarly, we obtain that $(m,n)=(0,1)$ if $m < n$.
Hence, if $sig(e)=(m,n)$ and $n\not=m$ then $sig(e) \in \{(1,0),(0,1)\}$.
The second statement follows directly from (\ref{lem:sig_summation_along_paths}).
\end{proof}
The following lemma shows that if $A$ is a linear TS and $\mathcal{R}$ is a witness of the $\tau$-ESSP of $A$, then $\mathcal{R}$ witnesses also the $\tau$-SSP of $A$.
In particular, this implies that a linear TS $A$ is $\tau$-solvable if and only if it has the $\tau$-ESSP.
Notice that Lemma~\ref{lem:essp_implies_ssp} provides a very general result, since its statement is independent of the actual choice of $\tau$.
\begin{lemma}[ESSP implies SSP for Linear TS]\label{lem:essp_implies_ssp}
Let $\tau$ be a type of nets, let $ A= z_0 \edge{e_1} \dots \edge{e_n} z_n$ be a linear TS and let $\mathcal{R}$ be a set of $\tau$-regions of $A$.
If $\mathcal{R}$ is a witness of the $\tau$-ESSP of $A$, then $\mathcal{R}$ witnesses also the $\tau$-SSP of $A$.
\end{lemma}
\begin{proof}
Let $\mathcal{R}$ be a witness of the $\tau$-ESSP of $A$.
Assume that there is an SSA that cannot be solved by a region of $\mathcal{R}$.
Then there is an SSA $\alpha=(z_{i_j}, z_{i_k})$ of $A$, where $i_j,i_k\in \{0,\dots, n\}$, such that the state $z_{i_k}$ has the maximum index among all states of $A$ that participate in SSA of $A$ that cannot be solved by a region of $\mathcal{R}$: if $(z_{i_\ell}, z_{i_m})$ is an SSA of $A$ that cannot be solved by regions of $\mathcal{R}$, then $i_\ell\leq i_k$ and $i_m\leq i_k$.
In particular, this implies $i_j < i_k$.
Since $i_j < i_k$, there is the edge $z_{i_j}\ledge{e_{i_j+1}}z_{i_j+1}$ in $A$.
Since $\alpha$ is not solvable by regions of $\mathcal{R}$, we have $sup(z_{i_j})=sup(z_{i_k})$ for all $(sup, sig)\in \mathcal{R}$.
This implies that the event $e_{i_j+1}$ occurs at $z_{i_k}$, since the ESSA $(e_{i_j+1}, z_{i_k})$ would not be solvable otherwise: $sup(z_{i_j})\lledge{sig(e_{i_j+1})}$ and $\neg sup(z_{i_k})\lledge{sig(e_{i_j+1})}$ implies the contradiction $sup(z_{i_j})\not=sup(z_{i_k})$.
Hence, $z_{i_k}\ledge{e_{i_j+1}}z_{i_k+1}$ is an edge in $A$.
Since $sup(z_{i_j})=sup(z_{i_k})$ for all $(sup, sig)\in \mathcal{R}$ and since $\delta_\tau$ is a function, by $z_{i_j}\ledge{e_{i_j+1}}z_{i_j+1}$ and $z_{i_k}\ledge{e_{i_j+1}}z_{i_k+1}$, we get $sup(z_{i_j+1})=sup(z_{i_k+1})$ for all $(sup, sig)\in \mathcal{R}$.
In particular, the SSA $(z_{i_j+1}, z_{i_k+1})$ is not solvable by regions of $\mathcal{R}$.
Since $i_k < i_k+1$, this contradicts the choice of $\alpha$.
Consequently, $\alpha$ does not exist and $\mathcal{R}$ witnesses the $\tau$-SSP of $A$.
\end{proof}
\section{The concept of unions}\label{sec:unions}
For our reductions, we use the technique of \emph{component design} \cite{DBLP:books/fm/GareyJ79}.
Every implemented constituent is a TS (in the context of the reduction also referred to as gadget) that locally ensures the satisfaction of some constraints.
Commonly, all constituents are finally joined together in a target instance (TS) such that all required constraints are properly translated globally.
However, the concept of unions spares us the need to actually create the target instance:
\begin{definition}[Union]
If $A_0=(S_0,E_0,\delta_0,\iota_0), \dots, A_n=(S_n,E_n,\delta_n,\iota_n)$ are TS with pairwise disjoint states (but not necessarily disjoint events) then we call $U(A_0, \dots, A_n)$ their \emph{union} with set of states $S(U)=\bigcup_{i=0}^n S_i$ and set of events $E(U)=\bigcup_{i=0}^n E_i$.
\end{definition}
Let $\tau=(S_\tau,E_\tau,\delta_\tau)$ be a type of nets and $U=U(A_0, \dots, A_n)$ a union, where $A_i=(S_i,E_i,\delta_i,\iota_i)$ for all $i\in\{0,\dots,n\}$.
The concepts of SSA, ESSA, $\tau$-regions, $\tau$-SSP, and $\tau$-ESSP as defined in the preliminaries are transferred to $U$ as follows:
\begin{definition}[Region of a Union]
A pair $(sup, sig)$ of mappings $sup: S(U)\rightarrow S_\tau$ and $sig:E(U)\rightarrow E_\tau$ is called a \emph{$\tau$-region} (of $U$), if $s\edge{e}s'\in A_i$ implies $sup(s)\ledge{sig(e)}sup(s')\in \tau$ for all $i\in \{0,\dots, n\}$.
\end{definition}
\begin{definition}[$\tau$-State Separation in Unions]
A pair $(s,s')$ of distinct states $s, s' \in S(U)$ of the \emph{same} TS $A_i$, where $i\in \{0,\dots, n\}$, defines an SSA of $U$.
A $\tau$-region $(sup,sig)$ of $U$ solves $(s,s')$, if $sup(s) \not= sup(s')$.
$U$ has the $\tau$-SSP, if all of its SSA are $\tau$-solvable.
\end{definition}
\begin{definition}[$\tau$-Event State Separation in Unions]
A pair $(e,s)$ of event $e\in E(U)$ and state $s\in S(U)$ such that $\neg s\edge{e}$ defines an ESSA of $U$.
A $\tau$-region $(sup, sig)$ of $U$ solves it, if $\neg \delta_\tau(sup(s), sig(e))$.
$U$ has the $\tau$-ESSP if all of its ESSA are $\tau$-solvable.
\end{definition}
In the same way, the notions of \emph{witness}, \emph{$\tau$-admissible set} and \emph{$\tau$-solvable} are transferred to unions.
From the perspective of $\tau$-SSP and $\tau$-ESSP, unions are intended to treat several unjoined TS as if they were joined into a single TS.
To be able to do so, in the following, we introduce the \emph{linear joining} $LJ(U)$ and the \emph{joining} $J(U)$ of a union $U$ and argue that $LJ(U)$ or $J(U)$ has the $\tau$-(E)SSP if and only if $U$ has the $\tau$-(E)SSP.
\begin{definition}[Linear Joining]
Let $U = U(A_0, \dots, A_n)$ be a union such that, for all $i\in \{0,\dots, n\}$, the TS $A_i=(S_i,E_i,\delta_i,\iota_i)$ is linear and its terminal state is $t_i$ and let $Q=\{q_1,\dots, q_n\}$ be a set of states, which is disjoint with $S(U)$, and $W=\{w_1,\dots, w_n\}$ and $Y=\{y_1,\dots, y_n\}$ be sets of events which are disjoint with $E(U)$.
The \emph{linear joining} of $U$ is the linear TS $LJ(U)=(S(U)\cup Q,E(U)\cup W\cup Y,\delta, \iota_0)$ with transition function $\delta$ that is, for all $e\in E(U)\cup W\cup Y$ and all $s\in S(U)\cup Q$, defined as follows:
\[\delta(s,e)=
\begin{cases}
\delta_i(s,e), &\text{if } s\in S_i \text{ and } e\in E_i \text{ and } i\in \{0,\dots, n\}\\
q_{i+1}, &\text{if } s=t_i \text{ and } e=w_{i+1} \text{ and } i\in \{0,\dots, n-1\}\\
\iota_i, &\text{if } s=q_i \text{ and } e=y_i \text{ and } i\in \{1,\dots, n\}\\
\text{undefined}, &\text{otherwise}
\end{cases}
\]
\end{definition}
\begin{remark}
The linear joining $LJ(U)$ of $U$ can be sketched as follows:
\begin{center}
\begin{tikzpicture}[new set = import nodes]
\begin{scope}[nodes={set=import nodes}]
\node (init) at (-1,0) {$LJ(U)=$};
\foreach \i in {0,...,4} { \coordinate (\i) at (\i*1.4cm,0) ;}
\node (0) at (0) {$A_0$};
\node (1) at (1) {\nscale{$q_1$}};
\node (2) at (2) {$A_1$};
\node (3) at (3) {\nscale{$q_2$}};
\node (4) at (4) {};
\node (5) at (6.2,0) {$\dots$};
\node (6) at (6.8,0) {};
\node (7) at (8.2,0) {\nscale{$q_n$}};
\node (8) at (9.6,0) {$A_n$};
\graph { (import nodes);
0 ->["\escale{$w_1$}"]1->["\escale{$y_1$}"]2 ->["\escale{$w_2$}"]3 ->["\escale{$y_2$}"]4;
6 ->["\escale{$w_n$}"]7->["\escale{$y_n$}"]8;
};
\end{scope}
\end{tikzpicture}
\end{center}
\end{remark}
\begin{definition}[Joining]
Let $U = U(A_0, \dots, A_n)$ be a union of TS $A_i=(S_i,E_i,\delta_i,\iota_i)$ for all $i\in \{0,\dots, n\}$, and let $Q=\{q_0,\dots, q_n\}$ be a set of states, which is disjoint with $S(U)$, and $W=\{w_1,\dots, w_n\}$ and $Y=\{y_0,\dots, y_n\}$ be sets of events, which are disjoint with $E(U)$.
The \emph{joining} of $U$ is the TS $J(U) = (S(U) \cup Q, E(U) \cup W \cup Y , \delta, q_0 )$ with transition function $\delta$ that is, for all $e\in E(U)\cup W\cup Y$ and all $s\in S(U)\cup Q$, defined as follows: \[\delta(s,e) = \begin{cases} \delta_i(s,e), & \text{if } s \in S_i \text{ and } e \in E_i \text{ and } i\in \{0,\dots, n\}\\ q_{i+1}, & \text{if } s = q_i \text{ and } e=w_{i+1} \text{ and } i\in \{0,\dots, n-1\}\\ \iota_i, & \text{if } s = q_i \text{ and } e=y_i \text{ and } i\in \{0,\dots, n\}\\ \text{undefined}, &\text{otherwise} \end{cases} \] \end{definition} \begin{remark} The joining $J(U)$ of $U$ can be sketched as follows: \begin{center} \begin{tikzpicture} \node (t0) at (0,0) {\nscale{$q_0$}}; \node (t1) at (1.5,0) {\nscale{$q_1$}}; \coordinate (t2) at (3,0) ; \node (dots_1) (d1) at (3.5,0) {$\dots$} ; \coordinate (t_n_1) at (4,0) ; \node (tn) at (5.5,0) {\nscale{$q_n$}}; \node (a0) at (0,-1.1) {$A_0$}; \node (a1) at (1.5,-1.1) {$A_1$}; \coordinate (a2) at (3.2,-1.1) ; \coordinate (a_n_1) at (4.5,-1.1) ; \node (an) at (5.5,-1.1) {$A_n$}; \path (t0) edge [->] node[pos=0.5,above] {\escale{$w_1$}} (t1); \path (t1) edge [->] node[pos=0.5,above] {\escale{$w_2$}} (t2); \path (t_n_1) edge [->] node[pos=0.5,above] {\escale{$w_n$}} (tn); \path (t0) edge [->] node[pos=0.5,left] {\escale{$y_0$}} (a0); \path (t1) edge [->] node[pos=0.5,left] {\escale{$y_1$}} (a1); \path (tn) edge [->] node[pos=0.5,left] {\escale{$y_n$}} (an); \end{tikzpicture} \end{center} \end{remark} The following lemma proves the announced functionality of unions. For technical reasons, we restrict ourselves to unions $U$ where for every event $e\in E(U)$ there is at least one ESSA $(e,s)$ to solve. The unions of our reductions satisfy this property, which is used to ensure that if $U$ has the $\tau$-ESSP, then $LJ(U)$ and $J(U)$ have the $\tau$-ESSP, too. Moreover, our reductions ensure that if $\tau\in\{\tau_{PT}^b,\tau_{PPT}^b\}$, then only the linear joining $LJ(U)$ is to consider, and if $\tau\in\{\tau_{\mathbb{Z}PT}^b,\tau_{\mathbb{Z}PPT}^b\}$, then only the joining $J(U)$ is used. Thus, for the sake of simplicity, the lemma is formulated accordingly. \begin{lemma}\label{lem:joining} \break \vspace*{-7mm} \begin{enumerate} \item Let $U = U(A_0, \dots, A_n)$ be a union of linear TS such that $t_i$ is the terminal state of $A_i=(S_i,E_i,\delta_i,\iota)$ for all $i\in \{0,\dots, n\}$, and for every event $e\in E(U)$ there is a state $s\in S(U)$ with $\neg s\edge{e}$. If $\tau\in \{\tau_{PT}^b, \tau_{PPT}^b\}$, then $U$ has the $\tau$-ESSP, respectively the $\tau$-SSP, if and only if $LJ(U)$ has the $\tau$-ESSP, respectively the $\tau$-SSP. \item Let $U = U(A_0, \dots, A_n)$ be a union such that $A_i=(S_i,E_i,\delta_i, \iota_i)$ for all $i\in \{0,\dots, n\}$, and for every event $e\in E(U)$ there is a state $s\in S(U)$ with $\neg s\edge{e}$. If $\tau \in \{\tau_{\mathbb{Z}PT}^b, \tau_{\mathbb{Z}PPT}^b\}$, then $U$ has the $\tau$-ESSP, respectively the $\tau$-SSP, if and only if $J(U)$ has the $\tau$-ESSP, respectively the $\tau$-SSP. \end{enumerate} \end{lemma} \begin{proof} (1): The \emph{if}-direction is trivial. \emph{Only-if}: Let $R=(sup, sig)$ be a $\tau$-region of $U$, which solves an ESSA $(a,z)$ or an SSA $(z,z')$ of $U$. 
We can extend $R$ to a $\tau$-region $R'=(sup', sig')$ of $LJ(U)$ that also solves these atoms, by defining $R'$ for all $s\in S(U)\cup Q$ and all $e\in E(U)\cup W\cup Y$ as follows:
\begin{align*}
sup'(s) &=
\begin{cases}
sup(s), & \text{if } s \in S(U),\\
sup(z), & \text{if } s \in Q
\end{cases}\\
sig'(e) &=
\begin{cases}
sig(e), & \text{if } e \in E(U),\\
(sup(t_i)-sup(z),0) & \text{if } e = w_{i+1} \text{ and } sup(t_i) > sup(z) \text{ and } i\in \{0,\dots, n-1\} \\
(0, sup(z)-sup(t_i)) & \text{if } e = w_{i+1} \text{ and } sup(t_i) \leq sup(z) \text{ and } i\in \{0,\dots, n-1\} \\
(0, sup(\iota_i)-sup(z)) & \text{if } e = y_i \text{ and } sup(\iota_i) > sup(z) \text{ and } i\in \{1,\dots, n\} \\
(sup(z)-sup(\iota_i),0) & \text{if } e = y_i \text{ and } sup(\iota_i) \leq sup(z) \text{ and } i\in \{1,\dots, n\} \\
\end{cases}
\end{align*}
Notice that this extension also $\tau$-solves $(a, q_i)$ for all $i\in \{1,\dots, n\}$, since $sup(q_i)=sup(z)$.
Since, for every event $e\in E(U)$, there is an atom $(e,s)$ to solve, this implies that all events of $U$ are $\tau$-solvable in $LJ(U)$.
Moreover, it is easy to see that the connector states $q_1,\dots, q_n$ and the connector events $y_1,\dots, y_n$ are $\tau$-solvable:
If $i\in \{1,\dots, n\}$ is arbitrary but fixed then the following region $R=(sup, sig)$ (by Lemma~\ref{lem:observations}, completely defined) $\tau$-solves $q_i$ and $y_i$: $sup(\iota_0)=b$; for all $e\in E(LJ(U))$, if $e=y_i$, then $sig(e)=(0,b)$; if $e=w_i$, then $sig(e)=(b,0)$; otherwise $sig(e)=(0,0)$.
So far, we have already proven that if $U$ has the $\tau$-SSP, then $LJ(U)$ has the $\tau$-SSP, too.
Thus, to prove that the $\tau$-ESSP of $U$ implies the $\tau$-ESSP of $LJ(U)$, it remains to show that $w_1,\dots, w_n$ are solvable if $U$ has the $\tau$-ESSP.
Let $i\in \{1,\dots, n\}$ be arbitrary but fixed.
The following region $R=(sup, sig)$ solves $(w_i,s)$ for all $s\in S(LJ(U))\setminus S_{i-1}$: if $i=1$, then $sup(\iota_0)=0$, otherwise $sup(\iota_0)=b$; for all $e\in E(LJ(U))$, if $i\not=1$ and $e=y_{i-1}$, then $sig(e)=(b,0)$; if $e=w_i$, then $sig(e)=(0,b)$; otherwise, $sig(e)=(0,0)$.
It remains to argue that $(w_i,s)$ is $\tau$-solvable for all $s\in S_{i-1}$.
Since $U$ has the $\tau$-ESSP, there is a set $\mathcal{R}$ of regions that witnesses the $\tau$-ESSP.
In particular, for every ESSA $(e,s)$ of $A_{i-1}$ there is a region of $\mathcal{R}$ that solves it.
Restricting the corresponding regions to $A_{i-1}$ yields a set of regions that witnesses the $\tau$-ESSP of $A_{i-1}$.
Since $A_{i-1}$ is linear, by Lemma~\ref{lem:essp_implies_ssp}, these regions witness also the $\tau$-SSP of $A_{i-1}$.
Consequently, for every state $s\in S_{i-1}\setminus\{t_{i-1}\}$, there is a region $(sup, sig)\in \mathcal{R}$ such that $sup(s)\not=sup(t_{i-1})$.
We extend this region to a region of $LJ(U)$ that solves $(w_i, s)$ as follows: Apart from $w_i$, $q_i$ and $y_i$, the extension of $(sup, sig)$ is defined as $R'$ above; if $sup(s)>sup(t_{i-1})$, then $sup(q_i)=b$, otherwise $sup(q_i)=0$; if $sup(q_i)=b$, then $sig(w_i)=(0, b-sup(t_{i-1}))$; otherwise $sig(w_i)=(sup(t_{i-1}), 0)$; finally, if $sup(q_i)=b$, then $sig(y_i)=(b-sup(\iota_i), 0)$; otherwise $sig(y_i)=(0,sup(\iota_i))$.
(2): \emph{If}: Again, the \emph{if}-direction is trivial.
\emph{Only-if}: Let $R=(sup, sig)$ be a $\tau$-region of $U$, which solves an ESSA $(a,z)$ or an SSA $(z,z')$ of $U$.
We can extend $R$ to a $\tau$-region $R'=(sup', sig')$ of $J(U)$ that also solves these atoms, by defining $R'$ for all $s\in S(U)\cup Q$ and all $e\in E(U)\cup W\cup Y$ as follows:
\begin{align*}
sup'(s) &=
\begin{cases}
sup(s), & \text{if } s \in S(U),\\
sup(z), & \text{if } s \in Q
\end{cases}\\
sig'(e) &=
\begin{cases}
sig(e), & \text{if } e \in E(U),\\
0, & \text{if } e \in W ,\\
(sup(z) - sup(\iota_i),0) & \text{if } e = y_i \text{ and } sup(\iota_i) < sup(z) \text{ and } i\in \{0,\dots, n\} \\
( 0, sup(\iota_i ) - sup(z)) & \text{if } e = y_i \text{ and } sup(\iota_i) > sup(z) \text{ and } i\in \{0,\dots, n\} \\
0 & \text{if } e = y_i \text{ and } sup(\iota_i) = sup(z) \text{ and } i\in \{0,\dots, n\}
\end{cases}
\end{align*}
Notice that the third and the fifth case use the group event $0$, since $(0,0)$ is not an event of the $\mathbb{Z}_{b+1}$-extended types.
Notice further that $R'$ also solves $(a,q_i)$ for all $i\in \{0,\dots, n\}$ as $sup(q_i)=sup(z)$.
Consequently, since there is at least one state $s\in S(U)$ for every event $e\in E(U)$ such that $(e,s)$ is an ESSA of $U$, the atom $(e,q_i)$ is solvable for every $e\in E(U)$ and every $i\in \{0,\dots, n\}$.
As a result, to prove the $\tau$-(E)SSP for $J(U)$ it remains to argue that the remaining SSA and ESSA at which the states of $Q$ and the events of $W\cup Y$ participate are solvable in $J(U)$.
If $i\in \{0,\dots, n\}$ and $s\in S(J(U))$ and $e\in E(J(U))$ then the following region $(sup, sig)$ simultaneously solves every valid atom $(y_i, \cdot)$, $(q_i,\cdot) $ and $(w_{i+1}, \cdot)$ in $J(U)$ (if the latter exists):
\[
sup(s) =\begin{cases}
0, & \text{if } s=q_i\\
b, & \text{otherwise }
\end{cases}
\qquad
sig(e)=\begin{cases}
(0,b), & \text{if } e = y_i \text{ or } ( i < n \text{ and } e=w_{i+1})\\
(b,0), & \text{if } 1 \leq i \text{ and } e=w_{i}\\
0, & \text{ otherwise} \\
\end{cases}
\]
\vspace*{-7mm}
\end{proof}
\section{NP-completeness results}\label{sec:hardness_results}
The following theorem is the main contribution of this section:
\begin{theorem}\label{the:hardness_results}
\begin{enumerate}
\item\label{the:hardness_results_1}
If $\tau \in \{\tau_{PT}^b, \tau_{PPT}^b\}$, then $\tau$-\textsc{Solvability} and $\tau$-\textsc{ESSP} and $\tau$-\textsc{SSP} are NP-complete, even when restricted to linear TS.
\item\label{the:hardness_results_2}
Let $\tau \in \{\tau_{\mathbb{Z}PT}^b, \tau_{\mathbb{Z}PPT}^b\}$.
For any fixed $g\geq 2$, $\tau$-\textsc{Solvability} and $\tau$-\textsc{ESSP} are NP-complete, even when restricted to $g$-grade TS.
\end{enumerate}
\end{theorem}
For the proof of Theorem~\ref{the:hardness_results}, on the one hand, we have to argue that $\tau$-\textsc{Solvability}, $\tau$-\textsc{ESSP} and $\tau$-\textsc{SSP} are in NP.
This can be seen as follows.
By Definition~\ref{def:state_separation} and Definition~\ref{def:event_state_separation}, a TS $A=(S,E,\delta,\iota)$ has at most $\vert S\vert^2$ SSA and at most $\vert S\vert\cdot \vert E\vert $ ESSA, respectively.
This implies that if a TS $A$ is $\tau$-solvable or has the $\tau$-SSP or the $\tau$-ESSP, then there is a set of $\tau$-regions $\mathcal{R}$ of $A$ of size at most $\vert S\vert^2 + \vert S\vert \cdot \vert E\vert $ that witnesses the corresponding property of $A$.
Consequently, there is a non-deterministic Turing machine that guesses $\mathcal{R}$ in a non-deterministic computation and verifies the validity of $\mathcal{R}$ in (deterministic) polynomial time.
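The deterministic verification step can be made concrete by a small extension of the earlier Python sketch (again purely illustrative; it reuses the helper functions \texttt{is\_region} and \texttt{solved\_atoms} and the ad-hoc encoding introduced there): given a guessed set of candidate regions, one checks that every member is indeed a $\tau_{PT}^b$-region and that every SSA and ESSA of the input is solved by at least one of them, which clearly takes time polynomial in the number of states, events and guessed regions.
\begin{verbatim}
# Illustrative continuation of the earlier sketch: checking a guessed witness.
def is_witness(b, states, edges, regions, essp=True, ssp=True):
    if not all(is_region(b, edges, sup, sig) for (sup, sig) in regions):
        return False
    solved_ssa, solved_essa = set(), set()
    for (sup, sig) in regions:
        ssa, essa = solved_atoms(b, states, edges, sup, sig)
        solved_ssa |= ssa
        solved_essa |= essa
    events = {e for (_, e, _) in edges}
    enabled = {(s, e) for (s, e, _) in edges}
    all_ssa = {(s, t) for s in states for t in states if s != t}
    all_essa = {(e, s) for e in events for s in states if (s, e) not in enabled}
    return (not ssp or all_ssa <= solved_ssa) and (not essp or all_essa <= solved_essa)

# The regions R_1 and R_2 of the running example form a tau_PT^1-admissible set of A_1:
sup2 = {"s0": 1, "s1": 1, "s2": 0, "s3": 0}
sig2 = {"a": (0, 0), "b": (1, 0)}
assert is_witness(1, states, edges, [(sup1, sig1), (sup2, sig2)])
\end{verbatim}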
On the other hand, we have to argue that the decision problems are NP-hard for accordingly restricted input TS.
The NP-hardness proofs are based on polynomial-time reductions of the following decision problem, which has been shown to be NP-complete in~\cite{DBLP:journals/dcg/MooreR01}:
\noindent
\fbox{\begin{minipage}[t][1.9\height][c]{0.97\textwidth}
\begin{decisionproblem}
\problemtitle{\textsc{Cubic Monotone One-In-Three 3-SAT} (\textsc{CM1in33Sat})}
\probleminput{A boolean expression $\varphi=\{\zeta_0,\dots, \zeta_{m-1}\}$ of 3-clauses such that, for all $i\in \{0,\dots, m-1\}$, the clause $\zeta_i=\{X_{i_0}, X_{i_1}, X_{i_2}\}$ contains $3$ distinct non-negated variables, where $i_0 < i_1 < i_2$; every variable $X\in V(\varphi)$ occurs in exactly three distinct clauses, where $V(\varphi)=\bigcup_{i=0}^{m-1}\zeta_i$ denotes the set of all variables of $\varphi$. }
\problemquestion{Does there exist a one-in-three model of $\varphi$, that is, a subset $M\subseteq V(\varphi)$ such that $\vert M\cap\zeta_i\vert =1 $ for all $i\in \{0,\dots, m-1\}$?}
\end{decisionproblem}
\end{minipage}}
Notice that the characterization of the input $\varphi$ implies $\vert V(\varphi)\vert=m$.
The following example provides --up to renaming-- the smallest instance of \textsc{CM1in33Sat} that allows a positive decision:
\begin{example}\label{ex:varphi}
The boolean expression $\varphi=\{\zeta_0,\dots, \zeta_{5}\}$ with clauses $\zeta_0=\{X_0,X_1,X_2\},\ \zeta_1= \{X_0,X_2,X_3\},\ \zeta_2= \{X_0,X_1,X_3\},\ \zeta_3= \{X_2,X_4,X_5\},\ \zeta_4=\{X_1,X_4,X_5\},\ \zeta_5= \{X_3,X_4,X_5\}$ is a well-defined input of \textsc{CM1in33Sat} and has the one-in-three model $M=\{X_0,X_4\}$.
\end{example}
\textbf{General reduction approach.}
Let $\tau\in \{\tau_{PT}^b,\tau_{PPT}^b, \tau_{\mathbb{Z}PT}^b,\tau_{\mathbb{Z}PPT}^b\}$.
For the proof of the NP-hardness of \textsc{$\tau$-Solvability} and \textsc{$\tau$-ESSP} we reduce $\varphi$ to a union $U_\tau$ of gadget TS.
The index $\tau$ emphasizes that the actual shape of the union depends on $\tau$.
In particular, if $\tau\in \{\tau_{PT}^b,\tau_{PPT}^b\}$, then all these TS are linear, and $LJ(U_\tau)$ is a well-defined linear TS.
Otherwise, if $\tau\in \{\tau_{\mathbb{Z}PT}^b,\tau_{\mathbb{Z}PPT}^b\}$, then these gadgets are 2-grade TS where no initial state has an incoming edge, which implies that $J(U_\tau)$ is a 2-grade TS.
In $U_\tau$, the variables of $\varphi$ are represented by events and the clauses of $\varphi$ are represented by paths on which the variables of the clauses occur as events.
More exactly, for every $i\in \{0,\dots, m-1\}$ and clause $\zeta_i=\{X_{i_0}, X_{i_1}, X_{i_2}\}$, the union $U_\tau$ contains (a gadget with) a directed labeled path $P_i=\dots\Edge{X_{i_0}}\dots\Edge{X_{i_1}}\dots\Edge{X_{i_2}}\dots $ on which the variables $X_{i_0}, X_{i_1}$ and $X_{i_2}$ of $\zeta_i$ occur as events.
Moreover, by construction, the union $U_\tau$ provides an ESSA $\alpha$ whose $\tau$-solvability is connected with the existence of a one-in-three model of $\varphi$.
In particular, we build the union $U_\tau$ in a way such that there is a subset $\mathfrak{E}\subseteq E_\tau$ of events of $\tau$ so that the following properties are satisfied: If $R=(sup, sig)$ is a $\tau$-region of $U_\tau$ that solves $\alpha$, then the variable events whose signature belongs to $\mathfrak{E}$ define a one-in-three model of $\varphi$, that is, the set $M=\{X\in V(\varphi)\mid sig(X)\in \mathfrak{E}\}$ satisfies $\vert M\cap \zeta_i\vert =1$ for all $i\in \{0,\dots, m-1\}$.
Hence, if $U_\tau$ has the $\tau$-ESSP, then $\alpha$ is $\tau$-solvable and $\varphi$ allows a positive decision.
Moreover, the construction of $U_\tau$ ensures that if $\varphi$ has a one-in-three model, then $\alpha$ as well as all the other ESSA and SSA of $U_\tau$ are $\tau$-solvable.
Thus, $U_\tau$ has the $\tau$-ESSP if and only if $\varphi$ is one-in-three satisfiable if and only if $U_\tau$ has both the $\tau$-ESSP and the $\tau$-SSP.
Since Lemma~\ref{lem:joining} lifts these implications to the linear joining $LJ(U_\tau)$, if $\tau\in \{\tau_{PT}^b,\tau_{PPT}^b\}$, and to the joining $J(U_\tau)$, if $\tau\in \{\tau_{\mathbb{Z}PT}^b,\tau_{\mathbb{Z}PPT}^b\}$, this proves the NP-hardness of the \textsc{$\tau$-ESSP} and \textsc{$\tau$-Solvability} for accordingly restricted TS.
\eject
Let $\tau\in \{\tau_{PT}^b,\tau_{PPT}^b\}$.
For the proof of the NP-hardness of \textsc{$\tau$-SSP} we reduce $\varphi$ to a union $U$ of linear TS.
Since this union is the same for both $\tau_{PT}^b$ and $\tau_{PPT}^b$, $U$ needs no index.
Using essentially the same approach as just sketched, the union $U$ provides an SSA $\alpha$ that is $\tau$-solvable if and only if $\varphi$ has a one-in-three model.
Moreover, if $\alpha $ is $\tau$-solvable, then $U$ has the $\tau$-SSP.
Consequently, again by Lemma~\ref{lem:joining}, this implies that $LJ(U)$ has the $\tau$-SSP if and only if $\varphi$ has a one-in-three model.
This proves the NP-hardness of \textsc{$\tau$-SSP} for linear inputs.
\subsection{NP-hardness of \textsc{$\tau_{PPT}^b$-solvability} and \textsc{$\tau_{PPT}^b$-ESSP}}\label{sec:tau_ppt_solvability}
\begin{figure}\label{fig:example_for_pure_essp}
\end{figure}\vspace*{-2mm}
In the remainder of this section, unless explicitly stated otherwise, let $\tau=\tau_{PPT}^b$.
In the following, we first introduce the gadgets (TS) of the union $U_\tau$ and the atom $\alpha$.
Figure~\ref{fig:example_for_pure_essp} presents a concrete example of $U_{\tau_{PPT}^2}$, where $\varphi$ corresponds to Example~\ref{ex:varphi}.
Second, we argue that these gadgets collaborate in a way such that if $\alpha$ is $\tau$-solvable, then $\varphi$ has a one-in-three model.
Finally, we show that if $\varphi$ is one-in-three satisfiable, then $U_\tau$ is $\tau$-solvable.
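All gadgets of $U_\tau$ introduced below are linear TS whose event sequences can be generated directly from $\varphi$.
The following Python sketch (illustrative names only; it covers just the variable gadgets $M_i$ and the clause gadgets $T_i$ defined below, the remaining gadgets being analogous) makes this explicit and, in particular, indicates that $U_\tau$ can be computed in time polynomial in the size of $\varphi$ for fixed $b$.
\begin{verbatim}
# Illustrative sketch: event sequences of the gadgets M_i and T_i, generated
# from a CM1in33Sat instance phi (a list of sorted variable triples), fixed b.
def gadget_sequences(phi, b):
    seqs = {}
    for i, (x0, x1, x2) in enumerate(phi):
        seqs["T_" + str(i)] = (["k_2"] + [x0] * b + ["z_" + str(2 * i)]
                               + [x1] * b + ["z_" + str(2 * i + 1)]
                               + [x2] * b + ["k_3"])
    for i, x in enumerate(sorted({x for clause in phi for x in clause})):
        seqs["M_" + str(i)] = ["k_1"] + [x] * b
    return seqs

phi = [("X0", "X1", "X2"), ("X0", "X2", "X3"), ("X0", "X1", "X3"),
       ("X2", "X4", "X5"), ("X1", "X4", "X5"), ("X3", "X4", "X5")]
print(gadget_sequences(phi, 2)["T_0"])
# ['k_2', 'X0', 'X0', 'z_0', 'X1', 'X1', 'z_1', 'X2', 'X2', 'k_3']
\end{verbatim}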
The union $U_\tau$ has the following gadget $H_1$ that provides the announced ESSA $\alpha=(k, h_{1,2b+4})$: \begin{center} \begin{tikzpicture} \node (h0) at (0,0) {\nscale{$h_{1,0}$}}; \node (h1) at (1.4,0) {\nscale{}}; \node (h_2_dots) at (1.6,0) {\nscale{$\dots$}}; \node (h_k_1) at (1.8,0) {}; \node (h_k) at (3.2,0) {\nscale{$h_{1,b}$}}; \node (h_k1) at (4.9,0) {\nscale{$h_{1,b+1}$}}; \node (h_k2) at (6.8,0) {\nscale{$h_{1,b+2}$}}; \node (h_k3) at (8.4,0) {\nscale{}}; \node (h_k4_dots) at (8.75,0) {\nscale{$\dots$}}; \node (h_2k1) at (8.9,0) {\nscale{}}; \node (h_2k2) at (10.5,0) {\nscale{$h_{1,2b+2}$}}; \node (h_2k3) at (12.5,0) {\nscale{$h_{1,2b+3}$}}; \node (h_2k4) at (14.5,0) {\nscale{$h_{1,2b+4}$}}; \node (h_2k5) at (14.5,-1.2) {\nscale{$h_{1,2b+5}$}}; \node (h_2k6) at (12.9,-1.2) {}; \node (h_k5_dots) at (12.7,-1.2) {\nscale{$\dots$}}; \node (h_3k4) at (12.4,-1.2) {}; \node (h_3k5) at (10.8,-1.2) {\nscale{$h_{1,3b+5}$}}; \graph { (h0) ->["\escale{$k$}"] (h1) ; (h_k_1)->["\escale{$k$}"] (h_k) ->["\escale{$y_0$}"] (h_k1)->["\escale{$o_0$}"] (h_k2)->["\escale{$k$}"] (h_k3); (h_2k1)->["\escale{$k$}"] (h_2k2)->["\escale{$y_1$}"] (h_2k3)->["\escale{$y_0$}"] (h_2k4)->["\escale{$o_1$}"] (h_2k5)->[swap, "\escale{$k$}"] (h_2k6); (h_3k4)->[swap, "\escale{$k$}"] (h_3k5); }; \end{tikzpicture} \end{center} For all $j\in \{0,1,2,3\}$, the union $U_\tau$ has the following gadget $D_j$ that provides the event $k_j$: \begin{center} \begin{tikzpicture}[baseline=-2pt] \node at (-1,0) {$D_{j,1}=$}; \foreach \i in {0,...,3} {\coordinate (\i) at (\i*1.5,0);} \foreach \i in {0,...,3} {\node (p\i) at (\i) {\nscale{$d_{j,\i}$}};} \graph { (p0) ->["\escale{$o_0$}"] (p1) ->["\escale{$k_j$}"] (p2) ->["\escale{$o_1$}"] (p3);}; \end{tikzpicture} \end{center} For all $j\in \{0,\dots, 2m-1\}$, the union $U_\tau$ has the following gadgets $F_j$ and $G_j$ that provide the event~$z_j$:\vspace*{-2mm} \begin{center} \begin{tikzpicture} \begin{scope} \node at (-1,0) {$F_j=$}; \foreach \i in {0,...,2} {\coordinate (\i) at (\i*1.5,0);} \foreach \i in {0,...,2} {\node (p\i) at (\i) {\nscale{$f_{j,\i}$}};} \graph { (p0) ->["\escale{$k_0$}"] (p1) ->["\escale{$z_j$}"] (p2);}; \end{scope} \begin{scope}[xshift=6cm] \node at (-1,0) {$G_j=$}; \foreach \i in {0,...,2} {\coordinate (\i) at (\i*1.5,0);} \foreach \i in {0,...,2} {\node (p\i) at (\i) {\nscale{$g_{j,\i}$}};} \graph { (p0) ->["\escale{$z_j$}"] (p1) ->["\escale{$o_0$}"] (p2);}; \end{scope} \end{tikzpicture} \end{center} For all $i\in \{0,\dots, m-1\}$, the union $U_\tau$ has the following gadget $M_i$, that uses the variable $X_i$ as event:\vspace*{-2mm} \begin{center} \begin{tikzpicture}[scale=0.9] \begin{scope}[yshift=3cm] \node at (-1,0) {$M_i=$}; \node (t0) at (0,0) {\nscale{$m_{i,0}$}}; \node (t1) at (2,0) {\nscale{$m_{i,1}$}}; \node (t2) at (3.75,0) {}; \node (h_2_dots) at (4,0) {\nscale{$\dots$}}; \node (tb) at (4.25,0) {}; \node (tb+1) at (6,0) {\nscale{$m_{i,b+1}$}}; \graph { (t0) ->["\escale{$k_1$}"] (t1) ->["\escale{$X_i$}"] (t2) ; (tb) ->["\escale{$X_i$}"] (tb+1);}; \end{scope} \end{tikzpicture} \end{center} For all $i\in \{0,\dots, m-1\}$, the union $U_\tau$ has the following gadget $T_i$ that uses the elements of $\zeta_i=\{X_{i_0}, X_{i_1}, X_{i_2}\}$ as events: \begin{center} \begin{tikzpicture}[scale=0.9] \begin{scope}[yshift=3cm] \node at (-1,0) {$T_{i}=$}; \node (t0) at (0,0) {\nscale{$t_{i,0}$}}; \node (t1) at (1.75,0) {\nscale{$t_{i,1}$}}; \node (t2) at (3.25,0) {}; \node (h_2_dots) at (3.75,0) {\nscale{$\dots$}}; \node (tb) at (4.1,0) {}; \node 
(tb+1) at (5.75,0) {\nscale{$t_{i,b+1}$}}; \node (tb+2) at (7.75,0) {\nscale{$t_{i,b+2}$}}; \node (tb+3) at (9.5,0) {}; \node (h_k+4_dots) at (9.75,0) {\nscale{$\dots$}}; \node (t2b+1) at (10,0) {}; \node (t2b+2) at (12,0) {\nscale{$t_{i,2b+2}$}}; \node (t2b+3) at (12,-1.2) {\nscale{$t_{i,2b+3}$}}; \node (t2b+4) at (10.1,-1.2) {}; \node (h_k+5_dots) at (9.75,-1.2) {\nscale{$\dots$}}; \node (t3b+2) at (9.5,-1.2) {}; \node (t3b+3) at (7.75,-1.2) {\nscale{$t_{i,3b+3}$}}; \node (t3b+4) at (5.75,-1.2) {\nscale{$t_{i,3b+4}$}}; \graph { (t0) ->["\escale{$k_2$}"] (t1) ->["\escale{$X_{i_0}$}"] (t2) ; (tb) ->["\escale{$X_{i_0}$}"] (tb+1) ->["\escale{$z_{2i}$}"] (tb+2)->["\escale{$X_{i_1}$}"] (tb+3); (t2b+1)->["\escale{$X_{i_1}$}"] (t2b+2)->["\escale{$z_{2i+1}$}"] (t2b+3)->[swap, "\escale{$X_{i_2}$}"] (t2b+4); (t3b+2)->[swap, "\escale{$X_{i_2}$}"] (t3b+3)->[swap, "\escale{$k_3$}"] (t3b+4); ;}; \end{scope} \end{tikzpicture} \end{center} Altogether, \[ U_\tau=U(H_1,D_0,\dots, D_3, F_0,\dots, F_{2m-1}, G_0,\dots, G_{2m-1},M_0,\dots, M_{m-1}, T_0,\dots, T_{m-1}). \] \begin{lemma}\label{lem:tau_ppt_essp_implies_model} If $U_\tau$ has the $\tau$-ESSP, then $\varphi$ has a one-in-three model. \end{lemma} \begin{proof} Since $U_\tau$ has the $\tau$-ESSP, there is a $\tau$-region that solves $\alpha$. Let $R=(sup, sig)$ be such a region. In the following we argue, that the set $\{X\in V(\varphi)\vert sig(X)=(0,1)\}$ or the set $\{X\in V(\varphi)\vert sig(X)=(1,0)\}$ is a one-in-three model of $\varphi$. Since $R$ solves $\alpha$, we have that $sig(k)$ does not occur at $sup(h_{1,2b+4})$. This implies $sig(k)\not=(0,0)$. By Lemma~\ref{lem:observations}, we get $sig(k)\in \{(1,0),(0,1)\}$. In what follows, we let $sig(k)=(0,1)$ and show that $M=\{X\in V(\varphi)\vert sig(X)=(1,0)\}$ defines a one-in-three model of $\varphi$. The arguments for the case $sig(k)=(1,0)$ are quite similar and lead to the fact that $\{X\in V(\varphi)\vert sig(X)=(0,1)\}$ defines a searched model. Let $sig(k)=(0,1)$ and $\neg sup(h_{1,2b+4}) \ledge{sig(k)}$. We argue that this implies $sig(o_0)=sig(o_1)=(b,0)$: For all $s\in \{0,\dots, b-1\}$, the event $(0,1)$ occurs at $s$ in $\tau$. Since $sig(k)$ does not occur at $sup(h_{1,2b+4})$, this implies $sup(h_{1,2b+4})=b$. Moreover, by $sig(k)=(0,1)$ and Lemma~\ref{lem:observations}, we get $sup(h_{1,b})=b$ and $sup(h_{1,b+2})=sup(h_{1,2b+5})=0$. By $sup(h_{1,2b+4})=b$ and $sup(h_{1,2b+5})=0$, we obtain $sig(o_1)=(b,0)$. Moreover, $sup(h_{1,b})=b$ and $h_{1,b}\edge{y_0}$ imply $sig^+(y_0)=0$, and by $sup(h_{1,2b+4})=b$ and $\edge{y_0}h_{1,2b+4}$ imply $sig^-(y_0)=0$. (Recall that $R$ is pure.) Hence, $sig(y_0)=(0,0)$, which implies $sup(h_{1,b+1})=b$. Thus, by $sup(h_{1,b+1})=b$ and $sup(h_{1,b+2})=0$, we obtain $sig(o_0)=(b,0)$. The gadgets $D_0,\dots, D_3$ use the signatures of $o_0$ and $o_1$ to determine the signatures of $k_0,\dots, k_3$. More exactly, $sig(o_0)=sig(o_1)=(b,0)$ implies $sup(d_{j,1})=0$ and $sup(d_{j,2})=b$ for all $j\in \{0,1,2,3\}$. Consequently, this implies $sig(k_0)=\dots=sig(k_3)=(0,b)$. Let $j\in \{0,\dots, 2m-1\}$ be arbitrary but fixed. The gadgets $F_j$ and $G_j$ ensure that $sig(z_j)=(0,0)$: By $sig(o_0)=(b,0)$ and $sig(k_0)=(0,b)$, we get $sup(f_{j,1})=b$ and $sup(g_{j,1})=b$. Since $R$ is pure, that is $sig^+(z_j)=0$ or $sig^-(z_j)=0$, by $f_{j,2}\Edge{z_j}$, we get $sig^-(z_j)\geq sig^+(z_j)$. Similarly, by $\Edge{z_j}g_{j,1}$, we get $sig^+(z_j)\geq sig^-(z_j)$. Consequently, $sig^-(z_j)=sig^+(z_j)$, which implies $sig(z)=(0,0)$, since $R$ is pure. 
Let $i\in \{0,\dots, m-1\}$ be arbitrary but fixed. The gadget $M_i$ ensures for $X_i$ that $sig(X_i)\in \{(1,0), (0,0)\}$: By $sig(k_1)=(0,b)$, we have $sup(m_{i,1})=b$, which implies $sig^-(X_i)\geq sig^+(X_i)$. Since $X_i$ occurs b times in a row at $m_{i,1}$, by Lemma~\ref{lem:observations}, this implies $sig(X_i)\in \{(1,0),(0,0)\}$. The gadget $T_i$ ensures that there is exactly one event $X\in \{X_{i_0}, X_{i_1}, X_{i_2}\}$ such that $sig(X)=(1,0)$: By $sig(k_2)=sig(k_3)=(0,b)$, we have that $sup(t_{i,1})=b$ and $sup(t_{i,3b+3})=0$. Consequently, the image of the sub-path $t_{i,1}\edge{X_{i_1}}\dots\edge{X_{i_2}}t_{i,3b+3}$ under $(sup, sig)$ is a path of $\tau$ that starts at $b$ and terminates at $0$. Hence, there is an event $e$ on this path that satisfies $sig^-(e) > sig^+(e)$. Since $sig(z_{2i})= sig(z_{2i+1})=(0,0)$, we obtain that $e\in \{X_{i_0}, X_{i_1}, X_{i_2}\}$. Moreover, since each of $X_{i_0}, X_{i_1}$ and $X_{i_2}$ occurs b times in a row, if $sig^-(e) > sig^+(e)$, then $sig(e)=(1,0)$. In the following, we argue that if $e\in \{X_{i_0}, X_{i_1}, X_{i_2}\}$ such that $sig(e)=(1,0)$, then $sig(e')\not=(1,0)$ for all $e'\in \{X_{i_0}, X_{i_1}, X_{i_2}\}\setminus\{e\}$. If $sig(X_{i_0})=(1,0)$, then we get $sup(t_{i,b+1})=0$, by Lemma~\ref{lem:observations}. By $sig(z_{2i}) =(0,0)$, this implies $sup(t_{i,b+2})=0$ and $sig^-(X_{i_1})=0$. Thus, by $sig(X_{i_1})\in \{(1,0),(0,0)\}$, we conclude $sig(X_{i_1})=(0,0)$. By $sup(t_{i,b+2})=0$, $sig(X_{i_1})=(0,0)$ and $sig(z_{2i+1})=(0,0)$, we have that $sup(t_{i,2b+3})=0$. This implies $sig^-(X_{i_2})=0$ and, thus, $sig(X_{i_2})=(0,0)$. In particular, we have $sig(X_{i_1})\not=(1,0)$ and $sig(X_{i_2})\not=(1,0)$. If $sig(X_{i_1})=(1,0)$, then we get $sup(t_{i,b+1})=b$ and $sup(t_{i,2b+3})=0$, by Lemma~\ref{lem:observations} and $sig(z_{2i}) =sig(z_{2i+1})=(0,0)$. By $sup(t_{i,b+1})=b$, we get $sig(X_{i_0})\not=(1,0)$. Moreover, just like before, by $sup(t_{i,2b+3})=0$, we have $sig(X_{i_2})\not=(1,0)$. If $sig(X_{i_2})=(1,0)$, then we get $sig(X_{i_0})\not=(1,0)$ and $sig(X_{i_1})\not=(1,0)$, since $sig(X_{i_0})=(1,0)$ or $sig(X_{i_1})=(1,0)$ imply $sig(X_{i_2})\not=(1,0)$, as just discussed. Altogether, we have shown that if $R=(sup, sig)$ is a $\tau$-region that solves $\alpha$ such that $sig(k)=(0,1)$, then, for all $i\in \{0,\dots, m-1\}$, there is exactly one event $e\in \{X_{i_0}, X_{i_1}, X_{i_2}\}$ that satisfies $sig(e)=(1,0)$. As a result, the set $\{X\in V(\varphi)\mid sig(X)=(1,0)\}$ defines a one-in-three model of $\varphi$. It is noteworthy that we use the pureness of $\tau$ only for the functionality of $H_1$ and (by the signature of $o_1$, implicitly) for $D_0,\dots, D_3$. That is, once we have that $sig(k_0)=\dots=sig(k_3)=(0,b)$ and $sig(o_0)=(b,0)$, the arguments for the functionality of the remaining gadgets essentially work also for the (impure) $b$-bounded type $\tau_{PT}^b$. The only difference then is that we can not conclude that $sig(z_j)=(0,0)$, since $sig(z_j)=(m,m)$ would also be possible for $\tau_{PT}^b$. The same is true for $e'\in \{X_{i_0}, X_{i_1}, X_{i_2}\}\setminus\{e\}$ if $e\in \{X_{i_0}, X_{i_1}, X_{i_2}\}$ such that $sig(e)=(1,0)$. However, if $sig(e)=(m,m)$, then $s\edge{e}s'$ implies also $sup(s)=sup(s')$, and that is what actually matters in our arguments. Thus, we will reuse the corresponding gadgets for the type $\tau_{PT}^b$. 
If $sig(k)=(1,0)$ and $sup(h_{1, 2b+4})=0$, then one argues similarly that the set $\{X\in V(\varphi)\mid sig(X)=(0,1)\}$ defines a one-in-three model of $\varphi$.
Altogether, this shows that if $U_\tau$ has the $\tau$-ESSP, which implies the $\tau$-solvability of $\alpha$, then $\varphi$ has a one-in-three model.
\end{proof}
For the opposite direction, we have to prove the following lemma:
\begin{lemma}\label{lem:tau_ppt_model_implies_solvability}
If $\varphi$ has a one-in-three model, then $U_\tau$ has the $\tau$-ESSP and the $\tau$-SSP.
\end{lemma}
For the proof of Lemma~\ref{lem:tau_ppt_model_implies_solvability} it is sufficient to show that if $\varphi$ has a one-in-three model $M$, then $U_\tau$ has the $\tau$-ESSP.
Since all introduced gadgets are linear TS, by Lemma~\ref{lem:essp_implies_ssp}, this implies that $U_\tau$ has the $\tau$-SSP, too.
The brute-force approach of this proof would be to explicitly present for every ESSA of $U_\tau$ a $\tau$-region that solves it.
In fact, for some atoms of $U_\tau$, we need to explicitly present regions that solve them.
In particular, this applies to $(k, h_{1, 2b+4})$.
On the other hand, the gadgets and the events of $U_\tau$ exhibit some regularities that allow us to solve many events homogeneously.
In the following, in order to capture these regularities, we first introduce the notions of consistent and thinly distributed events.
After that we present a lemma that uses these notions and exploits a certain structure of $U_\tau$ to solve most events uniformly.
\begin{definition}[c-consistent]\label{def:c_consistent}
Let $U=U(A_0,\dots, A_n)$ be a union, where $A_i=(S_i,E_i,\delta_i,\iota_i)$ is a linear TS for all $i\in \{0,\dots, n\}$, and let $c\in \mathbb{N}$.
We say an event $e\in E(U)$ is \emph{c-consistent} (in $U$), if the following condition is satisfied for all $i\in \{0,\dots, n\}$: if $s\edge{e}s'\in A_i$, then $e$ always occurs exactly $c$ times in a row in $A_i$, that is, there are pairwise distinct states $s_0,\dots,s_c\in S_i$ with $s,s'\in \{s_0,\dots,s_c\}$ such that $s_0\edge{e}\dots\edge{e} s_c$ and $\neg \edge{e}s_0$ and $\neg s_c\edge{e}$.
\end{definition}
\begin{definition}[thinly distributed]\label{def:thinly_distributed}
Let $U=U(A_0,\dots, A_n)$ be a union, where $A_i=(S_i,E_i,\delta_i,\iota_i)$ is a linear TS for all $i\in \{0,\dots, n\}$, and let $e\in E(U)$ such that $e$ is $c$-consistent for some $c\in\{1,b\}$.
We say $e$ is \emph{thinly distributed} (in $U$) if the following condition is satisfied for all $i\in \{0,\dots, n\}$: if $e\in E_i$, then there is exactly one path (with pairwise distinct states) $s_0\edge{e}\dots\edge{e}s_c$ in $A_i$.
\end{definition}
\begin{example}
Every event of $U_\tau$ is either $b$-consistent as, for example, $k$ and $X_0,\dots, X_{m-1}$, or $1$-consistent as, for example, $o_0$ and $o_1$.
Moreover, the event $o_1$ occurs once at the edge $h_{1,2b+4}\edge{o_1}h_{1,2b+5}$ and, for all $j\in \{0,1,2,3\}$, it occurs once at the edge $d_{j,2}\edge{o_1}d_{j,3}$.
No other gadget of $U_\tau$ applies $o_1$.
Thus, $o_1$ is thinly distributed in $U_\tau$.
Moreover, for all $i\in \{0,\dots, m-1\}$, if $X_i$ occurs in a gadget of $U_\tau$, then it occurs in exactly one block of $b$ consecutive edges in this gadget.
Hence, $X_i$ is thinly distributed.
\end{example}
\begin{lemma}\label{lem:easy_solvability}
Let $\tau\in \{\tau_{PT}^b,\tau_{PPT}^b\}$.
Let $U=U(A_0,\dots, A_n)$ be a union, where $A_i=(S_i,E_i,\delta_i,\iota_i)$ is a linear TS for all $i\in \{0,\dots, n\}$, such that every event $e\in E(U)$ is $1$-consistent or $b$-consistent, and let $a\in E(U)$ be a thinly distributed event and $q\in S_i$ a state such that $\neg q\edge{a}$, where $i\in \{0,\dots, n\}$ is arbitrary but fixed.
If one of the following conditions is satisfied, then there is a $\tau$-region of $U$ that solves $(a,q)$:
\begin{enumerate}
\item\label{lem:easy_solvability_not_or_initial}
$a\not\in E_i$, or $a\in E_i$ and $q$ occurs after $a$;
\item\label{lem:easy_solvability_preceded}
$a\in E_i$ and $a$ occurs after $q$ and there is an event $x\in E_i\setminus \{a\}$ such that $\edge{x}z\edge{a}$ in $A_i$ and
\begin{enumerate}
\item $x$ is thinly distributed and
\item for all $j\in \{0,\dots, n\}$, if $a,x\in E_j$, then $x$ does not occur after $a$ in $A_j$ and
\item if $a$ is $b$-consistent, then $x$ is $1$-consistent.
\end{enumerate}
\end{enumerate}
\end{lemma}
\begin{proof}
(1): The following $\tau$-region $R=(sup, sig)$ solves $(a,q)$: For all $j\in \{0,\dots, n\}$, if $a\in E_j$, then $sup(\iota_j)=0$, otherwise $sup(\iota_j)=b$; for all $e\in E(U)$, if $e=a$ and $a$ is $b$-consistent, then $sig(e)=(0,1)$; if $e=a$ and $a$ is $1$-consistent, then $sig(e)=(0,b)$; otherwise $sig(e)=(0,0)$.
(2): The following $\tau$-region $R=(sup, sig)$ solves $(a,q)$: for all $j\in \{0,\dots, n\}$, if $a\in E_j$ and $x\not\in E_j$, then $sup(\iota_j)=0$, otherwise $sup(\iota_j)=b$; for all $e\in E(U)$, if $e=a$, then $sig(e)=(0,1)$ if $a$ is $b$-consistent, else $sig(e)=(0,b)$; if $e=x$, then $sig(e)=(1,0)$ if $x$ is $b$-consistent, else $sig(e)=(b,0)$; otherwise $sig(e)=(0,0)$.
\end{proof}
Armed with these results, we are now able to provide the proof of Lemma~\ref{lem:tau_ppt_model_implies_solvability}:
\begin{proof}[Lemma~\ref{lem:tau_ppt_model_implies_solvability}]
Let $M$ be a one-in-three model of $\varphi$.
We proceed as follows.
First, we apply Lemma~\ref{lem:easy_solvability} to solve most of $U_\tau$'s ESSA.
After that, we explicitly present $\tau$-regions that solve the remaining atoms and, in particular, solve $\alpha$.
This proves that $U_\tau$ has the $\tau$-ESSP and, by Lemma~\ref{lem:essp_implies_ssp}, also the $\tau$-SSP, that is, $U_\tau$ is $\tau$-solvable.
Let $e\in E(U_\tau)\setminus\{k, y_0, y_1\}$, let $G$ be a gadget of $U_\tau$ and let $s\in S(G)$ such that $\neg s\edge{e}$, where all of $e, G$ and $s$ are arbitrary but fixed.
For a start, we notice that $e$ is thinly distributed.
Moreover, recall that, for all $i\in \{0,\dots, m-1\}$, the clause $\zeta_i=\{X_{i_0}, X_{i_1}, X_{i_2}\}$ satisfies $i_0 < i_1 < i_2$.
Consequently, if $e\not\in \{o_0,o_1\}$ or if $e\in \{o_0,o_1\}$ and $G\not=H_1$, then $e$ satisfies Condition~\ref{lem:easy_solvability_not_or_initial} or Condition~\ref{lem:easy_solvability_preceded} of Lemma~\ref{lem:easy_solvability}.
Thus, by Lemma~\ref{lem:easy_solvability}, the atom $(e,s)$ is $\tau$-solvable.
It remains to argue that the remaining atoms are also $\tau$-solvable.
For convenience, we let $I=\{\iota_G\mid \text{$G$ is a gadget of $U_\tau$}\}$ be the set of the initial states of the gadgets of $U_\tau$.
For a start, we argue for the solvability of $k$.
The following region $R=(sup, sig)$ solves $(k,s)$ for all relevant $s\in S(H_1)$; in particular, it solves $(k, h_{1,2b+4})$.
Figure~\ref{fig:example_for_pure_essp} presents a concrete example of $R$ for the union $U_\tau$ that originates from $\varphi$ of Example~\ref{ex:varphi}.
We start with the support of the initial states: if $s\in \{h_{1,0}, f_{0,0}, \dots, f_{2m-1,0}, m_{0,0},\dots, m_{m-1,0}, t_{0,0}, \dots, t_{m-1,0}\}$, then $sup(s)=0$; if $s\in \{d_{0,0}, \dots, d_{3,0}, g_{0,0}, \dots, g_{2m-1,0}\}$, then $sup(s)=b$. The signature is defined as follows: for all $e\in E(U_\tau)$, if $e=k$, then $sig(e)=(0,1)$; if $e\in \{o_0,o_1\}$, then $sig(e)=(b,0)$; if $e\in \{k_0,\dots, k_3\}$, then $sig(e)=(0,b)$; if $e\in M$, then $sig(e)=(1,0)$; otherwise $sig(e)=(0,0)$.
The following region $R=(sup, sig)$ solves $(k,s)$ for all other relevant states of $U_\tau$: $sup(h_{1,0})=0$; for all $s\in I\setminus\{h_{1,0}\}$, $sup(s)=b$; for all $e\in E(U_\tau)$, if $e=k$, then $sig(e)=(0,1)$; if $e=y_0$, then $sig(e)=(b,0)$; otherwise $sig(e)=(0,0)$. This proves the solvability of $k$.
In the following, we argue that $(o_0,q)$ is solvable for all relevant $q\in S(H_1)$: The following region $R=(sup, sig)$ solves $(o_0,s)$ for all $s\in \{h_{1, b+2}, \dots, h_{1, 3b+5}\}$: for all gadgets $G$ of $U_\tau$, we define $sup(\iota_G)=0$ for $G$'s initial state $\iota_G$; for all $e\in E(U_\tau)$, if $e=o_0$, then $sig(e)=(0,b)$; otherwise $sig(e)=(0,0)$.
The following region $R=(sup, sig)$ solves $(o_0,s)$ for all $s\in \{h_{1,0}, \dots, h_{1, b}\}$: $sup(h_{1,0})=b$; for all $s\in I\setminus\{h_{1,0}\}$, we define $sup(s)=0$; for all $e\in E(U_\tau)$, if $e=o_0$, then $sig(e)=(0,b)$; if $e=y_0$, then $sig(e)=(b,0)$; otherwise $sig(e)=(0,0)$.
Similarly, one argues that $(o_1,q)$ is solvable for all relevant $q\in S(H_1)$. So far, we have proven the solvability of all $e\in E(U_\tau)\setminus\{k,y_0,y_1\}$. It remains to argue for the solvability of $y_0$ and $y_1$.
The following region $R=(sup, sig)$ solves $(y_0,s)$ for all $s\in \{h_{1,0}, \dots, h_{1, b-1}\}$: for all $s\in I$, we define $sup(s)=0$; for all $e\in E(U_\tau)$, if $e=y_0$, then $sig(e)=(b,0)$; if $e=k$, then $sig(e)=(0,1)$; otherwise $sig(e)=(0,0)$.
The following region $R=(sup, sig)$ solves $(y_0,s)$ for all $s\in S(H_1) \setminus \{h_{1,0}, \dots, h_{1, b-1}\}$: $sup(h_{1,0})=b$ and for all $s\in I\setminus\{h_{1,0}\}$, we define $sup(s)=0$; for all $e\in E(U_\tau)$, if $e=y_0$, then $sig(e)=(b,0)$; if $e=y_1$, then $sig(e)=(0,b)$; otherwise $sig(e)=(0,0)$.
It is easy to see that $y_1$ is solvable.
\end{proof}
Altogether, since the construction of $U_\tau$ and thus $A_\tau$ is obviously polynomial, by Lemma~\ref{lem:joining}, Lemma~\ref{lem:tau_ppt_essp_implies_model} and Lemma~\ref{lem:tau_ppt_model_implies_solvability} and the NP-completeness of \textsc{CM1in33Sat}, we have finally proven that \textsc{$\tau_{PPT}^b$-ESSP} and \textsc{$\tau_{PPT}^b$-Solvability} are NP-complete for all $b\in \mathbb{N}^+$.
\subsection{NP-hardness of \textsc{$\tau_{PT}^b$-solvability} and \textsc{$\tau_{PT}^b$-ESSP}}\label{sec:tau_pt_solvability}
In the remainder of this section, unless stated explicitly otherwise, we assume that $\tau=\tau_{PT}^b$.
The union $U_\tau$ has the following TS $H_0$ that provides the ESSA $\alpha=(k, h_{0, 4b+1})$: \begin{center} \begin{tikzpicture} \node (init) at (-0.75,0) {$H_{0}=$}; \node (h0) at (0,0) {\nscale{$h_{0,0}$}}; \node (h1) at (1.5,0) {}; \node (dots1) at (1.75,0) {\nscale{$\dots$}}; \node (h_b_1) at (2,0) {}; \node (h_b) at (3.5,0) {\nscale{$h_{0,b}$}}; \node (h_b+1) at (5,0) {}; \node (dots_2) at (5.25,0) {\nscale{$\dots$}}; \node (h_2b_1) at (5.5,0) {}; \node (h_2b) at (7,0) {\nscale{$h_{0,2b}$}}; \node (h_2b+1) at (9,0) {\nscale{$h_{0,2b+1}$}}; \node (h_2b+2) at (10.5,0) {}; \node (dots_3) at (10.75,0) {\nscale{$\dots$}}; \node (h_3b) at (11,0) {}; \node (h_3b+1) at (12.5,0) { \nscale{$h_{0,3b+1}$} }; \node (h_3b+2) at (12.5,-1.2) {}; \node (dots_4) at (12.25,-1.2) {\nscale{$\dots$}}; \node (h_4b) at (12,-1.2) { }; \node (h_4b+1) at (10.5,-1.2) { \nscale{$h_{0,4b+1}$} }; \node (h_4b+2) at (8.75,-1.2) {}; \node (dots_5) at (8.5,-1.2) {\nscale{$\dots$}}; \node (h_5b) at (8.25,-1.2) { }; \node (h_5b+1) at (6.5,-1.2) { \nscale{$h_{0,5b+1}$} }; \node (h_5b+2) at (4.8,-1.2) { }; \node (dots_5) at (4.5,-1.2) {\nscale{$\dots$}}; \node (h_6b) at (4.25,-1.2) { }; \node (h_6b+1) at (2.5,-1.2) { \nscale{$h_{0,6b+1}$} }; \graph { (h0) ->["\escale{$k$}"] (h1); (h_b_1)->["\escale{$k$}"] (h_b) ->["\escale{$z$}"] (h_b+1); (h_2b_1)->["\escale{$z$}"] (h_2b)->["\escale{$o_0$}"] (h_2b+1)->["\escale{$k$}"] (h_2b+2); (h_3b)->["\escale{$k$}"] (h_3b+1)->["\escale{$z$}"] (h_3b+2); (h_4b)->[swap, "\escale{$z$}"] (h_4b+1)->[swap, "\escale{$o_1$}"] (h_4b+2); (h_5b)->[swap, "\escale{$o_1$}"] (h_5b+1)->[swap, "\escale{$k$}"] (h_5b+2); (h_6b)->[swap, "\escale{$k$}"] (h_6b+1); }; \end{tikzpicture} \end{center} For every $j\in \{0,1,2,3\}$, the union $U_\tau$ has the following gadget $C_j$ that provides $k_{j}$: \begin{center} \begin{tikzpicture}[yshift=-5cm] \node (init) at (-1,0) {$C_j=$}; \foreach \i in {0,...,2} {\coordinate (\i) at (\i*1.55,0);} \foreach \i in {0,...,2} {\node (p\i) at (\i) {\nscale{$c_{j,\i}$}};} \node (p3) at (4.5,0) {}; \node (hdots_3) at (4.9,0) {\nscale{$\dots$}}; \node (db+2) at (5.25,0) {}; \node (db+3) at (6.75,0) { \nscale{$c_{j,b+2}$} }; \graph { (p0) ->["\escale{$o_{0}$}"] (p1) ->["\escale{$k_j$}"] (p2) ->["\escale{$o_{1}$}"] (p3); (db+2) ->["\escale{$o_{1}$}"] (db+3); }; \end{tikzpicture} \end{center} Finally, for all $j\in \{0,\dots, 2m-1\}$ and for all $i\in \{0,\dots, m-1\}$, the union $U_\tau$ has the gadgets $F_j, G_j, M_i$ and $T_i$ as defined in Section~\ref{sec:tau_ppt_solvability}. Altogether, \[ U_\tau=U(H_0,C_0,\dots, C_3,F_0,\dots, F_{2m-1}, G_0,\dots, G_{2m-1},M_0,\dots, M_{m-1},T_0,\dots, T_{m-1}). \] \begin{lemma}\label{lem:tau_pt_essp_implies_model} If $U_\tau$ has the $\tau$-ESSP, then $\varphi$ has a one-in-three model. \end{lemma} \begin{proof} Since $U_\tau$ has the $\tau$-ESSP, there is a $\tau$-region of $U_\tau$ that solves $\alpha$. Let $R=(sup, sig)$ be such a region. In the following, we argue that either $sig(k_0)=\dots=sig(k_3)=(0,b)$ or $sig(k_0)=\dots=sig(k_3)=(b,0)$. As already argued at the end of the proof of Lemma~\ref{lem:tau_ppt_essp_implies_model}, by the functionality of the remaining gadgets, this implies that $\{X\in V(\varphi)\mid sig(X)=(1,0)\}$ or $\{X\in V(\varphi)\mid sig(X)=(0,1)\}$ is a one-in-three model of $\varphi$. Let $ E_{0}=\{ (m,m) \mid 0\le m \leq b\}$. By definition, if $sig(k)=(m,m) \in E_0$ then $sup(h_{0,3b+1})\geq m$ and $sup(h_{0,5b+1})\geq m$. 
Event $(m,m)$ occurs at every state $s\in S_{\tau_{PT}^b}$ that satisfies $s\geq m$. Hence, by $\neg sup(h_{0,4b+1})\ledge{(m,m)}$, we get $sup(h_{0,4b+1}) < m$. Since $sup(h_{0,3b+1}) \geq m$ and $sup(h_{0,4b+1}) < m$, we have $sig^-(z) > sig^+(z)$. Observe that $z$ is $b$-consistent. Thus, by Lemma~\ref{lem:observations}, we have $sig(z)=(1,0)$. Similarly, we get $sig(o_1)=(0,1)$. This immediately implies $sup(h_{0,2b})=0$ and $sup(h_{0,3b+1})=b$. Moreover, by $sig(k)=(m,m)$ and $sup(h_{0,3b+1})=b$ we get $sup(h_{0,2b+1})=b$. By $sup(h_{0,2b})=0$, this implies $sig(o_0)=(0,b)$. Thus, we have $sig(o_0)=(0,b)$ and $sig(o_1)=(0,1)$.
Otherwise, if $sig(k)\not\in E_0$, then Lemma~\ref{lem:observations} ensures $sig(k)\in \{(1,0), (0,1)\}$. If $sig(k)=(0,1)$ then we have $sup(h_{0, 4b+1})=b$, since $s\edge{(0,1)}$ for every state $s\in \{0,\dots, b-1\}$ of $\tau_{PT}^b$. Moreover, again by $sig(k)=(0,1)$ we have $sup(h_{0,b})=sup(h_{0,3b+1})=b$ and $sup(h_{0,2b+1})=sup(h_{0, 5b+1})=0$. By $sup(h_{0, 3b+1})=sup(h_{0,4b+1})=b$ we have $sig(z)\in E_0$, which together with $sup(h_{0,b})=b$ implies $sup(h_{0, 2b})=b$. Thus, by $sup(h_{0,2b})=b$ and $sup(h_{0, 2b+1})=0$, we have $sig(o_0)=(b,0)$. Moreover, by $sup(h_{0, 4b+1})=b$ and $sup(h_{0, 5b+1 })=0$, we conclude $sig(o_1)=(1,0)$. Hence, we have $sig(o_0)=(b,0)$ and $sig(o_1)=(1,0)$. Similar arguments show that $sig(k)=(1,0)$ implies $sig(o_0)=(0,b)$ and $sig(o_1)=(0,1)$.
So far we have argued that if $(sup, sig)$ is a $\tau_{PT}^b$-region of $U_\tau$ that solves $\alpha$, then either $sig(o_0)=(0,b)$ and $sig(o_1)=(0,1)$ or $sig(o_0)=(b,0)$ and $sig(o_1)=(1,0)$. One easily finds out that if $sig(o_0)=(0,b)$ and $sig(o_1)=(0,1)$, then $sup(c_{j,1})=b$ and $sup(c_{j,2})=0$ and thus $sig(k_j)=(b,0)$ for all $j\in \{0,\dots, 3\}$. Similarly, if $sig(o_0)=(b,0)$ and $sig(o_1)=(1,0)$, then $sup(c_{j,1})=0$, $sup(c_{j,2})=b$ and $sig(k_j)=(0,b)$ for all $j\in \{0,\dots,3\}$. By the functionality of $F_j,G_j$, this implies $sig(z_j)\in E_0$. Moreover, by the functionality of $M_i$, this implies that if $sig(k_1)=(0,b)$, then $sig(X_i)\in \{(1,0)\}\cup E_0$ and if $sig(k_1)=(b,0)$, then $sig(X_i)\in \{(0,1), (0,0)\}$ for all $i\in \{0,\dots, m-1\}$. Similar to the arguments for $\tau_{PPT}^b$, one argues that the gadgets $T_0,\dots, T_{m-1}$ then ensure that $\{e\in V(\varphi)\mid sig(e)=(0,1)\}$ or $\{e\in V(\varphi)\mid sig(e)=(1,0)\}$ defines a sought model of $\varphi$. Thus, if $U_\tau$ has the $\tau$-ESSP or is $\tau$-solvable, which implies that $\alpha$ is $\tau$-solvable, then $\varphi$ has a one-in-three model.
\end{proof}
The following lemma is dedicated to the opposite direction:
\begin{lemma}\label{lem:tau_pt_model_implies_solvability}
If $\varphi$ has a one-in-three model, then $U_\tau$ has the $\tau$-ESSP and the $\tau$-SSP.
\end{lemma}
\begin{proof}
In the following, we argue that if $M$ is a one-in-three model of $\varphi$, then $U_\tau$ has the $\tau$-ESSP and thus also the $\tau$-SSP, since all gadgets are linear. Notice that if $e$ is an event and $G$ is a gadget of $U_\tau$ such that $e$ does not occur in $G$, then $(e,s)$ is $\tau$-solvable for all $s\in S(G)$. A solving region $R=(sup, sig)$ is defined as follows: for all gadgets $G'$ of $U_\tau$, if $e\in E(G')$, then $sup(\iota_{G'})=b$, otherwise $sup(\iota_{G'})=0$; for all events $e'\in E(U_\tau)$, if $e'=e$, then $sig(e')=(b,b)$; otherwise $sig(e')=(0,0)$. Thus, in the following, we only argue for valid atoms $(e,s)$ where $e$ and $s$ occur in the same gadget.
Let $e\in E(U_\tau)\setminus\{k, z\}$, let $G$ be a gadget of $U_\tau$ and let $s\in S(G)$ such that $\neg s\edge{e}$, where all of $e, G$ and $s$ are arbitrary but fixed. The event $e$ is thinly distributed. Moreover, if $e\not\in \{o_0,o_1\}$ or if $e\in \{o_0,o_1\}$ and $G\not=H_0$, then $e$ satisfies Condition~\ref{lem:easy_solvability_not_or_initial} or Condition~\ref{lem:easy_solvability_preceded} of Lemma~\ref{lem:easy_solvability}. Thus, by Lemma~\ref{lem:easy_solvability}, in these cases, the atom $(e,s)$ is $\tau$-solvable. For convenience, let $I=\{\iota_G\mid \text{$G$ is a gadget of $U_\tau$}\}$ be the set of the initial states of the gadgets of $U_\tau$. To complete the proof for the solvability of $o_0$ and $o_1$, it remains to argue that $(o_0, s)$ and $(o_1,s')$ are solvable for all relevant $s,s'\in S(H_0)$: By Lemma~\ref{lem:easy_solvability}.\ref{lem:easy_solvability_not_or_initial}, the atoms $(o_0,s)$ and $(o_1,s')$ are solvable for all $s\in \{h_{0, 2b+1},\dots, h_{0,6b+1}\}$ and for all $s'\in \{h_{0, 5b+1},\dots, h_{0,6b+1}\}$. The following region $R=(sup, sig)$ solves $(o_0,s)$ for all $s\in \{h_{0,0},\dots, h_{0,2b-1}\}$: $sup(h_{0,0})=b$; for all $s\in I\setminus\{h_{0,0}\}$, we define $sup(s)=0$; for all $e\in E(U_\tau)$, if $e=o_0$, then $sig(e)=(0,b)$; if $e=z$, then $sig(e)=(1,0)$; otherwise $sig(e)=(0,0)$. The following region $R=(sup, sig)$ solves $(o_1,s)$ for all $s\in \{h_{0,0},\dots, h_{0,4b}\}\setminus\{h_{0,2b}\}$ and uses the model $M$ of $\varphi$: We start with the support of the initial states: $sup(h_{0,0})=0$; if $s\in \{f_{0,0}, \dots, f_{2m-1,0}, m_{0,0},\dots, m_{m-1,0}, t_{0,0}, \dots, t_{m-1,0}\}$, then $sup(s)=0$; if $s\in \{c_{0,0}, \dots, c_{3,0}\}\cup\{g_{0,0}, \dots, g_{2m-1,0}\}$, then $sup(s)=b$. The signature is defined as follows: for all $e\in E(U_\tau)$, if $e=o_1$, then $sig(e)=(b,b)$; if $e=z$, then $sig(e)=(0,1)$; if $e=o_0$, then $sig(e)=(b,0)$; if $e\in \{k_0,\dots, k_3\}$, then $sig(e)=(0,b)$; if $e\in M$, then $sig(e)=(1,0)$; otherwise $sig(e)=(0,0)$. The following region $R=(sup, sig)$ solves $(o_1,h_{0,2b})$: $sup(s)=0$ for all $s\in I$; for all $e\in E(U_\tau)$, if $e=o_1$, then $sig(o_1)=(b,b)$; if $e=o_0$, then $sig(e)=(0,b)$; otherwise $sig(e)=(0,0)$. This proves the solvability of $o_1$. Since $z$ occurs only in $H_0$, for the solvability of $z$, it remains to argue that $(z,s)$ is $\tau$-solvable for all relevant $s\in S(H_0)$. The following region $R=(sup, sig)$ does this for all $s\in S(H_0)\setminus\{h_{0,6b+1}\}$ and uses the model $M$ of $\varphi$. Moreover, this region also solves $(k,s)$ for all $s\in S(H_0)$ and, thus, proves the solvability of $k$: if $s\in \{h_{0,0}, f_{0,0}, \dots, f_{2m-1,0}, m_{0,0},\dots, m_{m-1,0}, t_{0,0}, \dots, t_{m-1,0}\}$, then $sup(s)=0$; if $s\in \{c_{0,0}, \dots, c_{3,0}, g_{0,0}, \dots, g_{2m-1,0}\}$, then $sup(s)=b$; for all $e\in E(U_\tau)$, if $e=z$, then $sig(z)=(b,b)$; if $e=k$, then $sig(e)=(0,1)$; if $e\in \{o_0,o_1\}$, then $sig(e)=(b,0)$; if $e\in \{k_0,\dots, k_3\}$, then $sig(e)=(0,b)$; if $e\in M$, then $sig(e)=(1,0)$; otherwise $sig(e)=(0,0)$. One easily finds that $(z, h_{0,6b+1})$ is $\tau$-solvable. Altogether, this proves that if $\varphi$ is one-in-three satisfiable, then $U_\tau$ has the $\tau$-ESSP. Since all gadgets are linear, this completes the proof. 
\end{proof} \subsection{NP-hardness of \textsc{$\tau_{PPT}^b$-SSP} and \textsc{$\tau_{PT}^b$-SSP}}\label{sec:tau_ppt_tau_pt_ssp} In the remainder of this section, unless stated explicitly otherwise, let $\tau\in \{\tau_{PPT}^b, \tau_{PT}^b\}$ be arbitrary but fixed. The union $U_\tau$ has the following gadget $H_2$ that provides the atom $\alpha = (h_{2,0}, h_{2,b})$: \begin{center} \begin{tikzpicture} \node at (-1,0) {$H_2=$}; \node (h0) at (-0.25,0) {\nscale{$h_{2,0}$}}; \node (h1) at (1,0) {\nscale{}}; \node (hdots_1) at (1.25,0) {\nscale{$\dots$}}; \node (hb_1) at (1.5,0) {}; \node (hb) at (2.75,0) {\nscale{$h_{2,b}$}}; \node (hb+1) at (4.5,0) {\nscale{$h_{2,b+1}$}}; \node (hb+2) at (6,0) {}; \node (hdots_2) at (6.25,0) {\nscale{$\dots$}}; \node (h2b) at (6.5,0) {}; \node (h2b+1) at (8,0) {\nscale{$h_{2,2b+1}$}}; \node (h2b+2) at (10,0) {\nscale{$h_{2,2b+2}$}}; \node (h2b+3) at (11.5,0) {}; \node (hdots_3) at (11.75,0) {\nscale{$\dots$}}; \node (h3b+1) at (12,0) {}; \node (h3b+2) at (13.5,0) {\nscale{$h_{2,3b+2}$}}; \graph { (h0) ->["\escale{$k$}"] (h1); (hb_1)->["\escale{$k$}"] (hb) ->["\escale{$o_0$}"] (hb+1)->["\escale{$k$}"] (hb+2); (h2b) ->["\escale{$k$}"] (h2b+1)->["\escale{$o_2$}"] (h2b+2)->["\escale{$k$}"] (h2b+3); (h3b+1) ->["\escale{$k$}"] (h3b+2); }; \end{tikzpicture} \end{center} Moreover, the union $U_\tau$ has every gadget that has been defined for $U_{\tau_{PPT}^b}$ in Section~\ref{sec:tau_ppt_solvability} except for $H_1$. Altogether, $U_\tau$ is defined as follows: \[ U_\tau=U(H_2,D_0,\dots, D_3, F_0,\dots, F_{2m-1}, G_0,\dots, G_{2m-1},M_0,\dots, M_{m-1}, T_0,\dots, T_{m-1}). \] \begin{lemma}\label{lem:tau_ppt_tau_pt_ssp_implies_model} If $U_\tau$ has the $\tau$-SSP, then $\varphi$ has a one-in-three model. \end{lemma} \begin{proof} Since $U_\tau$ has the $\tau$-SSP, there is a $\tau$-region that solves $\alpha$. Let $R=(sup, sig)$ be such a region. We argue that the signature of the variable events define a sought model of $\varphi$: The event $k$ occurs $b$ times in a row at $h_{2,0}$. Thus, by Lemma~\ref{lem:observations}, a region $(sup, sig)$ solving $(h_{2,0}, h_{2,b})$ satisfies $sig(k)\in \{(1,0), (0,1)\}$. If $sig(k)=(1,0)$, then $sup(h_{2,b})=sup(h_{2,2b+1})=b$ and $sup(h_{2,b+1})=sup(h_{2,2b+2})=0$. This implies $sig(o_0)=sig(o_2)=(b,0)$ and, thus, $sig(k_j)=(0,b)$ for all $j\in \{0,\dots, 3\}$. Otherwise, if $sig(k)=(0,1)$ then $sup(h_{2,b})=sup(h_{2,2b+1})=0$ and $sup(h_{2,b+1})=sup(h_{2,2b+2})=b$. This implies $sig(o_0)=sig(o_2)=(0,b)$ and $sig(k_j)=(b,0)$ for all $j\in \{0,\dots, 3\}$. Just like before, this proves the one-in-three satisfiability of $\varphi$. \end{proof} The following lemma addresses the opposite direction: \begin{lemma}\label{lem:tau_ppt_tau_pt_model_implies_ssp} If $\varphi$ has a one-in-three model, then $U_\tau$ has the $\tau$-SSP. \end{lemma} \begin{proof} Let $M$ be a one-in-three model of $\varphi$. We briefly argue, that $U_\tau$ has the $\tau$-SSP. For start, let $e\in E(U_\tau)\setminus\{k\}$ be arbitrary but fixed. The event $e$ is thinly distributed. Moreover, if $s\in S(U_\tau)\setminus S(H_2)$ and $\neg s\edge{e}$, then, by Lemma~\ref{lem:easy_solvability}, $(e,s)$ is $\tau$-solvable. By Lemma~\ref{lem:essp_implies_ssp}, this implies that if $(s,s')$ is an SSA of $U_\tau$ such that $s,s'\not\in S(H_2)$, then $(s,s')$ is $\tau$-solvable. Thus, it remains to show that any SSA $(s,s')$ of $U_\tau$ where $s,s'\in S(H_2)$ is $\tau$-solvable, too. 
The corresponding regions can be defined similar to those from Section~\ref{sec:tau_ppt_solvability} and Section~\ref{sec:tau_pt_solvability}. In particular, the atom $(h_{2,0}, h_{2,b})$ can be solved by a region that is defined in accordance to the region $R=(sup, sig)$ of Section~\ref{sec:tau_ppt_solvability} that solves $(k ,h_{1,2b+4})$; one simply has to replace $sup(h_{1,0})=0$ by $sup(h_{2,0})=0$ and to ignore the events $y_0$ and $y_1$. The resulting region also solves $(s,s')$ if $s\not=s'\in \{h_{2,0},\dots, h_{2,b}\}$ or $s\not=s'\in \{h_{2,b+1},\dots, h_{2,2b+1}\}$ or $s\not=s'\in \{h_{2,2b+2},\dots, h_{2,3b+2}\}$. Finally, it is easy to see that all states of $\{h_{2,0},\dots, h_{2,b}\}$ are separable from all states of $\{h_{2,b+1},\dots, h_{2,2b+1}\}\cup \{h_{2,2b+2},\dots, h_{2,3b+2}\}$, and that all states of $\{h_{2,b+1},\dots, h_{2,2b+1}\}$ are separable from all states of $\{h_{2,2b+2},\dots, h_{2,3b+2}\}$. Altogether, this proves that if $M$ has a one-in-three model, then $U_\tau$ has the $\tau$-SSP. \end{proof} \subsection{NP-hardness of \textsc{$\tau$-solvability} and \textsc{$\tau$-ESSP} for $\tau=\tau_{\mathbb{Z}PPT}$ and $\tau=\tau_{\mathbb{Z}PT}$}\label{sec:tau_zppt_tau_zpt_solvability} In the remainder of this section, unless stated explicitly otherwise, let $\tau\in \{\tau_{\mathbb{Z}PT}^b,\tau_{\mathbb{Z}PPT}^b\}$ and let $E_0=\{(m,m) \vert 1 \leq m\leq b\}\cup \{ 0 \}$. The union $U_\tau$ has the following TS $H_3$ that provides the atom $\alpha=(k, h_{3,1,b-1})$: \begin{center} \begin{tikzpicture} \node (init) at (-0.9,0) {$H_3=$}; \node (h0) at (0,0) {\nscale{$h_{3,0,0}$}}; \node (h1) at (1.5,0) {\nscale{}}; \node (h_2_dots) at (1.75,0) {\nscale{$\dots$}}; \node (h_b_2) at (2,0) {}; \node (h_b_1) at (3.5,0) {\nscale{$h_{3,0,b-1}$}}; \node (h_b) at (5.5,0) {\nscale{$h_{3,0,b}$}}; \node (h_b+1) at (0,-1) {\nscale{$h_{3,1,0}$}}; \node (h_b+2) at (1.5,-1) {}; \node (h_b+2_dots) at (1.75,-1) {\nscale{$\dots$}}; \node (h_2b_3) at (2,-1) {}; \node (h_2b_2) at (3.5,-1) {\nscale{$h_{3,1,b-1}$}}; \graph{ (h0) ->["\escale{$k$}"] (h1); (h0) ->["\escale{$u$}", swap](h_b+1)->["\escale{$k$}"](h_b+2); (h_b_2)->["\escale{$k$}"] (h_b_1)->["\escale{$k$}"] (h_b); (h_2b_3)->["\escale{$k$}"] (h_2b_2); (h_2b_2)->[swap, "\escale{$z$}"] (h_b); }; \end{tikzpicture} \end{center} Moreover, for all $j\in \{0,\dots, m-1\}$, the union $U_\tau$ has the following gadgets $F_j$ and $G_j$ that use the variable $X_j$ as event: \begin{center} \begin{tikzpicture} \begin{scope} \node at (-0.9,0) {$F_j=$}; \node (f0) at (0,0) {\nscale{$f_{j,0,0}$}}; \node (f1) at (1.5,0) {\nscale{}}; \node (f_2_dots) at (1.75,0) {\nscale{$\dots$}}; \node (f_b_1) at (2,0) {}; \node (f_b) at (3.5,0) {\nscale{$f_{j,0,b-1}$}}; \node (f_b') at (5.25,0) {\nscale{$f_{j,0,b}$}}; \node (f_b+1) at (0,-1) {\nscale{$f_{j,1,0}$}}; \node (f_b+2) at (1.5,-1) {}; \node (f_b+2_dots) at (1.75,-1) {\nscale{$\dots$}}; \node (f_2b_1) at (2,-1) {}; \node (f_2b) at (3.5,-1) {\nscale{$f_{j,1,b-1}$}}; \graph{ (f0) ->["\escale{$k$}"] (f1); (f0) ->["\escale{$v_j$}", swap](f_b+1)->["\escale{$k$}"](f_b+2); (f_b_1)->["\escale{$k$}"] (f_b)->["\escale{$k$}"] (f_b'); (f_2b_1)->["\escale{$k$}"] (f_2b); (f_2b)->[swap, "\escale{$X_{j}$}"] (f_b'); }; \end{scope} \begin{scope}[xshift= 8cm] \node at (-0.9,0) {$G_j=$}; \node (f0) at (0,0) {\nscale{$g_{j,0}$}}; \node (f1) at (1.5,0) {\nscale{}}; \node (f_2_dots) at (1.75,0) {\nscale{$\dots$}}; \node (f_b_1) at (2,0) {}; \node (f_b) at (3.5,0) {\nscale{$g_{j,b}$}}; \node (f_b+1) at (5,0) 
{\nscale{$g_{j,b+1}$}}; \graph{ (f0) ->["\escale{$k$}"] (f1); (f_b_1)->["\escale{$k$}"] (f_b)->["\escale{$X_j$}"](f_b+1); }; \end{scope} \end{tikzpicture} \end{center} Finally, for all $i\in \{0,\dots, m-1\}$, the union $U_\tau$ has the following gadget $T_i$ that uses the variables of the clause $\zeta_i=\{X_{i_0}, X_{i_1}, X_{i_2}\}$ as events: \begin{center} \begin{tikzpicture} \node at (-1.4,0) {$T_i=$}; \node (t0) at (-0.6,0) {\nscale{$t_{i,0}$}}; \node (t1) at (0.7,0) {\nscale{}}; \node (t_2_dots) at (0.95,0) {\nscale{$\dots$}}; \node (t_b_1) at (1.2,0) {}; \node (t_b) at (2.5,0) {\nscale{$t_{i,b}$}}; \node (t_b+1) at (4.2,0) {\nscale{$t_{i,b+1}$}}; \node (t_b+2) at (6,0) {\nscale{$t_{i,b+2}$}}; \node (t_b+3) at (7.75,0) {\nscale{$t_{i,b+3}$}}; \node (t_b+4) at (9.5,0) {\nscale{$t_{i,b+4}$}}; \node (t_b+5) at (11,0) {}; \node (t_b+5_dots) at (11.25,0) {\nscale{$\dots$}}; \node (t_2b+3) at (11.5,0) {\nscale{}}; \node (t_2b+4) at (13,0) {\nscale{$t_{i, 2b+4}$}}; \graph{ (t0) ->["\escale{$k$}"] (t1); (t_b_1)->["\escale{$k$}"] (t_b) ->["\escale{$X_{i_0}$}"] (t_b+1)->["\escale{$X_{i_1}$}"] (t_b+2)->["\escale{$X_{i_2}$}"] (t_b+3)->["\escale{$z$}"] (t_b+4)->["\escale{$k$}"] (t_b+5); (t_2b+3)->["\escale{$k$}"] (t_2b+4); }; \end{tikzpicture} \end{center} Altogether, \[U_\tau=(H_3,F_0,G_0,\dots, F_{m-1}, G_{m-1}, T_0,\dots,T_{m-1}).\] \begin{lemma}\label{lem:tau_zppt_tau_zpt_essp_implies_model} If $U_\tau$ has the $\tau$-ESSP, then $\varphi$ has a one-in-three model. \end{lemma} \begin{proof} Since $U_\tau$ has the $\tau$-ESSP, there is a $\tau$-region, that solves $\alpha$. Let $R=(sup, sig)$ be such a region. In the following, we first argue that $sig(k)\in \{(1,0), (0,1)\}$ and $sig(z)\in E_0$. Secondly, we show that this implies that $M=\{X\in V(\varphi)\mid sig(X)=1\}$ is a one-in-three model of $\varphi$. Let $(sup, sig)$ be a $\tau$-region that solves $\alpha$, that is, $\neg sup(h_{3,1,b-1})\ledge{sig(k)}$. If $sig(k)\in E_0$, then we inductively obtain $sup(h_{3,1,0})=sup(h_{3,1,b-1})$. This contradicts $\neg sup(h_{3,1,b-1})\ledge{sig(k)}$. Moreover, if $e\in \{0,\dots, b\}$, then $s\edge{e}$ for all $s\in S_\tau$. Consequently, we have $sig(k)\not\in E_0 \cup \{1,\dots, b\}$. The event $k$ occurs $b$ times in a row. Therefore, by Lemma~\ref{lem:observations}, we get $sig(k)\in \{(1,0), (0,1)\}$. Moreover, if $sig(k)= (1,0)$, then $sup(h_{3,0,b})=0$ and if $sig(k)= (0,1)$, then $sup(h_{3,0,b})=b$. If $s\in \{0,\dots, b-1\}$ then $s\ledge{(0,1)}$ is true, and if $s\in \{1,\dots, b\}$, then $s\ledge{(1,0)}$ is true. Consequently, by $\neg sup(h_{3,1,b-1})\ledge{sig(k)}$, if $sig(k)=(0,1)$, then $sup(h_{3,1,b-1})=b$, and if $sig(k)=(1,0)$, then $sup(h_{3,1,b-1})=0$. For both cases, this implies $sup(h_{3,0,b})=sup(h_{3,1,b-1})$ and thus $sig(z)\in E_0$. We now argue that this makes $M$ a one-in-three model of $\varphi$. Let $i\in \{0,\dots, m-1\}$ be arbitrary but fixed. By the definition of $\tau$-regions, if $p_i$ is defined by \begin{center} \begin{tikzpicture} \node (init) at (-1,0) {$p_i=$}; \node (t_b) at (0,0) {\nscale{$sup(t_{i,b})$}}; \node (t_b+1) at (3,0) {\nscale{$sup(t_{i,b+1})$}}; \node (t_b+2) at (6,0) {\nscale{$sup(t_{i,b+2})$}}; \node (t_b+3) at (9,0) {\nscale{$sup(t_{i,b+3})$}}; \graph{ (t_b) ->["\escale{$sig(X_{i_0})$}"] (t_b+1)->["\escale{$sig(X_{i_1})$}"] (t_b+2)->["\escale{$sig(X_{i_2})$}"] (t_b+3); }; \end{tikzpicture} \end{center} \noindent then $p_i$ is a directed labeled path in $\tau$. 
By $sig(z)\in E_0$ and $t_{i,b+3}\edge{z}t_{i,b+4}$ we obtain that $sup(t_{i,b+3})=sup(t_{i,b+4})$. Moreover, $k$ occurs $b$ times in a row at $t_{i,0}$ and $t_{i,b+4}$. By Lemma~\ref{lem:observations}, this implies if $sig(k)=(0,1)$, then $sup(t_{i,b})=b$ and $sup(t_{i,b+4})=0$. Similarly, and if $sig(k)=(1,0)$, then $sup(t_{i,b})=0$ and $sup(t_{i,b+4})=b$. Altogether, we obtain that the following conditions are true: If $sig(z)\in E_0$ and $sig(k) = (1,0)$, then the path $p_i$ starts at $0$ and terminates at $b$, and if $sig(z)\in E_0$ and $sig(k) = (0,1)$, then the path $p_i$ starts at $b$ and terminates at $0$. In particular, both cases imply that there has to be at least one event $X\in \{X_{i_0}, X_{i_1}, X_{i_2}\}$ whose signature satisfies $sig(X)\not\in E_0$. Via the functionality of the gadgets $F_0,G_0,\dots, F_{m-1},G_{m-1} $, our reduction ensures that $X$ is unique. More exactly, the aim of $F_0,G_0,\dots, F_{m-1},G_{m-1} $ is to restrict the possible signatures for the variable events as follows: \begin{itemize} \item If $sig(k) = (1,0)$, then $X\in V(\varphi)$ implies $sig(X)\in E_0 \cup \{ b \}$, and \item if $sig(k) = (0,1)$, then $X\in V(\varphi)$ implies $sig(X)\in E_0 \cup \{ 1 \}$. \end{itemize} Before we argue that $F_0,G_0,\dots, F_{m-1},G_{m-1} $ satisfy the announced functionality, we first argue that these restrictions of the signature of $X_{i_0}, X_{i_1}, X_{i_2}$ ensure that there is exactly one variable event $X\in \{X_{i_0}, X_{i_1}, X_{i_2}\}$ with $sig(X)\not\in E_0$. Remember that, by definition, if $sig(X)\in E_0$ then $sig^-(X) + sig^+(X) = \vert sig(X)\vert = 0$. For a start, let $sig(z)\in E_0$ and $sig(k) = (1,0)$, which implies that $p_i$ starts at $0$ and terminates at $b$. Moreover, assume $sig(X)\in E_0 \cup \{ b \}$. By Lemma~\ref{lem:observations}, we obtain \begin{equation}\label{eq:modulo=b} (\vert sig(X_{i_0})\vert + \vert sig( X_{i_1} ) \vert +\vert sig( X_{i_2}) \vert) \equiv b \text{ mod } (b+1) \end{equation} If $sig(X_{i_0}), sig(X_{i_1}), sig(X_{i_2})\in E_0$, then $\vert sig(X_{i_0})\vert = \vert sig( X_{i_1} ) \vert =\vert sig( X_{i_2}) \vert=0$. This contradicts Equation~1. Hence, there has to be at least one variable event $X\in \{ X_{i_0}, X_{i_1} , X_{i_2} \}$ such that $sig(X)=b$. In the following, we argue that $X$ is unique. Assume, for a contradiction, that there are two different variable events $X, Y\in \{ X_{i_0}, X_{i_1} , X_{i_2} \}$ such that $sig(X)=sig(Y)=b$ and that $sig(Z)\in E_0$ for $Z \in \{ X_{i_0}, X_{i_1} , X_{i_2} \}\setminus \{X, Y\}$. By symmetry and transitivity, we obtain \eject \hbox{} \vspace*{-12mm} \begin{align} & b \equiv (\vert sig(X_{i_0})\vert + \vert sig( X_{i_1} ) \vert +\vert sig( X_{i_2}) \vert) \text{ mod } (b+1) && \vert (1) \\ & (\vert sig(X_{i_0})\vert + \vert sig( X_{i_1} ) \vert +\vert sig( X_{i_2}) \vert) \equiv 2b \text{ mod } (b+1) && \vert \text{assumpt.} \\ & b \equiv 2b \text{ mod } (b+1) && \vert (2),(3) \\ & 2b \equiv (b-1) \text{ mod } (b+1) && \vert \text{def. } \equiv \\ & b \equiv (b-1) \text{ mod } (b+1) &&\vert (4),(5)\\ & \exists m\in \mathbb{Z}: m(b+1)=1 && \vert (6) \end{align} By Equation~7, we get $b=0$, a contradiction. 
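For a concrete illustration of this arithmetic (the numbers are ours and serve only as an example), consider $b=2$: Equation~\ref{eq:modulo=b} demands $\vert sig(X_{i_0})\vert + \vert sig(X_{i_1})\vert + \vert sig(X_{i_2})\vert \equiv 2 \text{ mod } 3$. If exactly one of the three variable events has the value $b=2$ and the other two have signatures in $E_0$, then the sum equals $2$, as required. If, however, two of them have the value $2$, then the sum equals $4\equiv 1 \text{ mod } 3$, which is exactly the contradiction derived above.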
Similarly, if we assume that $\vert sig(X_{i_0})\vert = \vert sig( X_{i_1} ) \vert =\vert sig( X_{i_2}) \vert=b$, then we obtain \begin{align} & (\vert sig(X_{i_0})\vert + \vert sig( X_{i_1} ) \vert +\vert sig( X_{i_2}) \vert) \equiv 3b \text{ mod } (b+1) && \vert \text{assumpt.} \\ & b \equiv 3b \text{ mod } (b+1) && \vert (2),(8) \\ & 3b \equiv (b-2) \text{ mod } (b+1) && \vert \text{def. } \equiv \\ & b \equiv (b-2) \text{ mod } (b+1) &&\vert (9),(10)\\ & \exists m\in \mathbb{Z}: m(b+1)=2 && \vert (11) \end{align} By Equation~12, we have $b\in \{0,1\}$, which contradicts $b\geq 2$. Consequently, if $sig(z)\in E_0$ and $sig(k) = (1,0)$ and $sig(X)\in E_0 \cup \{ b \}$, then there is exactly one variable event $X\in \{X_{i_0}, X_{i_1}, X_{i_2}\}$ with $sig(X)\not\in E_0$. Otherwise, if $sig(z) \in E_0$, $sig(k) = (0,1)$, implying that $p_i$ starts at $b$ and terminates at $0$, and $sig(X)\in E_0 \cup \{ 1 \}$, then the following equation is true: \begin{equation}\label{eq:modulo=0} (b+ \vert sig(X_{i_0})\vert + \vert sig( X_{i_1} ) \vert +\vert sig( X_{i_2}) \vert) \equiv 0 \text{ mod } (b+1) \end{equation} This implies $\vert sig(X_{i_0})\vert + \vert sig( X_{i_1} ) \vert +\vert sig( X_{i_2}) \vert) \equiv 1 \text{ mod } (b+1)$. If there is more than one $X\in \{X_{i_0}, X_{i_1}, X_{i_2}\}$ such that $sig(X)=1$, then $2 \equiv 1 \text{ mod } (b+1)$ or $3 \equiv 1 \text{ mod } (b+1)$ is true. If $2 \equiv 1 \text{ mod } (b+1)$, then $b=0$, and if $3 \equiv 1 \text{ mod } (b+1)$, then $b\in \{0,1\}$. Since $b\geq 2$, both cases yield a contradiction. Consequently, there is exactly one $X\in \{X_{i_0}, X_{i_1}, X_{i_2}\}$ such that $sig(X)=1$, and if $Y\in \{X_{i_0}, X_{i_1}, X_{i_2}\}\setminus \{X\}$, then $sig(Y)\in E_0$. Under the assumption that the gadgets $F_0,G_0,\dots, F_{m-1}, G_{m-1}$ behave as announced, we have shown the following: If $(sup, sig)$ is a $\tau$-region of $U_\tau$ such that $sig(k)\in \{(0,1), (1,0)\}$ and $sig(z)\in E_0$, then, for every $i\in \{0,\dots, m-1\}$, there is exactly one variable event $X\in \{X_{i_0}, X_{i_1}, X_{i_2}\}$ such that $sig(X)\not\in E_0$. As a result, the set $M=\{X\in V(\varphi) \vert sig(X) \not\in E_0\}$ defines a one-in-three model of $\varphi$. It remains to argue that the gadgets $F_0,G_0,\dots, F_{m-1},G_{m-1} $ behave as announced. Let $j\in \{0,\dots, m-1\}$. In the following, we show that if $sig(k)=(1,0)$, then $sig(X_j)\in E_0\cup \{b\}$, and if $sig(k)=(0,1)$, then $sig(X_j)\in E_0\cup \{1\}$. To begin with, let $sig(k)=(1,0)$. The event $k$ occurs $b$ times in a row at $f_{j,0,0}$ and $g_{j,0}$ and $b-1$ times in a row at $f_{j,1,0}$. By Lemma~\ref{lem:observations} this implies $sup(f_{j,0,b})=sup(g_{j,b})=0$ and $sup(f_{j,1,b-1})\in \{0,1\}$. Clearly, if $sup(f_{j,0,b})=sup(f_{j,1,b-1})=0$ then $sig(X_j)\in E_0$. We argue that $sup(f_{j,1,b-1})=1$ implies $sig(X_j) = b$. Assume, for a contradiction, that $sig(X_j)\not=b $. If $sig(X_j)=(m,m)$ for some $m\in \{1,\dots, b\}$, then $-sig^-(X_j)+sig^+(X_j)=\vert sig(X_j) \vert = 0$. By Lemma~\ref{lem:observations}, this contradicts $sup(f_{j,0,b})\not=sup(f_{j,1,b-1})$. If $sig(X_j)=(m,n)$ with $m\not=n$, then $\vert sig(X_j)\vert =0$. By Lemma~\ref{lem:observations}, we have $sup(f_{j,0,b})=sup(f_{j,1,b-1})-sig^-(X_j)+sig^+(X_j)$, which implies $sig(X_j)=(1,0)$. But, this contradicts $sup(g_{j,b})\lledge{sig(X_j)}$, since $sup(g_{j,b})=0$ and $\neg 0 \ledge{(1,0)}$ in $\tau$. 
Finally, if $sig(X_j) = e \in \{0,\dots, b-1 \}$, then we have $1 + e \not\equiv 0 \text{ mod } (b+1)$. This contradicts $sup(f_{j,1,b-1})\lledge{sig(X_j)}sup(f_{j,0,b})$. Hence, we have $sig(X_j)=b$. Overall, it is proven that if $sig(k)=(1,0)$, then $sig(X_j)\in E_0\cup \{b\}$. To continue, let $sig(k)=(0,1)$. Similarly to the former case, by Lemma~\ref{lem:observations}, we obtain that $sup(f_{j,0,b})=sup(g_{j,b})=b$ and $sup(f_{j,1,b-1})\in \{b-1,b\}$. If $sup(f_{j,1,b-1})=b$, then $sig(X_j)\in E_0$. We show that $sup(f_{j,1,b-1})=b-1$ implies $sig(X_j)=1$. Assume $sig(X_j)=(m,n)\in E_\tau$. If $m=n$ or $m>n$, then, by $sup(f_{j,0,b})=sup(f_{j,1,b-1})-sig^-(X_j)+sig^+(X_j)$, we get $sup(f_{j,0,b}) < b$. This is a contradiction. If $m < n$ then, by $sup(g_{j,b+1})=sup(g_{j,b})-sig^-(X_j)+sig^+(X_j)$, we get the contradiction $sup(g_{j,b+1}) > b$. Hence, $sig(X_j)\in \{0,\dots, b\}$ and $(b-1 + \vert sig(X_j)\vert) \equiv b \text{ mod } (b+1)$. This implies that $(b+1)$ divides $(\vert sig(X_j)\vert-1)$ and thus $\vert sig(X_j)\vert \equiv 1 \text{ mod } (b+1)$. Consequently, we obtain $sig(X_j)=1$. This shows that $sig(k)=(0,1)$ and $z\in E_0$ implies $sig(X_j)\in E_0\cup \{1\}$. \end{proof} \newcommand{\freezer}[4]{ \ifstrequal{#4}{0}{ \begin{scope}[nodes={set=import nodes}, xshift= #2cm, yshift=#3 cm] \coordinate (c00) at (0,0); \coordinate(c01) at (1,0) ; \coordinate (c02) at (2,0) ; \coordinate (c10) at (0,-1) ; \coordinate (c11) at (2,-1) ; \node (f00) at (0,0) {\nscale{$f_{#1,0,0}$}}; \node (f01) at (1.5,0) {\nscale{$f_{#1,0,1}$}}; \node (f02) at (3,0) {\nscale{$f_{#1,0,2}$}}; \node (f10) at (0,-1.2) {\nscale{$f_{#1,1,0}$}}; \node (f11) at (2,-1.2) {\nscale{$f_{#1,1,1}$}}; \graph{ (f00) ->[,"\escale{$k$}"] (f01)->[,"\escale{$k$}"] (f02); (f10) ->[,"\escale{$k$}"] (f11); (f00) ->[,swap, "\escale{$v_#1$}"] (f10); (f11) ->[,swap, "\escale{$X_#1$}"] (f02); }; \end{scope} }{ \begin{scope}[nodes={set=import nodes}, xshift= #2cm, yshift=#3 cm] \coordinate (c00) at (0,0); \coordinate(c01) at (1,0) ; \coordinate (c02) at (2,0) ; \coordinate (c10) at (0,-1) ; \coordinate (c11) at (2,-1) ; \node (f00) at (0,0) {\nscale{$f_{#1,0,0}$}}; \node (f01) at (1.5,0) {\nscale{$f_{#1,0,1}$}}; \node (f02) at (3,0) {\nscale{$f_{#1,0,2}$}}; \node (f10) at (0,-1.2) {\nscale{$f_{#1,1,0}$}}; \node (f11) at (2,-1.2) {\nscale{$f_{#1,1,1}$}}; \graph{ (f00) ->[,"\escale{$k$}"] (f01)->[,"\escale{$k$}"] (f02); (f10) ->[,"\escale{$k$}"] (f11); (f00) ->[,swap, "\escale{$v_#1$}"] (f10); (f11) ->[,swap, "\escale{$X_#1$}"] (f02); }; \end{scope} } } \newcommand{\generator}[4]{ \ifstrequal{#4}{0}{ \begin{scope}[nodes={set=import nodes}, xshift = #2cm, yshift= #3cm, ] \coordinate (c00) at (0,0); \coordinate(c01) at (1,0) ; \coordinate(c02) at (2,0) ; \coordinate(c03) at (3,0) ; \node (g00) at (0,0) {\nscale{$g_{#1,0}$}}; \node (g01) at (1.3,0) {\nscale{$g_{#1,1}$}}; \node (g02) at (2.6,0) {\nscale{$g_{#1,2}$}}; \node (g03) at (3.9,0) {\nscale{$g_{#1,2}$}}; \graph{ (g00) ->[,"\escale{$k$}"] (g01)->[,"\escale{$k$}"] (g02)->[,"\escale{$X_#1$}"] (g03); }; \end{scope} }{ \begin{scope}[nodes={set=import nodes}, xshift = #2cm, yshift= #3cm, ] \coordinate (c00) at (0,0); \coordinate(c01) at (1.2,0) ; \coordinate(c02) at (2.4,0) ; \coordinate(c03) at (3.6,0) ; \node (g00) at (0,0) {\nscale{$g_{#1,0}$}}; \node (g01) at (1.3,0) {\nscale{$g_{#1,1}$}}; \node (g02) at (2.6,0) {\nscale{$g_{#1,2}$}}; \node (g03) at (3.9,0) {\nscale{$g_{#1,2}$}}; \graph{ (g00) ->[,"\escale{$k$}"] (g01)->[,"\escale{$k$}"] (g02)->[,"\escale{$X_#1$}"] (g03); 
}; \end{scope} } } \newcommand{\translator}[7]{ \ifstrequal{#7}{1}{ \begin{scope}[nodes={set=import nodes}, xshift=#5cm+0.5 cm, yshift=#6 cm] \coordinate (c0) at (0,0); \coordinate (c1) at (1,0) ; \coordinate (c2) at (2,0) ; \coordinate (c3) at (3,0) ; \coordinate (c4) at (4,0) ; \coordinate (c5) at (5,0) ; \coordinate(c6) at (6,0) ; \coordinate (c7) at (7,0) ; \coordinate (c8) at (8,0) ; \node (t0) at (0,0) {\nscale{$t_{#1,0}$}}; \node (t1) at (1.5,0) {\nscale{$t_{#1,1}$}}; \node (t2) at (3,0) {\nscale{$t_{#1,2}$}}; \node (t3) at (4.5,0) {\nscale{$t_{#1,3}$}}; \node (t4) at (6,0) {\nscale{$t_{#1,4}$}}; \node (t5) at (7.5,0) {\nscale{$t_{#1,5}$}}; \node (t6) at (9,0) {\nscale{$t_{#1,6}$}}; \node (t7) at (10.5,0) {\nscale{$t_{#1,7}$}}; \node (t8) at (12,0) {\nscale{$t_{#1,8}$}}; \graph{ (t0) ->[,"\escale{$k$}"] (t1)->[,"\escale{$k$}"] (t2)->[,"\escale{$X_#2$}"] (t3)->[,"\escale{$X_#3$}"] (t4)->[,"\escale{$X_#4$}"] (t5)->[,"\escale{$z$}"] (t6)->[,"\escale{$k$}"] (t7)->[,"\escale{$k$}"] (t8); }; \end{scope} }{ \begin{scope}[nodes={set=import nodes}, xshift=#5cm+0.5cm, yshift=#6 cm] \coordinate (c0) at (0,0); \coordinate (c1) at (1,0) ; \coordinate (c2) at (2,0) ; \coordinate (c3) at (3,0) ; \coordinate (c4) at (4,0) ; \coordinate (c5) at (5,0) ; \coordinate(c6) at (6,0) ; \coordinate (c7) at (7,0) ; \coordinate (c8) at (8,0) ; \node (t0) at (0,0) {\nscale{$t_{#1,0}$}}; \node (t1) at (1.5,0) {\nscale{$t_{#1,1}$}}; \node (t2) at (3,0) {\nscale{$t_{#1,2}$}}; \node (t3) at (4.5,0) {\nscale{$t_{#1,3}$}}; \node (t4) at (6,0) {\nscale{$t_{#1,4}$}}; \node (t5) at (7.5,0) {\nscale{$t_{#1,5}$}}; \node (t6) at (9,0) {\nscale{$t_{#1,6}$}}; \node (t7) at (10.5,0) {\nscale{$t_{#1,7}$}}; \node (t8) at (12,0) {\nscale{$t_{#1,8}$}}; \graph{ (t0) ->[,"\escale{$k$}"] (t1)->[,"\escale{$k$}"] (t2)->[,"\escale{$X_#2$}"] (t3)->[,"\escale{$X_#3$}"] (t4)->[,"\escale{$X_#4$}"] (t5)->[,"\escale{$z$}"] (t6)->[,"\escale{$k$}"] (t7)->[,"\escale{$k$}"] (t8); }; \end{scope} } } \begin{figure}\label{fig:example} \end{figure} Conversely, a one-in-three model of $\varphi$ implies the $\tau$-ESSP and the $\tau$-SSP for $U_\tau$: \begin{lemma}\label{lem:tau_zppt_tau_zpt_model_implies_solvability} If $\varphi$ has a one-in-three model, then $U_\tau$ has the $\tau$-ESSP and the $\tau$-SSP. \end{lemma} \begin{proof} Let $M$ be a one-in-three model of $\varphi$, and let $I=\{h_{3,0,0}, t_{j,0}, f_{j,0,0},g_{j,0}\mid 0\leq j\leq m-1\}$ be the set of the initial states of the gadgets of $U_\tau$. We start with the solvability of $k$. The following $\tau$-region $R=(sup, sig)$ solves $\alpha=(k, h_{3,1,b-1})$ and thus $k$ completely in $H_3$: for all $s\in I$, $sup(s)=0$; for all $e\in E(U_\tau)$, if $e=k$, then $sig(e)=(0,1)$; if $e\in \{z\}\cup (V(\varphi)\setminus M)$, then $sig(e)=0$; for all $j\in \{0,\dots, m-1\}$, if $e=v_j$ and $X_j\in M$, then $sig(e)=0$; for all $j\in \{0,\dots, m-1\}$, if $e=v_j$ and $X_j\not\in M$, then $sig(e)=1$; otherwise holds $e \in M\cup\{u\}$, and we define $sig(e)=1$. Notice that this region solves also a lot SSA of $U_\tau$. In particular, if $q_0\edge{k}\dots\edge{k}q_b$, then this region solves $(s,s')$ for all $s\not=s' \in \{1_0,\dots, q_b\}$. The following region $R=(sup, sig)$ solves $(k,s)$ for all remaining relevant states of $U_\tau$: for all $s\in I$, $sup(s)=0$; for all $e\in E(U_\tau)$, if $e=k$, then $sig(e)=(0,1)$; if $e\in \{z\}\cup \{v_0,\dots, v_{m-1}\}$, then $sig(e)=1$; otherwise, $sig(e)=0$. We proceed with the solvability of $z$. 
Let $i\in \{0,\dots, m-1\}$ be arbitrary but fixed. Let $j,\ell\in \{0,\dots, m-1\}\setminus\{i\}$ such that $j\not=\ell$ and $X_{i_2}\in E(T_j)$ and $X_{i_2}\in E(T_\ell)$. The following region solves $(z,s)$ for all $s\in \{h_{3,0,0}\}\cup S(T_i)$: for all $s\in \{h_{3,0,0},t_{i,0}, t_{j,0}, t_{\ell,0}\}$, $sup(s)=b$; for all $s\in \{f_{n,0,0}, g_{n,0}\mid n\in \{0,\dots, m-1\}\}$, $sup(s)=1$; for all $s\in \{t_{n,0}\mid n\in \{0,\dots, m-1\}\}\setminus\{t_{i,0}, t_{j,0}, t_{\ell,0}\}$, $sup(s)=0$; for all $e\in E(U_\tau)$, if $e=z$, then $sig(e)=(0,b)$; if $e=X_{i_2}$, then $sig(e)=1$; if $n\in \{0,\dots, m-1\}$ and $e=v_n$ and $X_{i_2}\in E(F_n)$, then $sig(e)=b$; if $e=u$, then $sig(e)=1$; otherwise $sig(e)=0$. By the arbitrariness of $i$, this also proves the $\tau$-solvability of $(z,s)$ for all relevant $s\in \bigcup_{j=0}^{m-1}S(T_j)$. Notice that if $s\in \{h_{3,0,0},\dots, h_{3,0,b}\}$ and $s'\in \{h_{3,1,0},\dots, h_{3,1,b-1}\}$ or if $s\in \{f_{i,0,0},\dots, f_{i,0,b}\}$ and $s'\in \{f_{i,1,0},\dots, f_{i,1,b-1}\}$, then this region also solves $(s,s')$. Thus, altogether, we have already proven the solvability of all states of $H_3,F_0,\dots, F_{m-1}$.
The following region $R=(sup, sig)$ solves $(z,s)$ for all relevant $s\in S(H_3)\setminus\{h_{3,0,0}\}$: for all $s\in I\setminus\{t_{n,0}\mid n\in \{0,\dots, m-1\}\}$, $sup(s)=0$; for all $s\in \{t_{n,0}\mid n\in \{0,\dots, m-1\}\}$, $sup(s)=1$; for all $e\in E(U_\tau)$, if $e=z$, then $sig(e)=(0,b)$; if $e=k$, then $sig(e)=1$; if $e=u$, then $sig(e)=2$; otherwise $sig(e)=0$.
The following region $R=(sup, sig)$ solves $(z,s)$ for all remaining relevant states: for all $s\in \{h_{3,0,0}\}\cup\{ f_{n,0,0}, g_{n,0}\mid n\in \{0,\dots, m-1\}\}$, $sup(s)=b$; for all $s\in \{t_{n,0}\mid n\in \{0,\dots, m-1\}\}$, $sup(s)=0$; for all $e\in E(U_\tau)$, if $e=z$, then $sig(e)=(0,b)$; if $e=u$, then $sig(e)=1$; otherwise, $sig(e)=0$.
We proceed by arguing for the solvability of $u$. The following region $R=(sup, sig)$ solves $(u,s)$ for all $s\in \{h_{3,0,1}, \dots, h_{3,0,b}\}$: for all $s\in I$, $sup(s)=0$; for all $e\in E(U_\tau)$, if $e=u$, then $sig(e)=(0,b)$; if $e=z$, then $sig(e)=2$; if $e=k$, then $sig(e)=1$; otherwise $sig(e)=0$.
If $b >2$, then the following region $R=(sup, sig)$ solves $(u,s)$ for relevant states $s\in S(U_\tau)\setminus \{h_{3,0,1}, \dots, h_{3,0,b}\}$: for all $s\in I$, if $s=h_{3,0,0}$, then $sup(s)=0$; otherwise, $sup(s)=1$; for all $e\in E(U_\tau)$, if $e=u$, then $sig(e)=(0,b)$; if $e=z$, then $sig(e)=1$; otherwise $sig(e)=0$. If $b=2$, then we additionally need a slightly modified region that maps $sup(s)=0$ for all $s\in \{t_{j,0}\mid j\in \{0,\dots, m-1\}\}$. This proves the solvability of $u$.
We proceed with the solvability of the events $v_0,\dots, v_{m-1}$. Let $i\in \{0,\dots, m-1\}$ be arbitrary but fixed. The following region $R=(sup, sig)$ solves $(v_i,s)$ for all $s\in \{f_{i,0,1},\dots, f_{i,0,b}\}$: for all $s\in I$, $sup(s)=0$; for all $e\in E(U_\tau)$, if $e=v_i$, then $sig(e)=(0,b)$; if $e=X_i$, then $sig(e)=2$; if $e=k$, then $sig(e)=1$; otherwise $sig(e)=0$.
If $b> 2$, then the following region $R=(sup, sig)$ solves $(v_i, s)$ for all remaining relevant states $s\in S(U_\tau)\setminus \{f_{i,0,1},\dots, f_{i,0,b}\}$: $sup(f_{i,0,0})=0$; for all $s\in I\setminus\{f_{i,0,0}\}$, $sup(s)=1$; for all $e\in E(U_\tau)$, if $e=v_i$, then $sig(e)=(0,b)$; if $e=X_i$, then $sig(e)=1$; otherwise $sig(e)=0$.
If $b=2$, then we additionally need a slightly modified region that maps $sup(s)=0$ for all $s\in \{g_{j,0},t_{j,0}\mid j\in \{0,\dots, m-1\}\}$. This proves the solvability of $v_i$. Since $i$ was arbitrary, this proves the solvability of all $v_0,\dots, v_{m-1}$.
It is easy to see that the variable events $X_0,\dots, X_{m-1}$ are solvable. Thus, for the sake of simplicity, we refrain from the explicit representation of the corresponding regions. Moreover, one easily verifies that the remaining regions that complete the $\tau$-ESSP of $U_\tau$ also solve the remaining SSA of $U_\tau$. Altogether, we have finally proven that if $\varphi$ has a one-in-three model, then $U_\tau$ has the $\tau$-ESSP and the $\tau$-SSP.
\end{proof}
\section{Polynomial time results}\label{sec:poly_results}
The following theorem states the main result of this section:
\begin{theorem}\label{the:tractability}
\begin{enumerate}
\item \textsc{$\tau_{R\mathbb{Z}PT}^b$-ESSP} can be solved in time polynomial in the size of the input $A$.
\item\label{the:tau_zppt_tau_zpt_tau_rzpt_ssp} If $\tau\in \{\tau_{\mathbb{Z}PT}^b, \tau_{\mathbb{Z}PPT}^b, \tau_{R\mathbb{Z}PT}^b\}$, then \textsc{$\tau$-SSP} can be solved in time polynomial in the size of the input $A$.
\end{enumerate}
\end{theorem}
The contribution of Theorem~\ref{the:tractability} is threefold. Firstly, \textsc{$\tau$-ESSP} and \textsc{$\tau$-Solvability} are NP-complete for all $\tau\in \{\tau_{\mathbb{Z}PT}^b,\tau_{\mathbb{Z}PPT}^b\}$ by Theorem~\ref{the:hardness_results}. However, Theorem~\ref{the:tractability}.\ref{the:tau_zppt_tau_zpt_tau_rzpt_ssp} states that \textsc{$\tau$-SSP} is solvable in polynomial time for these types. Hence, to the best of our knowledge, Theorem~\ref{the:tractability} discovers the first Petri net types for which \textsc{$\tau$-SSP} and \textsc{$\tau$-ESSP} as well as \textsc{$\tau$-SSP} and \textsc{$\tau$-Solvability} provably have a different computational complexity.
Secondly, in \cite{DBLP:conf/stacs/Schmitt96}, Schmitt extended the type $\tau_{PPT}^1$ by the additive group of integers modulo $2$, which leads to the tractable (super-)type $\tau_{\mathbb{Z}PPT}^1$. Moreover, in~\cite{DBLP:conf/tamc/TredupR19}, we argued that Schmitt's approach transferred to $\tau_{PT}^1$ yields the tractable type $\tau_{\mathbb{Z}PT}^1$. However, by Theorem~\ref{the:hardness_results}, lifting Schmitt's technique to $\tau_{PPT}^b$ and $\tau_{PT}^b$ does not lead to superclasses with a tractable synthesis problem for all $2\leq b\in \mathbb{N}$. Hence, Theorem~\ref{the:tractability} provides the first tractable type of $b$-bounded Petri nets with $b\geq 2$ so far.
Finally, Theorem~\ref{the:tractability} gives us insight into which of the $\tau$-\emph{net properties}, where $\tau\in \{\tau_{PT}^b,\tau_{PPT}^b\}$, cause the hardness of \textsc{$\tau$-Synthesis} and the corresponding separation problems. In particular, flow arc relations (events in $\tau$) between places and transitions in a $\tau$-net define conditions under which a transition is able to fire. For example, if $N$ is a $\tau$-net with transition $t$ and place $p$ such that $f(p,t)=(1,0)$ then the firing of $t$ in a marking $M$ requires $M(p)\geq 1$. By Theorem~\ref{the:tractability}, the hardness of finding a $\tau$-net $N$ for $A$ originates from the possibility of $\tau$-nets to satisfy such conditions with multiple markings $M(p)\in \{1,\dots,b\}$.
In fact, the definition of $\tau_{R\mathbb{Z}PT}^b$ implies that $f(p,t)=(m,n)$ requires $M(p)=m$ for the firing of $t$ and prohibits the possibility of multiple choices. By Theorem~\ref{the:tractability}, this makes $\tau_{R\mathbb{Z}PT}^b$-synthesis tractable.
While the question of whether there are superclasses of $\tau_{PT}^b,\tau_{PPT}^b$, $b\geq 2$, for which synthesis is doable in polynomial time remains unanswered, the following lemma shows that the type $\tau_{R\mathbb{Z}PT}^b$ yields at least a tractable superclass of Schmitt's type \cite{DBLP:conf/stacs/Schmitt96}. More generally, if $b < b'$ then the class of $\tau_{R\mathbb{Z}PT}^{b'}$-solvable TS is strictly more comprehensive than the class of $\tau_{R\mathbb{Z}PT}^{b}$-solvable TS:
\begin{lemma}\label{lem:contribution_of_sharp_nets}
If $b < b'\in \mathbb{N}^+$ and if $\mathcal{T}$ is the set of $\tau_{R\mathbb{Z}PT}^b$-solvable TS and $\mathcal{T'}$ the set of $\tau_{R\mathbb{Z}PT}^{b'}$-solvable TS then $\mathcal{T}\subset \mathcal{T'}$.
\end{lemma}
\begin{proof}
We present a TS $A$ that is $\tau_{R\mathbb{Z}PT}^{b'}$-solvable but not $\tau_{R\mathbb{Z}PT}^{b}$-solvable: Let $A=(\{s_0,\dots , s_{b'}\},\{a\},\delta, s_0)$ be the TS with transition function $\delta (s_i, a)=s_{i+1}$ for $i\in \{0,\dots, b'-1\}$ and $\delta (s_{b'},a)=s_0$. In other words, $A$ is a directed labeled cycle $s_0\edge{a}\dots \edge{a}s_{b'}\edge{a}s_0$ where every transition is labeled by $a$. Notice that $A$ has no ESSA. Hence, it has the $\tau$-ESSP for every type of nets $\tau$. Consequently, $A$ is $\tau$-solvable if and only if it has the $\tau$-SSP.
Assume, for a contradiction, that $A$ is $\tau_{R\mathbb{Z}PT}^b$-solvable. By $b <b'$, $A$ provides the SSA $(s_0, s_{b+1})$ and the $\tau_{R\mathbb{Z}PT}^b$-solvability of $A$ implies that there is a $\tau_{R\mathbb{Z}PT}^b$-region $(sup, sig)$ that solves it. If $sig(a)=(m,n)$ then $sup(s_1)=sup(s_0)-m+n\not=sup(s_0)$ and, by definition of $\tau_{R\mathbb{Z}PT}^b$, $\neg sup(s_1)\ledge{(m,n)}$. This is a contradiction to $s_1\edge{a}$. Hence, $sig(a)\in \{1,\dots, b\}$. By induction, $sup(s_{b+1})=sup(s_0) + (b+1)\cdot sig(a) = sup(s_0) \text{ mod } (b+1)$ implying $sup(s_{b+1})=sup(s_0)$. Thus, $(sup, sig)$ does not solve $(s_0, s_{b+1})$, which proves that $A$ is not $\tau_{R\mathbb{Z}PT}^{b}$-solvable.
On the contrary, it is easy to see that the $\tau_{R\mathbb{Z}PT}^{b'}$-region $(sup, sig)$, which is defined by $sup(s_0)=0$, $sig(a)=1$ and $sup(s_{i+1})=sup(s_i)+ sig(a)$ for $i\in \{0,\dots, b'-1\}$, solves every SSA of $A$. Hence, $A$ is $\tau_{R\mathbb{Z}PT}^{b'}$-solvable.
\end{proof}
\subsection{Abstract regions and fundamental cycles}
In the remainder of this paper, unless explicitly stated otherwise, we assume that $A=(S,E,\delta,\iota)$ is an arbitrary but fixed (non-trivial) TS with at least two states and event set $E=\{ e_1,\dots, e_n \}$. Recall that $\tau\in \{\tau_{\mathbb{Z}PT}^b, \tau_{\mathbb{Z}PPT}^b, \tau_{R\mathbb{Z}PT}^b\}$ and $b\in \mathbb{N}^+$ are also arbitrary but fixed.
The proof of Theorem~\ref{the:tractability} is based on a generalization of the approach used in~\cite{DBLP:conf/stacs/Schmitt96} that reduces the solvability of ESSA and SSA to the solvability of systems of linear equations modulo $b+1$.
It exploits that the solvability of such systems is decidable in polynomial time:
\begin{lemma}[\cite{DBLP:journals/iandc/GoldmannR02}]\label{lem:complexity_solving_linear_system}
Let $M \in \mathbb{Z}^{k \times n}_{b+1}$ and $c\in \mathbb{Z}^k_{b+1}$. There is an algorithm that decides in time $\mathcal{O}(nk\cdot \max\{n,k\})$ whether there is an element $x\in \mathbb{Z}^n_{b+1}$ such that $Mx=c$.
\end{lemma}
Essentially, our generalization composes for every ESSA and every SSA $\alpha=(x,y)$ of the TS $A$ a system of equations modulo $b+1$ that has a solution if and only if $\alpha$ is $\tau$-solvable. Hence, the TS $A$ has the $\tau$-ESSP, respectively the $\tau$-SSP, if and only if every system, defined by the ESSA of $A$, respectively by the SSA of $A$, has a solution.
We proceed by introducing the notion of abstract regions. Our starting point is the goal of obtaining $\tau$-regions $(sup, sig)$ of $A$ as solutions of linear equation systems modulo $b + 1$. By Definition~\ref{def:region} and the definition of $\tau$, $(sup, sig)$ is a $\tau$-region of $A$ if and only if for every transition $s\edge{e}s'$ it is true that
\begin{equation}\label{eq:region_definition}
sup(s')=(sup(s)-sig^-(e)+sig^+(e)+\vert sig(e) \vert) \text{ mod } (b+1)
\end{equation}
Hence, installing for every transition $s\edge{e}s'$ the corresponding Equation~\ref{eq:region_definition} yields a linear system of equations whose solutions are regions of $A$. If $(sup, sig)$ is a solution of this system such that $sig(e)=(m,n)\in E_\tau\setminus \{0,\dots, b\}$ for $e\in E(A)$ then, by definition, for every transition $s\edge{e}s'$ it has to be true that $m \leq sup(s)$ and $sup(s)-m+n\leq b$. Unfortunately, the conditions $m \leq sup(s)$ and $sup(s)-m+n\leq b$ cannot be tested in the group $\mathbb{Z}_{b+1}$. To cope with this obstacle, we abstract from elements $(m,n)\in E_\tau$ by restricting to regions (solutions) that identify $(m,n)$ with the unique element $x\in \{0,\dots, b\}$ such that $x= (n-m) \text{ mod } (b+1)$. This leads to the notion of \emph{abstract} $\tau$-regions.
\begin{definition}[Abstract Region]\label{def:abstract_region}
A $\tau$-region $(sup, sig)$ of $A=(S,E,\delta,\iota)$ is called \emph{abstract} if the codomain of $sig$ is restricted to the elements of $\mathbb{Z}_{b+1}$, that is, $sig: E\longrightarrow \{0,\dots, b\}$. If $(sup, sig)$ is an abstract region, then we call $sig$ an \emph{abstract} signature.
\end{definition}
\begin{remark}[Notation of abstract regions]\label{rem:notation_abstract_region}
For the sake of clarity, we denote abstract signatures by $abs$ instead of $sig$ and abstract regions by $(sup, abs)$ instead of $(sup, sig)$. For convenience, we also identify $abs=(abs(e_1),\dots, abs(e_n))$.
\end{remark}
By definition, two mappings $sup: S \rightarrow \{0,\dots, b\}$ and $abs: E\rightarrow \{0,\dots, b\}$ define an abstract $\tau$-region if and only if for every transition $s\edge{e}s'$ of $A$ it is true that
\begin{equation}\label{eq:abstract_region_definition}
sup(s')=(sup(s) + abs(e)) \text{ mod } (b+1)
\end{equation}
Obviously, for abstract regions, Equation~\ref{eq:region_definition} reduces to Equation~\ref{eq:abstract_region_definition}. Installing for every transition $s\edge{e}s'$ of $A$ its corresponding Equation~\ref{eq:abstract_region_definition} yields a system modulo $b+1$ whose solutions are abstract regions. However, such systems require dealing with $sup$ and $abs$ simultaneously, which is very inconvenient.
It is better to first obtain $abs$ independently of $sup$ and then to define $sup$ with the help of $abs$. The following observations show how to realize this idea.
By induction and Equation~\ref{eq:abstract_region_definition}, one immediately obtains that $(sup, abs)$ is an abstract region if and only if for every directed labeled path $p=\iota\Edge{e'_1}\dots\Edge{e'_m}s_m$ of $A$ from the initial state $\iota$ to the state $s_m$ the \emph{path equation} holds:
\begin{equation}\label{eq:path_equation}
sup(s_m) = (sup(\iota) + abs(e'_1)+ \dots + abs(e'_m)) \text{ mod }(b+1)
\end{equation}
In order to exploit Equation~\ref{eq:path_equation}, we first introduce the following notions:
\begin{definition}[Parikh-vector]\label{def:parikh_vector}
Let $p=z_0\edge{a_1}\dots\edge{a_m}z_m$ be a path of the TS $A$ on pairwise distinct states $z_0,\dots, z_m$. The \emph{Parikh-vector} of $p$ is the mapping $\psi_p:\{e_1,\dots, e_n\}\rightarrow \{0,\dots, b\}$ such that $\psi_p(e)=\vert \{i\in \{1,\dots, m\}\mid a_i=e\}\vert \text{ mod }(b+1)$ for every event $e\in \{e_1,\dots, e_n\}$, that is, $\psi_p$ assigns to $e$ the number of its occurrences on $p$ modulo $b+1$. For convenience, we identify $\psi_p=(\psi_p(e_1),\dots, \psi_p(e_n))$.
\end{definition}
\begin{definition}[Product]\label{def:product}
If $x=(x_1,\dots, x_n)$ and $y=(y_1,\dots, y_n)$ are two elements of $\mathbb{Z}^n_{b+1}$, then we say $x\cdot y= (x_1\cdot y_1 +\dots + x_n\cdot y_n) \text{ mod } (b+1)$ is the \emph{product} of $x$ and $y$.
\end{definition}
Definition~\ref{def:parikh_vector} and Definition~\ref{def:product} allow us to reformulate the path equation~\ref{eq:path_equation} as follows:
\begin{equation}\label{eq:path_equation_reformulated}
sup(s_m)=(sup(\iota) + \psi_p\cdot abs) \text{ mod } (b+1)
\end{equation}
Notice that if $p,p'$ are two different paths from $\iota$ to $s_m$, then $\psi_p\cdot abs =\psi_{p'}\cdot abs$. Thus, the support $sup$ is fully determined by $sup(\iota)$ and $abs$. We obtain $sup$ explicitly by $sup(s)=(sup(\iota) + \psi_p\cdot abs) \text{ mod } (b+1)$ for all $s\in S$, where $p$ is an arbitrary but fixed path of $A$ that starts at $\iota$ and terminates at $s$. Consequently, every abstract signature $abs$ implies $b+1$ different abstract $\tau$-regions of $A$, one for every $sup(\iota)\in \{0,\dots, b\}$.
Altogether, we have argued that the challenge of finding abstract regions of $A$ reduces to the task of finding the abstract signatures of $A$. In the following, we introduce the notion of fundamental cycles, defined by so-called chords of a spanning tree of $A$, which enables us to find abstract signatures.
\begin{figure}
\caption{Left: An input TS $A$. Right: A spanning tree $A'$ of TS $A$. The unique Parikh vectors $\psi_0,\dots, \psi_7$ of $A'$ (written as rows) are given by $\psi_0= (0,0,0,0), \psi_1=(1,0,0,0), \psi_2= (1,1,0,0), \psi_3=(1,1,1,0), \psi_4=(1,1,2,0), \psi_5= (0,0,1,0)$, $\psi_6=(0,0,2,0) $ and $\psi_7=(1,0,2,0) $. The transitions $\delta_A(7,d)=4$, $\delta_A(4,c)=2$ and $\delta_A(6,c)=0$ of $A$ define the chords of $A'$. The corresponding fundamental cycles are given by $\psi_t=\psi_7 +(0,0,0,1) -\psi_4 = (0,2,0,1) $ and $\psi_{t'}= \psi_4 + (0,0,1,0)-\psi_2 =(0,0,0,0)$ and $\psi_{t''}=\psi_6 + (0,0,1,0)-\psi_0 =(0,0,0,0)$. Hence, if $abs=(x_a,x_b,x_c,x_d)$ then $\psi_t\cdot abs= 0\cdot x_a +2\cdot x_b +0\cdot x_c + x_d=2\cdot x_b + x_d$.
By $\psi_{t'}\cdot abs = \psi_{t''}\cdot abs =0$ for every map $abs$, only the equation $2\cdot x_b + x_d=0$ contributes to the basic part of every upcoming system.
}
\label{fig:fundamental_cycles}
\end{figure}
\begin{definition}[Spanning tree, chord]\label{def:spanning_tree}
A \emph{spanning tree} $A'$ of TS $A$ is a sub-transition system $A'=(S, E', \delta_{A'}, \iota)$ of $A$ with the same set of states $S$, an event set $E'\subseteq E$ and a restricted transition function $\delta_{A'}$ such that, firstly, $\delta_{A'}(s,e)=s'$ entails $\delta_A(s,e)=s'$ and, secondly, for every $s\in S$ there is \emph{exactly} one path $p=\iota\edge{e_1} \dots \edge{e_m} s$ in $A'$. Every transition $s\edge{e}s'$ of $A$ which is not in $A'$ is called a \emph{chord} (of $A'$).
\end{definition}
\begin{remark}[Parikh-vector of a state in the spanning tree]
For every $s\in S$, by $\psi_s$ we denote the Parikh-vector $\psi_p$ of the unique path $p=\iota\edge{e_1} \dots \edge{e_m} s$ in $A'$.
\end{remark}
Notice that the underlying undirected graph of $A'$ is a tree in the common graph-theoretical sense. The chords of $A'$ are exactly the edges that induce a cycle in the underlying undirected graph of $A'$. This gives rise to the following notion of fundamental cycles:
\begin{definition}[Fundamental cycle]\label{def:fundamental_cycle}
Let $t=s\edge{e}s'$ be a chord of $A'$. The \emph{fundamental cycle} of $t$ is the mapping $\psi_t:\{e_1,\dots, e_n\}\rightarrow \{0,\dots, b\}$ that is defined as follows for all $i\in \{1,\dots, n\}$:
\[\psi_t(e_i)=
\begin{cases}
\psi_s(e_i)-\psi_{s'}(e_i) \text{ mod } (b+1), & \text{if } e_i\not=e\\
\psi_s(e_i) -\psi_{s'}(e_i) +1 \text{ mod } (b+1), & \text{else.}\\
\end{cases}
\]
For convenience, we identify $\psi_t=(\psi_t(e_1),\dots, \psi_t(e_n))$.
\end{definition}
By the following lemma, we can use the fundamental cycles to generate abstract signatures of $A$:
\begin{lemma}\label{lem:fundamental_cycles}
If $A'$ is a spanning tree of a TS $A$ with chords $t_1,\dots, t_k$ then $\emph{abs}\in \mathbb{Z}^n_{b+1}$ is an abstract signature of $A$ if and only if $\psi_{t_i} \cdot \emph{abs} = 0$ for all $i\in \{1,\dots, k\}$. Two different spanning trees $A'$ and $A''$ provide equivalent systems of equations.
\end{lemma}
\begin{proof}
We start with proving the first statement.
\textit{If}: Let $\emph{abs}\in \mathbb{Z}^n_{b+1}$ be such that $\psi_{t_i} \cdot abs = 0$ for all $i\in \{1,\dots, k\}$. Let $sup(\iota)\in \{0,\dots, b\}$ be arbitrary but fixed and, for all $s\in S$, let $sup(s)=sup(\iota)+\psi_s\cdot abs$. We show that $(sup, abs)$ is an abstract region of $A$, that is, for all edges $t=s\edge{a}s'$ of $A$ it holds that $sup(s')=sup(s) +abs(a) \text{ mod } (b+1)$: By definition, we have $sup(s)=sup(\iota)+\psi_s\cdot abs$ and $sup(s')=sup(\iota)+\psi_{s'}\cdot abs$. If $t$ is not a chord, then $\psi_{s'}(a)=\psi_s(a)+1 \text{ mod }(b+1)$ and $\psi_{s'}(e)=\psi_s(e)$ for all $e\in \{e_1,\dots, e_n\}\setminus\{a\}$. This implies $sup(s')=sup(\iota)+\psi_s\cdot abs + abs(a) \text{ mod }(b+1)$ and thus $sup(s')=sup(s) +abs(a) \text{ mod } (b+1)$.
Otherwise, if $t$ is a chord of $A'$, then it holds that $\psi_t(a)=\psi_s(a) -\psi_{s'}(a)+1$ and the following implications (considered modulo $b+1$) are true: \begin{align*} 0 & = \psi_t \cdot abs &\Longleftrightarrow \\ 0 &= \sum_{i=1}^n(\psi_s(e_i)-\psi_{s'}(e_i))\cdot abs(e_i) +abs(a)&\Longleftrightarrow \\ 0 &= \sum_{i=1}^n \psi_s(e_i)\cdot abs(e_i)- \sum_{i=1}^n \psi_{s'}(e_i)\cdot abs(e_i) +abs(a)&\Longleftrightarrow \\ \psi_{s'}\cdot abs &= \psi_s\cdot abs+abs(a)&\Longleftrightarrow \\ sup(\iota)+ \psi_{s'}\cdot abs &= sup(\iota) + \psi_s\cdot abs+abs(a)&\Longleftrightarrow \\ sup(s') & = sup(s) + abs(a) & \end{align*} Hence, $abs$ is an abstract signature of $A$ and the proof shows how to get a corresponding abstract region $(sup,abs)$ of $A$. \textit{Only-if}: If $abs$ is an abstract signature of $A$, witnessed by an abstract region $(sup, abs)$, then we have $sup(s') = sup(s) + abs(e)$ for every transition $s\edge{e}s'$ in $A$. Hence, if $t=s\edge{e}s'$ is a chord of a spanning tree $A'$ of $A$ then working backwards through the equivalent equalities above proves $\psi_t\cdot abs =0$. The second statement is implied by the first: If $A'$, $A''$ are two spanning trees of $A$ with fundamental cycles $\psi^{A'}_{t_1},\dots, \psi^{A'}_{t_k}$ and $\psi^{A''}_{t'_1},\dots, \psi^{A''}_{t'_k}$, respectively, then we have for $abs\in \mathbb{Z}^n_{b+1}$ that $\psi^{A'}_{t_i}\cdot abs =0$, $i\in \{1,\dots, k\}$, if and only if $abs$ is an abstract signature of $A$ if and only if $\psi^{A''}_{t'_i}\cdot abs =0$, $i\in \{1,\dots, k\}$. \end{proof} In the following, justified by Lemma~\ref{lem:fundamental_cycles}, we assume $A'$ to be a fixed spanning tree of $A$ with chords $t_1,\dots, t_k$. By $M_{A'}$ we denote the system of equations that consists of $\psi_{t_i} \cdot abs = 0$ for all $i\in \{1,\dots, k\}$. A spanning tree of $A$ is computable in polynomial time: As $\delta_A$ is a function, $A$ has at most $ \vert E\vert\vert S\vert^2$ edges and $A'$ contains $\vert S\vert -1$ edges. Thus, by $2 \leq \vert S\vert$, $A'$ has at most $\vert E\vert\vert S\vert^2 -1$ chords. Consequently, a spanning tree $A'$ of $A$ is computable in time $\mathcal{O}(\vert E\vert\vert S \vert^3)$~\cite{DBLP:journals/networks/Tarjan77}. To get polynomial time solvable systems of equations, we have restricted ourselves to equations like Equation~\ref{eq:path_equation} or its reformulated version Equation~\ref{eq:path_equation_reformulated}. This restriction results in the challenge of computing abstract signatures of $A$. By Lemma~\ref{lem:fundamental_cycles}, abstract signatures of $A$ are solutions of $M_{A'}$. We get an (abstract) $\tau$-region $(sup, abs)$ of $A$ from $sup(\iota)$ and $abs$ by defining $sup(\iota)$ and $sup(s)=sup(\iota)+\psi_s \cdot abs $ for all $s\in S$. However, if $(s, s')$ is an SSA of $A$ then $sup(s)\not=sup(s')$ is not implied. Moreover, by definition, to solve an ESSA $(e,s)$, we need (concrete) $\tau$-regions $(sup, sig)$ such that $sig:E \longrightarrow E_\tau$. The next section shows how to extend $M_{A'}$ to get such solving $\tau$-regions. \subsection{The Proof of Theorem~\ref{the:tractability}} This section shows how to extend $M_{A'}$ for a given (E)SSA $\alpha$ to get a system $M_\alpha$, whose solution yields a region solving $\alpha$ if there is one. 
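Purely for illustration, and not as part of the formal development, the construction of the basic system $M_{A'}$ can be sketched as follows in Python; the sketch fixes a breadth-first spanning tree and assumes that the TS is given by its state set, event set, (partial) transition function and initial state.
\begin{verbatim}
from collections import deque

def basic_system(states, events, delta, iota, b):
    """Rows of M_{A'}: one fundamental cycle per chord of a BFS spanning tree."""
    mod = b + 1
    idx = {e: i for i, e in enumerate(events)}
    psi = {iota: [0] * len(events)}   # Parikh vector of the unique tree path
    tree = set()                      # transitions kept in the spanning tree
    queue = deque([iota])
    while queue:
        s = queue.popleft()
        for e in events:
            t = delta.get((s, e))
            if t is None:
                continue
            if t not in psi:          # first visit: keep (s, e, t) as a tree edge
                vec = psi[s][:]
                vec[idx[e]] = (vec[idx[e]] + 1) % mod
                psi[t] = vec
                tree.add((s, e, t))
                queue.append(t)
    rows = []
    for (s, e), t in delta.items():   # every non-tree transition is a chord
        if (s, e, t) in tree or s not in psi:
            continue
        row = [(psi[s][i] - psi[t][i]) % mod for i in range(len(events))]
        row[idx[e]] = (row[idx[e]] + 1) % mod
        rows.append(row)              # encodes the equation row . abs = 0 (mod b+1)
    return psi, rows
\end{verbatim}
Each returned row $r$ encodes one equation $r\cdot abs = 0$ of $M_{A'}$. Applied to the TS of Figure~\ref{fig:fundamental_cycles} with $b=2$, the sketch yields a system that is, by Lemma~\ref{lem:fundamental_cycles}, equivalent to the one stated in the caption, although the concrete spanning tree and hence the individual rows may differ.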
But first we need the following lemma that tells us how to obtain abstract regions from (concrete) regions: \begin{lemma}\label{lem:concrete_to_abstract} If $(sup, sig)$ is a $\tau$-region of a TS $A=(S,E,\delta,\iota)$ then we obtain a corresponding abstract $\tau$-region $(sup, abs)$ by defining $abs$ for $e\in E$ as follows: If $sig(e)=(m,n)$ then $abs(e)=-m+n \text{ mod } (b+1)$ and, otherwise, if $sig(e)\in \{0,\dots, b\}$ then $abs(e)=sig(e)$. \end{lemma} \begin{proof} We have to show that $s\edge{e}s'$ in $A$ entails $sup(s)\ledge{abs(e)}sup(s')$ in $\tau$. If $abs(e)=sig(e)\in \{0,\dots, b\}$ this is true as $(sup, sig)$ is a $\tau$-region. If $sig(e)=(m,n)$ then, by definition, we have $sup(s')=sup(s)-m+n \text{ mod } (b+1)$ implying $sup(s')-sup(s)= -m+n \text{ mod } (b+1)$. By $abs(e)=-m+n \text{ mod } (b+1)$ and symmetry, we get $-m+n = abs(e) \text{ mod } (b+1)$ and, by transitivity, we obtain $sup(s')-sup(s) = abs(e) \text{ mod } (b+1)$ which implies $sup(s')= sup(s) + abs(e) \text{ mod } (b+1)$. Thus $sup(s)\ledge{abs(e)}sup(s')$. \end{proof} If $\alpha$ is an SSA $(s,s')$ then we only need to assure that the (abstract) region $(sup, abs)$ built on a solution of $M_{A'}$ satisfies $sup(s)\not=sup(s')$. By $sup(s)=sup(\iota) + \psi_s\cdot abs$ and $sup(s')=sup(\iota) + \psi_{s'}\cdot abs$, it is sufficient to extend $M_{A'}$ in a way that ensures $\psi_s\cdot abs\not= \psi_{s'}\cdot abs$. The next lemma proves this claim. \begin{lemma}\label{lem:ssp} If $\tau\in \{\tau_{\mathbb{Z}PT}^b,\tau_{\mathbb{Z}PPT}^b,\tau_{R\mathbb{Z}PT}^b\}$ then an SSA $(s,s')$ of $A=(S,E,\delta,\iota)$ is $\tau$-solvable if and only if there is an abstract signature $abs$ of $A$ with $\psi_{s}\cdot abs \not = \psi_{s'}\cdot abs $. \end{lemma} \begin{proof} \textit{If}: If $abs$ is an abstract signature with $\psi_{s}\cdot abs \not = \psi_{s'}\cdot abs $ then the $\tau$-region $(sup, abs)$ with $sup(\iota)=0$ and $sup(s)=\psi_{s}\cdot abs$ satisfies $sup(s)\not=sup(s')$. \textit{Only-if}: If $(sup, sig)$ is a $\tau$-region then we obtain a corresponding abstract $\tau$-region $(sup, abs)$ as defined in Lemma~\ref{lem:concrete_to_abstract}. Clearly, $abs$ is an abstract signature and satisfies the path equations. Consequently, by $sup(\iota)+\psi_{s}\cdot abs= sup(s) \not= sup(s')=sup(\iota)+\psi_{s'}\cdot abs $, we have that $\psi_{s}\cdot abs \not=\psi_{s'}\cdot abs $. \end{proof} The next lemma applies Lemma~\ref{lem:ssp} to get a polynomial time algorithm which decides the $\tau$-SSP if $\tau\in \{\tau_{\mathbb{Z}PT}^b,\tau_{\mathbb{Z}PPT}^b,\tau_{R\mathbb{Z}PT}^b\}$. \begin{lemma}\label{lem:ssp_tractability} If $\tau\in \{\tau_{\mathbb{Z}PT}^b,\tau_{\mathbb{Z}PPT}^b,\tau_{R\mathbb{Z}PT}^b\}$ then deciding whether a TS $A=(S,E,\delta,\iota)$ has the $\tau$-SSP is doable in time $\mathcal{O}(\vert E\vert^3 \cdot \vert S \vert^6)$. \end{lemma} \begin{proof} If $\alpha=(s,s')$ is an SSA of $A$ then the (basic) part $M_{A'}$ of $M_\alpha $ consists of at most $\vert E\vert\cdot \vert S\vert^2 - 1$ equations for the fundamental cycles. To satisfy $\psi_{s}\cdot abs \not = \psi_{s'}\cdot abs $, we add the equation $(\psi_{s}-\psi_{s'})\cdot abs = q$, where initially $q=1$, and get (the first possible) $M_\alpha$. A solution of $M_\alpha$ provides an abstract region satisfying $\psi_{s}\cdot abs \not= \psi_{s'}\cdot abs$. By Lemma~\ref{lem:ssp}, this proves the solvability of $\alpha$. If $M_\alpha$ is not solvable then we modify $M_\alpha$ to $M_\alpha'$ simply by incrementing $q$ and try to solve $M_\alpha'$. 
Either we get a solution or we modify $M_\alpha'$ to $M_\alpha''$ by incrementing $q$ again. By Lemma~\ref{lem:ssp}, if $(s,s')$ is solvable then there is a $q\in \{1,\dots, b\}$ such that the corresponding (modified) system has a solution. Hence, after at most $b$ iterations we can decide whether $(s,s')$ is solvable or not. Consequently, we have to solve at most $b$ linear systems with at most $ \vert E\vert \cdot \vert S\vert^2 $ equations for $(s,s')$. The value $b$ is not part of the input. Thus, by Lemma~\ref{lem:complexity_solving_linear_system}, this is doable in $\mathcal{O}(\vert E\vert^3 \cdot \vert S\vert^4)$ time. We have at most $\vert S \vert^2$ different SSA to solve. Hence, we can decide the $\tau$-SSP in time $\mathcal{O}(\vert E\vert^3 \cdot \vert S\vert^6)$. \end{proof} As a next step, we let $\tau=\tau_{R\mathbb{Z}PT}^b$ and prove the polynomial time decidability of $\tau$-ESSP. Let $\alpha$ be an ESSA $(e,s)$ and let $s_1,\dots, s_k$ be the sources of $e$ in $A$. By definition, a $\tau$-region $(sup, sig)$ solves $\alpha$ if and only if $sig(e)=(m,n)$ and $\neg sup(s)\ledge{sig(e)}$ for a $(m,n)\in E_\tau$. By definition of $\tau$, every element $(m,n)\in E_\tau$ occurs at exactly one state in $\tau$ and this state is $m$. Hence, $sup(s_1)=\dots=sup(s_k)=m$ and $sup(s)\not=m$. We base the following lemma on this simple observation. It provides necessary and sufficient conditions that an \emph{abstract} region must fulfill to imply a \emph{solving} (concrete) region. \begin{lemma}\label{lem:essp_tau_4} Let $\tau=\tau_{R\mathbb{Z}PT}^b$ and $A=(S,E,\delta,\iota)$ be a TS and let $s_1\edge{e}s'_1,\dots, s_k\edge{e}s'_k$ be the $e$-labeled transitions in $A$, that is, if $s'\in S\setminus \{s_1,\dots, s_k\}$ then $\neg s'\edge{e} $. The atom $(e,s)$ is $\tau$-solvable if and only if there is an event $(m,n)\in E_\tau$ and an abstract region $(sup,abs)$ of $A$ such that the following conditions are satisfied: \begin{enumerate} \item $abs(e) = -m+n \text{ mod } (b+1)$, \item $\psi_{s_1}\cdot abs = m - sup(\iota) \text{ mod } (b+1)$, \item $(\psi_{s_1}-\psi_{s_i})\cdot abs = 0 \text{ mod } (b+1)$ for $i\in \{2,\dots, k\}$ \item $(\psi_{s_1}-\psi_s)\cdot abs \not= 0 \text{ mod } (b+1)$. \end{enumerate} \end{lemma} \begin{proof} \textit{If}: Let $(sup, abs)$ be an abstract region that satisfies the conditions $1$-$4$. We obtain a $\tau$-solving region $(sup, sig)$ with (the same support and) the signature $sig$ defined by $sig(e')=abs(e')$ if $e'\not=e$ and $sig(e')=(m, n)$ if $e'=e$. To argue that $(sup, sig)$ is a $\tau$-region we have to argue that $q\edge{e'}q'$ in $A$ implies $sup(q)\ledge{sig(e')}sup(q')$. As $(sup, abs)$ is an abstract region this is already clear for transitions $q\edge{e'}q'$ where $e'\not=e$. Moreover, $(sup, abs)$ satisfies $\psi_{s_1}\cdot abs = m - sup(\iota) \text{ mod } (b+1)$ and the path equation holds, that is, $sup(s_1)=sup(\iota) + \psi_{s_1}\cdot abs \text{ mod } (b+1)$ which implies $sup(s_1)=m$. Consequently, by definition of $\tau$, we have $sup(s_1) \ledge{(m,n)} n $ in $\tau$. Furthermore, by $abs(e)= -m+n \text{ mod } (b+1)$ we have $m+abs(e) = n\text{ mod } (b+1)$. Hence, by $sup(s_1)\ledge{abs(e)}sup(s'_1)$, we conclude $sup(s'_1)=n$ and, thus, $sup(s_1)\ledge{(m,n)}sup(s'_1)$. By $(\psi_{s_1}-\psi_{s_i})\cdot abs = 0 \text{ mod } (b+1)$ for $i\in \{2,\dots, k\}$, we obtain that $sup(s_1)=\dots =sup(s_k)=m$. 
Therefore, similar to the discussion for $s_1\edge{e}s'_1$, we obtain by $sup(s_i)\ledge{abs(e)}sup(s'_i)$ that the transitions $sup(s_i)\ledge{(m,n)}sup(s'_i)$ are present in $\tau$ for $i\in \{2,\dots, k\}$. Consequently, $(sup, sig)$ is a $\tau$-region. Finally, by $(\psi_{s_1}-\psi_s)\cdot abs \not = 0 \text{ mod } (b+1)$, we have that $sup(s_1)\not=sup(s)$ and thus $\neg sup(s)\ledge{sig(e)}$. This proves $(e,s)$ to be $\tau$-solvable by $(sup, sig)$. \textit{Only-if}: Let $(sup, sig)$ be a $\tau$-region that solves $(e,s)$ implying, by definition, $\neg sup(s)\ledge{sig(e)}$. We use $(sup, sig)$ to define a corresponding abstract $\tau$-region $(sup, abs)$ in accordance with Lemma~\ref{lem:concrete_to_abstract}. If $sig(e)\in \{0,\dots,b\}$ then $sup(s)\ledge{sig(e)}$, a contradiction. Hence, $sig(e)=(m,n)\in E_\tau$ is such that $sup(s_i)\ledge{(m,n)}$ for $i\in \{1,\dots, k\}$ and $\neg sup(s)\ledge{(m,n)}$. This immediately implies $sup(s)\not=sup(s_1)$ and, hence, $(\psi_{s_1}-\psi_s)\cdot abs \not= 0 \text{ mod } (b+1)$. By $sup(s_i)\ledge{(m,n)}sup(s'_i)$ and definition of $\tau$, we have that $sup(s_i)=m$ and $sup(s'_i)=n$ for $i\in \{1,\dots, k\}$ implying $(\psi_{s_1}-\psi_{s_i})\cdot abs = 0 \text{ mod } (b+1)$ for $i\in \{2,\dots, k\}$. Moreover, by $sup(s_1)\ledge{abs(e)}sup(s'_1)$ we have $ abs(e) = sup(s'_1)-sup(s_1)\text{ mod } (b+1)$. Hence, $abs(e)= -m+n \text{ mod } (b+1)$. Finally, by the path equation, we have $sup(s_1)=sup(\iota)+\psi_{s_1}\cdot abs \text{ mod } (b+1)$ which with $sup(s_1)=m$ implies $\psi_{s_1}\cdot abs = m - sup(\iota)\text{ mod } (b+1)$. This proves the lemma. \end{proof} The proof of the following lemma exhibits a polynomial time decision algorithm for the $\tau_{R\mathbb{Z}PT}^b$-ESSP: Given a TS $A=(S,E,\delta,\iota)$ and a corresponding ESSA $\alpha$, the system $M_{A'}$ is extended to a system $M_\alpha$. If $M_\alpha$ has a solution $abs$, then it implies a region $(sup, abs)$ satisfying the conditions of Lemma~\ref{lem:essp_tau_4} and thus implies the $\tau$-solvability of $\alpha$. Conversely, if $\alpha$ is solvable, then there is an abstract region $(sup, abs)$ that satisfies the four conditions of Lemma~\ref{lem:essp_tau_4}. The abstract signature $abs$ is then a solution of a corresponding equation system $M_\alpha$. Hence, we get a solvable $M_\alpha$ if and only if $\alpha$ is solvable. We argue that the number of possible systems is bounded polynomially in the size of $A$. The solvability of every system is also decidable in polynomial time. Consequently, by the at most $\vert E\vert \cdot \vert S\vert $ ESSA to solve, this yields the announced decision procedure. \begin{lemma}\label{lem:essp_tractability} Whether a TS $A=(S,E,\delta,\iota)$ has the $\tau_{R\mathbb{Z}PT}^b$-ESSP is decidable in time $\mathcal{O}(\vert E\vert^4\cdot \vert S\vert^5)$. \end{lemma} \begin{proof} To estimate the computational complexity of deciding the $\tau_{R\mathbb{Z}PT}^b$-ESSP for $A$ observe that $A$ has at most $\vert S\vert \cdot \vert E\vert$ ESSA to solve. Hence, the maximum costs of deciding the $\tau_{R\mathbb{Z}PT}^b$-ESSP for $A$ equal $\vert S\vert \cdot \vert E \vert$ times the maximum effort for a single atom. In order to decide the $\tau$-solvability of a single ESSA $(e,s)$, we compose systems in accordance with Lemma~\ref{lem:essp_tau_4}. The maximum costs can be estimated as follows: The (basic) part $M_{A'}$ of $M_\alpha$ has at most $\vert E\vert \cdot \vert S \vert^2$ equations. 
Moreover, $e$ occurs at most at $\vert S\vert -1$ states. This makes at most $\vert S\vert$ equations to ensure that $e$'s sources will have the same support, the third condition of Lemma~\ref{lem:essp_tau_4}. According to the first and the second condition, we choose an event $(m,n)\in E_\tau$, a value $sup(\iota)\in \{0,\dots, b\}$, define $abs(e)=-m+n \text{ mod } (b+1)$ and add the corresponding equation $\psi_{s_1} \cdot abs = m - sup(\iota)$. For the fourth condition we choose a fixed value $q \in \{1,\dots, b\}$ and add the equation $(\psi_{s_1} - \psi_{s})\cdot abs =q$. Hence, the system has at most $2 \cdot \vert E\vert \cdot \vert S\vert^2 $ equations. By Lemma~\ref{lem:complexity_solving_linear_system}, one checks in time $\mathcal{O}(\vert E\vert^3 \cdot \vert S\vert^4 )$ if such a system has a solution. Notice that we use $2\cdot \vert E\vert\cdot \vert S\vert^2 = \max\{\vert E\vert , 2 \cdot \vert E\vert \cdot \vert S\vert^2 \}$. There are at most $(b+1)^2$ possibilities to choose a corresponding $(m,n)\in E_\tau$ and only $b+1$ possible values for $sup(\iota)$ and for $q$, respectively. Hence, for a fixed atom $(e,s)$, we have to solve at most $(b+1)^4$ such systems and $b$ is not part of the input. Consequently, we can decide in time $\mathcal{O}(\vert E\vert^3\cdot \vert S\vert^4 )$ if $(e,s)$ is solvable. $A$ provides at most $\vert S \vert\cdot \vert E \vert$ ESSA. Hence, the $\tau_{R\mathbb{Z}PT}^b$-ESSP for $A$ is decidable in time $\mathcal{O}(\vert E\vert^4 \cdot \vert S\vert^5)$. \end{proof} The following corollary completes the proof of Theorem~\ref{the:tractability} and, moreover, shows that \textsc{$\tau_{R\mathbb{Z}PT}^b$-Synthesis} is solvable in polynomial time. \begin{corollary}\label{cor:tractability_synthesis} There is an algorithm that constructs, for a TS $A=(S,E,\delta,\iota)$, a $\tau_{R\mathbb{Z}PT}^b$-net $N$ with a state graph $A_N$ isomorphic to $A$ if it exists, in time $\mathcal{O}(\vert E\vert ^3 \cdot \vert S\vert^5 \cdot \max\{ \vert E\vert, \vert S\vert \})$. \end{corollary} \begin{proof} By \cite{DBLP:series/txtcs/BadouelBD15}, if $\mathcal{R}$ is a set of regions of $A$ containing for each ESSA and SSA of $A$ a solving region, respectively, then the $\tau$-net $N^\mathcal{R}_A=(\mathcal{R}, E(A), f, M_0)$, where $f((sup,sig),e)=sig(e)$ and $M_0((sup,sig))=sup(\iota)$ for $(sup, sig)\in \mathcal{R}, e\in E(A)$, has a state graph isomorphic to $A$. Hence, the corollary follows from Lemma~\ref{lem:ssp_tractability} and Lemma~\ref{lem:essp_tractability}. \end{proof} \begin{example} We pick up our running example TS $A$ and its spanning tree of Figure~\ref{fig:fundamental_cycles}. We present two steps of the method given by Lemma~\ref{lem:essp_tractability} for the type $\tau^2_4$ and check $\tau^2_4$-solvability of the ESSA $(c,1)$. For a start, we choose $(m,n)=(0,1)$ and $sup(0)=0$ and determine $abs(c)=-0+1=1$ which yields $abs=(x_a, x_b, 1 , x_d)$. We have to add $\psi_0\cdot abs=m-sup(0)=0$ which, by $\psi_0=(0,0,0,0)$, is always true and does not contribute to the system. Moreover, for $i\in \{0,2,3,4,5,6\}$, we add the equation $(\psi_0-\psi_i)\cdot abs =0$. We have $\psi_0-\psi_6=(0,0,-2,0)$ and $(0,0,-2,0)\cdot abs= 0\cdot x_a - 0\cdot x_b - 2 -0\cdot x_d=0$ yields a contradiction. Hence, $(c,1)$ is not solvable by a region $(sup, sig)$ where $sup(0)=0$ and $sig(c)=(0,1)$. Similarly, we obtain that the system corresponding to $sup(0)\in \{1,2\}$ and $sig(c)=(0,1)$ is also not solvable. For another try, we choose $(m,n)=(2,2)$ and $sup(0)=2$. 
In accordance to the first and the second condition of Lemma~\ref{lem:essp_tau_4} this determines $abs=(x_a, x_b, 0 , x_d)$ and yields the equation $\psi_0 \cdot abs=m-sup(0)=2-2=0$ which is always true. For the fourth condition, we pick $q=2$ and add the equation $(\psi_0-\psi_1)\cdot abs= 2\cdot x_a=2$. Finally, for the third condition, we add for $i\in \{0,2,3,4,5,6\}$ the equation $(\psi_0-\psi_i)\cdot abs =0$ and obtain the following system of equations modulo $(b+1)$: \begin{align*} \psi_t\cdot abs &= & &2\cdot x_b & \ & + x_d &=0 \\ (\psi_0-\psi_1)\cdot abs &= 2\cdot x_a & \ & &\ &\ &= 2 \\ (\psi_0-\psi_2)\cdot abs &= 2\cdot x_a & +\ &2\cdot x_b &\ &\ &= 0\\ (\psi_0-\psi_3)\cdot abs &= 2\cdot x_a & +\ & 2\cdot x_b & + 2\cdot 0 &\ &= 0 \end{align*} \begin{align*} (\psi_0-\psi_4)\cdot abs &= 2\cdot x_a & +\ & 2\cdot x_b & + 1\cdot 0 &\ &= 0\\ (\psi_0-\psi_5)\cdot abs &= & & & 2\cdot 0 &\ &= 0\\ (\psi_0-\psi_6)\cdot abs &= & \ & & 1\cdot 0 &\ &= 0 \end{align*} This system is solvable by $abs=(1,2,0,2)$. We construct a region in accordance to the proof of Lemma~\ref{lem:essp_tau_4}: By $sup(0)=2$ we obtain $sup(1)=2+\psi_1\cdot abs=2+(1,0,0,0)\cdot (1,2,0,2)=0$. Similarly, by $sup(i)=2+\psi_i\cdot abs$ for $i\in \{2,\dots, 7\}$ we obtain $sup(2)=sup(3)=sup(4)=sup(5)=sup(6)=2$ and $sup(7)=0$. Hence, by defining $sig(c)=(2,2)$, $sig(a)=1$, $sig(b)=2$ and $sig(d)=2$ we obtain a fitting $\tau_{R\mathbb{Z}PT}^b$-region $(sup, sig)$ that solves $(c,1)$. \end{example} \section{Conclusion}\label{sec:conclusion} In this paper, for all $b\in \mathbb{N}$, we completely characterize the computational complexity of \textsc{$\tau$-SSP} and \textsc{$\tau$-ESSP} and \textsc{$\tau$-Solvability} for the types of pure $b$-bounded P/T-nets, $b$-bounded P/T-nets and their corresponding $\mathbb{Z}_{b+1}$-extensions. This answers an open problem posed by Schlachter et al. in~\cite{DBLP:conf/concur/SchlachterW17}. Some open problems in the field of Petri net synthesis concern the computational complexity of $\tau$-\emph{synthesis up to language equivalence} (\textsc{$\tau$-Language Synthesis}) and $\tau$-\emph{synthesis from modal TS} (\textsc{$\tau$-Modal Synthesis}): \textsc{$\tau$-Language Synthesis} is the task to find for a given TS $A=(S,E,\delta,\iota)$ a $\tau$-net $N$ whose state graph $A_N$ has the same language as $A$, that is, $L(A_N)=L(A)$. If there is a sought $\tau$-net $N$ for $A$, then $A$ is called $\tau$-solvable up to language equivalence. To attack this problem, in~\cite[p.~164]{DBLP:series/txtcs/BadouelBD15}, the language $L(A)$ of $A$ is viewed as the TS $L_A=(L(A), E,\delta_L,\varepsilon)$ where $\delta_L(w,e)=we$ if and only if $we\in L(A)$. By the result of~\cite[p.~164]{DBLP:series/txtcs/BadouelBD15}, there is a $\tau$-net $N$ that solves $A$ up to language equivalence if and only if the TS $L_A$ has the $\tau$-ESSP. Since there might be exponentially (or even infinite) many paths in $A$, computing $L_A$ and then checking the ESSP yields an algorithm that, in general, is at least exponential in the size of $A$. Anyway, the exact computational complexity of $\tau$-language synthesis has not yet been proven, and, so far, there has been also no lower bound. For $\tau\in \{\tau_{PT}^b,\tau_{PPT}^b\}$, our results imply a lower bound, to be seen as follows: If $A=s_0\edge{e_1}s_1\edge{e_2}\dots\edge{e_n}s_n$ is a linear TS, then $L_A=\varepsilon\edge{e_1}e_1\edge{e_2}\dots\edge{e_n}e_1\dots e_n$ (the states of $L_A$ are $e_1$ and $e_1e_2$ and $\dots$ and $e_1\dots e_n$). 
In particular, it is easy to see that $A$ and $L_A$ are isomorphic. Consequently, by~\cite[p.~164]{DBLP:series/txtcs/BadouelBD15}, a linear TS $A$ is $\tau$-solvable up to language equivalence if and only if it has the $\tau$-ESSP. Thus, by Theorem~\ref{the:hardness_results}, $\tau$-language synthesis is NP-hard, since there is a trivial reduction from \textsc{$\tau$-ESSP} to \textsc{$\tau$-language synthesis}. $\tau$-modal synthesis~\cite{DBLP:conf/concur/SchlachterW17} is the task to find for a given \emph{modal} TS $M$ a $\tau$-net $N$ such that the state graph $A_N$ \emph{implements} $M$: A \emph{modal} TS $M=(S, E, \delta_{must}, \delta_{may}, s_0)$ has a set of states $S$, events $E$, an initial state $s_0$, a (partial) function $\delta_{must}:S\times E\rightarrow S$ that defines the \emph{must}-edges and a (partial) function $\delta_{may}:S\times E\rightarrow S$ that defines the \emph{may}-edges of $M$; moreover, $\delta_{must}$ and $\delta_{may}$ satisfy that if $\delta_{must}(s,e)=s'$, then $\delta_{may}(s,e)=s'$, that is, every must-arc is a may-arc, but not every may-arc is necessarily a must-arc. A TS $A$ that has the same event set as $M$ \emph{implements} $M$ if a relation $R\subseteq S(M)\times S(A)$ exists such that $(s_{0,M}, \iota)\in R$ and for all $(s, q)\in R$ and $e\in E(M)=E(A)$ the following holds: \begin{enumerate} \item If $\delta_{must}(s, e)=s'$, then there is a $q'\in S(A)$ such that $\delta_A(q,e)=q'$ and $(s',q')\in R$. \item If $\delta_A(q,e)=q'$, then there is an $s'\in S(M)$ such that $\delta_{may}(s,e)=s'$ and $(s',q')\in R$. \end{enumerate} If there is such a net $N$ for $M$, then $M$ is called $\tau$-\emph{implementable}. The computational complexity of $\tau$-modal synthesis has been stated as an open problem in~\cite{DBLP:conf/concur/SchlachterW17}. While at least an (exponential) upper bound is given in~\cite{DBLP:conf/concur/SchlachterW17}, a lower bound has not yet been stated. Our results imply a lower bound for $\tau\in \{\tau_{PT}^b,\tau_{PPT}^b\}$. This can be seen as follows: Every TS $A$ can be interpreted as a modal TS where the must-edges and the may-edges coincide. For such a TS, the just introduced implementation relation then reduces to the well-known relation of bisimulation~\cite[p.~22]{DBLP:series/eatcs/Gorrieri17}. Moreover, it is also known that deterministic TS $A_0$ and $A_1$ are bisimilar if and only if they are language equivalent (also called \emph{trace equivalent})~\cite[p.~26]{DBLP:series/eatcs/Gorrieri17}. Altogether, we have justified that a linear TS $A$ has the $\tau$-ESSP if and only if it is $\tau$-solvable up to language equivalence if and only if, interpreted as a modal TS, it is implementable by the state graph $A_N$ of a $\tau$-net $N$. Thus, for $\tau\in\{\tau_{PT}^b,\tau_{PPT}^b\}$, the following theorem is a corollary of Theorem~\ref{the:hardness_results} and, at least, gives lower bounds for the computational complexity of both $\tau$-language synthesis and $\tau$-modal synthesis: \begin{theorem} Let $\tau\in \{\tau_{PT}^b,\tau_{PPT}^b\}$. Deciding for a TS $A$ if it is $\tau$-solvable up to language equivalence or deciding for a modal TS $M$ if it is $\tau$-implementable is NP-hard. \end{theorem} It remains for future work to settle the exact complexity of $\tau$-language synthesis and $\tau$-modal synthesis. Moreover, one might investigate if \textsc{$\tau$-Solvability} and \textsc{$\tau$-ESSP} remain NP-complete for $1$-grade TS if $\tau\in \{\tau_{\mathbb{Z}PT}^b,\tau_{\mathbb{Z}PPT}^b\}$. \end{document}
The Schwarzian derivative and polynomial iteration
Author: Hexi Ye
Journal: Conform. Geom. Dyn. 15 (2011), 113-132
MSC (2010): Primary 37F10; Secondary 37F40
DOI: https://doi.org/10.1090/S1088-4173-2011-00229-3
Published electronically: August 16, 2011
Abstract: We consider the Schwarzian derivative $S_f$ of a complex polynomial $f$ and its iterates. We show that the sequence $S_{f^n}/d^{2n}$ converges to $-2(\partial G_f)^2$, for $G_f$ the escape-rate function of $f$. As a quadratic differential, the Schwarzian derivative $S_{f^n}$ determines a conformal metric on the plane. We study the ultralimit of these metric spaces.
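Here $S_f$ denotes the Schwarzian derivative and $G_f$ the escape-rate (Green's) function of the basin of infinity; assuming the standard conventions (they are not restated on this page), these are $S_f=\frac{f'''}{f'}-\frac{3}{2}\left(\frac{f''}{f'}\right)^2$ and $G_f(z)=\lim_{n\to\infty} d^{-n}\log^{+}\vert f^n(z)\vert$ for a polynomial $f$ of degree $d\geq 2$.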
Hexi Ye
Affiliation: University of Illinois at Chicago, Department of Mathematics and Computer Science, MC 249, 851 S. Morgan Street, Chicago, Illinois 60607-7045
Received by editor(s): June 17, 2011
What is a continuous path? I would like some help, because I am getting mad trying to answer the following Question: Let $X$ be a topological space, what is a continuous path in $X$? Well, maybe you're already getting nervous thinking: it's just a continuous function $\gamma:[0,1]\rightarrow X$. This definition indeed works very well for manifolds and, more generally, for spaces containing homeomorphic copies of the interval $[0,1]$, but it gets trivial and useless in all other cases, including some of great interest: graphs and, more generally, locally finite metric spaces in the discrete world, but also non-standard objects such as ${}^*\mathbb R$. At some point, I've realized a very stupid thing; namely that the gap in the classical definition of a continuous path is that the notion of continuity is imposed from outside, taking as a unit of measurement the unit interval $[0,1]$. This is quite arbitrary, isn't it? This observation was somehow revolutionary, at least for me: at that point, I closed my eyes, imagining living in a topological space, and I tried to capture a notion of continuity from inside: a natural answer is that it would sound, roughly, like: continuity is to move from one point to another one doing the shortest possible steps... This philosophical definition can be made formal for a quite general class of metric spaces (containing, for instance, all locally finite connected graphs). Example: Let $(X,d)$ be a locally finite metric space. Given $x\in X$, denote by $dN_1(x)$ the smallest closed ball about $x$ which contains at least two points. One may define a continuous path in $X$ to be a sequence of points $x_0,x_1,\ldots,x_{n-1},x_n$ such that, for all $i$, $x_i\in dN_1(x_{i-1})$ and $x_{i-1}\in dN_1(x_i)$. Following a similar idea, one is tempted to define homotopy between paths and so on. Everything works unexpectedly well as you can see, if interested, in http://arxiv.org/abs/1111.0268. As remarked by Tim Porter, similar ideas have been developed by Helene Barcelo and co-authors. What I would like to do now is to approach the problem of defining an intrinsic homology theory that might be of interest for any topological space. Subquestion: do you know if somebody tried to do something similar? In case of a negative answer to the sub-question, I would also appreciate any help finding the answer to the first question. Indeed, I am really satisfied with the locally finite case and I would like to formalize the philosophical definition: a continuous path connecting $x$ to $y$ is a way to go from $x$ to $y$ making the shortest possible steps. But it is absolutely not clear to me how to make it formal for a general topological space. Update: In case someone is interested, some of these ideas got finally accepted for publication in a paper with Jacob White and Helene Barcelo in the Bull London Math Soc. http://arxiv.org/pdf/1306.3915.pdf gn.general-topology at.algebraic-topology graph-theory Valerio Capraro $\begingroup$ Plaut and Berestovskii have written some papers that might be of interest. They are not exactly the same thing. They talk about $\epsilon$-chains and homotopy of $\epsilon$-chains, where an $\epsilon$-chain is a sequence of points where each point is within $\epsilon$ of the previous. They have some interesting results. 
$\endgroup$ – Jim Conant Nov 12 '11 at 21:08 $\begingroup$ I've thought of the following: A path from $x$ to $y$ is a set that is contains $x$ and $y$, is connected (every two nonempty open subsets that cover it have nonempty intersection), and is minimal with respect to these properties. The intuition here is that if you remove any one point from $[0,1]$, it stops being connected. This is connected to the Cech cohomology concept because a set which is connected is exactly a set which has 1-dimensional 0th Cech cohomology. $\endgroup$ – Will Sawin Nov 13 '11 at 7:52 $\begingroup$ Try looking at strong shape theory! $\endgroup$ – Tim Porter Nov 13 '11 at 12:06 $\begingroup$ I trust you are aware of Vietoris homology (Brouwer 1911, Vietoris 1927): eom.springer.de/v/v096640.htm $\endgroup$ – Alain Valette Nov 13 '11 at 12:31 $\begingroup$ @Ariyan Javanpeykar: There are plenty of ordinary paths in Berkovich spaces. $\endgroup$ – S. Carnahan♦ Nov 13 '11 at 15:16 It sounds to me like what you're looking for is something like Cech (co)homology. The idea is that you can detect what kind of "paths" there are in a space by the combinatorics of which sets in open covers have nontrivial intersections. As a simple example, you can detect that a circle has a nontrivial loop by covering it with 3 open sets $U$, $V$, and $W$, such that any two of them intersect but $U\cap V\cap W$ is empty. More precisely, given any open cover, you can construct a simplicial complex which is the "nerve" of the open cover and has the "paths" that a space having that open cover "morally" should have. Of course, you shouldn't expect a single open cover to capture all of the information you're trying to capture about a space, so you have to take some sort of limit over all open covers of your space. Taking finer and finer open covers is like taking the "shortest possible steps" that you refer to. As an example of how this might give you what you're looking for, Cech cohomology can't tell the difference between an ordinary circle and a loop built out of a topologist's sine curve, or an ordinary circle and a circle obtained by gluing together the two ends of a closed long line. It won't work for things like the hyperreals unless you restrict to some sort of internal open covers, because the hyperreals as a topological space are disconnected, and Cech cohomology detects topological connectedness (but not path-connectedness in the usual sense). And of course, if you're dealing with things like locally finite metric spaces which are discrete as spaces, there's no hope of saying anything interesting unless you endow the spaces with more structure than just a topology. As a technical point, Cech cohomology is pretty well-behaved, but if you try to do the same thing with homology you run into problems because when you take a limit over all open covers you end up taking an inverse limit of homology groups, and taking inverse limits is not exact. According to nLab there is something called strong homology which tries to remedy this which you might want to take a look at. EDIT: If you want something that can work for discrete spaces with additional structure, note that you can take the nerve of one specific covering, rather than taking a limit over all covers. For instance, I would guess that your homology theory of locally finite metric spaces is the same as the Cech homology of the cover consisting of the balls $dN_1(x)$ for all $x$. 
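To make that last suggestion concrete, a rough Python sketch (the helper names dN1_ball and nerve are just for illustration) that builds the cover by the balls $dN_1(x)$ of a finite metric space and the nerve of that cover might look as follows; the simplices are the sets of centers whose balls share a point.

from itertools import combinations

def dN1_ball(points, d, x):
    # smallest closed ball around x that contains at least two points
    r = min(d(x, y) for y in points if y != x)
    return frozenset(y for y in points if d(x, y) <= r)

def nerve(points, d, max_dim=2):
    # simplices = sets of centers whose dN1-balls have a common point
    balls = {x: dN1_ball(points, d, x) for x in points}
    simplices = []
    for k in range(1, max_dim + 2):
        for combo in combinations(points, k):
            if frozenset.intersection(*(balls[x] for x in combo)):
                simplices.append(frozenset(combo))
    return simplices

# toy usage: the 4-cycle graph with its shortest-path distance
pts = [0, 1, 2, 3]
dist = lambda a, b: min((a - b) % 4, (b - a) % 4)
print(nerve(pts, dist))

Computing the simplicial homology of the resulting finite complex is then a standard computation.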
For "nice" spaces a similar phenomenon occurs: the limit over all covers coincides with what you get from a single cover satisfying some simple condition. For example, for simplicial complexes, you can take the cover consisting of the open star of each vertex, or for a Riemannian manifold, you can take a cover consisting of geodesically convex sets. Eric WofseyEric Wofsey $\begingroup$ Eric, many thanks for the detailed answer. I didn't know about such things. Tomorrow I will have a look at them in details. $\endgroup$ – Valerio Capraro Nov 12 '11 at 23:18 $\begingroup$ Also: Čech homotopy ncatlab.org/nlab/show/%C4%8Cech+homotopy $\endgroup$ – David Roberts Nov 12 '11 at 23:57 $\begingroup$ Agreeing with David Roberts, there is a large amount of literature on this (see also shape theory and strong shape theory) plus old work by Borsuk on intrinsic homotopy (if I remember rightly). Locally finite spaces may be useful, but play down on the metric. $\endgroup$ – Tim Porter Nov 13 '11 at 13:06 $\begingroup$ Strong homology is another name (or a slight variant of) Steenrod-Sitnikov homology. It can (approximately) be defined by taking the Vietoris (or Cech) inverse system of chain complexes and then taking their homology limit. Finally you take the homology of the result. This needs the fact that the Cech nerve forms a homotopy coherent diagram indexed by the directed set of open covers. $\endgroup$ – Tim Porter Nov 13 '11 at 15:17 Eric, Tim, and David have basically given what I think is the right answer, but I want to mention the words "site" and "topos" which have surprisingly not yet occurred in this discussion, and give some more references. A site or topos is a generalization of a topological space, built on the notions of part and cover. (A topos is the true object; a site is a sort of "base" or "presentation" which generates a topos.) Roughly, a topos has a collection of parts, with containment relations between them, and a notion of when a part is covered by some collection of parts contained in it. This is sufficient for Cech-style homology and homotopy theories, and the resulting notion of "path" is indeed the sort of thing you have described: chains of parts which meet each other. Every topological space gives rise to a topos whose parts are its open subsets, where covers are unions. This is the canonical way of embedding topological spaces into toposes. (It is a full embedding for most topological spaces.) But a topological space can also give rise to other toposes. For instance, one can consider the topos whose parts are the arbitrary (not necessarily) open subtoposes of the topos above; this topos is called the "dissolution" of a space. Or going in the other direction, one can take the open parts but allow only good covers. I think this is the proper way to deal with the question of disconnectedness. One should not, I feel, expect to find any "paths" in a very disconnected object: by the very fact of its being disconnected, there should not be any ways to get from here to there. If one sees "paths" in a disconnected topological space, that suggests that there is some data other than the topology defining some other object in which the paths live. For instance, to take Eric's example, the hyperreals give rise to a topos whose parts are the internal open subsets, and this topos will (I believe) be connected. A couple of interesting papers which are specifically about paths in this context are: Kennison, What is the fundamental group? 
Moerdijk and Wraith, Connected locally connected toposes are path-connected. Like much other work in this area, they restrict to the locally connected case, but the ideas do not really require that restriction (things just get more complicated otherwise). $\begingroup$ Mike, thanks for the interesting comment. I'd just like to tell that my opinion is that it's the classical notion of continuous path that requires some extra data; i.e. the space $[0,1]$. The question is easy: suppose you live in the 3-cycle graph (or in your favourite connected graph). In your Maths, probably you cannot even define the interval $[0,1]$. What is a continuous path in your world? $\endgroup$ – Valerio Capraro Nov 18 '11 at 9:55 $\begingroup$ That's a different meaning of "extra data" than I was using, but yes, I certainly agree that the choice of [0,1] is a little bit arbitrary, and inappropriate when dealing with spaces that don't admit many maps out of [0,1]. $\endgroup$ – Mike Shulman Nov 19 '11 at 4:00 Adding into the comments by Alain Valette: ... Hence my suggestion of Strong Shape Theory. Vietoris homotopy gives one way of approaching strong shape. Again it is mentioned on the nLab. Vietoris homology still has the problem of non-exact sequences. It can be replaced by Steenrod-Sitnikov homology or Mardesic's strong homology theory. (There is also a renewal of interest in finite topological spaces, see work by Minian and Barmak. An application of similar ideas occurs through topological data analysis, see work by Carlsson et al at Stamford. This also uses a Rips complex, that is probably known to you from your general interests.) Looking at some of the other comments and answers, it may help to look at some of the ideas of graph homotopy theory that are around. These are linked via the original work of Dowker (1953) on the homology of a relation. (I can provide more indicators if you think it would help.) (Edit: The following describes a related theory: Perspectives on A-homotopy theory and its applications, Hélène Barcelo, Reinhard Laubenbacher, Discrete Mathematics 298 (2005) 39 – 61.) Alain Valette Tim PorterTim Porter $\begingroup$ Thank you, Tim. I'll tried to have a look at it this afternoon. $\endgroup$ – Valerio Capraro Nov 13 '11 at 13:36 $\begingroup$ I added one reference to the answer. (If anyone else is interested do ask as the problems in this area do look interesting and quite 'fun'.) $\endgroup$ – Tim Porter Nov 14 '11 at 14:22 Instead of going to the lake (actually the weather is not that good), I've spent all Sunday morning on your references. First of all, thank you very much everybody. Here I want to collect some comments about Cech cohomology and the approach by Berestovskii and Plaut. I have just seen that Tim Porter and Alain Valette have suggested to look at something else. My afternoon will be devoted to those references. Cech cohomology: as observed even by Eric himself, Cech cohomology does not work for disconnected spaces, so my first thought was that the answer didn't help. Indeed I am now sure (well, OK, I'm too young to be sure about something!) that there is some homology/homotopy/cohomology/whatever-theory completely general and intrinsic that is of interest (=non trivial) for any topological space. This is why I cannot accept the answer, but I give +1 because I like the point of view and I think that, at the end, the bad behavior of Cech cohomology is given by the fact that it uses open coverings. The notion of open set is too over-used (does this English word exist??) 
and sometimes one needs something different (an example is the van Kampen theorem). For instance, in a locally finite metric space every set is open and so it is clear that they are too many. So I give +1 because I want to have a closer look to Cech cohomology in order to understand is one can replace the open coverings with something that works better. Berestovskii and Plaut: At the beginning I got really scared, because I've thought they were doing the same things. But NO. I have to say that I disagree with their approach for the following reasons: besides the (important, but in this case, minor) problem that the construction depends on the radius of the entourage (so it is not clear what happens when the radius goes to zero in a locally finite metric space, or what should be the right radius to choose and so on), I thing that the problem is the definition of homotopy between $\epsilon$-chains. Recall their Definition: An $\epsilon$-chain is a finite sequence $x_0,x_1,\ldots,x_{n-1}x_n$ such that $d(x_i,x_{i-1})<\varepsilon$, for all $i$. Two $\epsilon$-chains are called homotopic equivalent if one can pass from the first to the second via a finite sequence of operation of adding/cancelling point in such a way that every intermediate step is an $\epsilon$-chain starting from $x_0$ and ending in $x_n$. One obtains a group and bla bla bla. The point is that this construction is not interesting for instance for finite graphs (one gets the usual free group generated by the missing edges of a spanning tree) What I have proposed in my preprint is the following (Update: it is turned out that (very) similar definitions have been alreadyproposed in the so-called $A$-theory (see references in Tim Porter answer or in my OT). Definition: Two continuous paths (in the sense of my OT) $x_0x_1\ldots x_{n-1}x_n$ and $y_0y_1\ldots y_{n-1}y_n$ (I can suppose that the length is equal adding some constant path), with $x_0=x_n=y_0=y_n$ are homotopic equivalent if one can find points $z_i^k$ such that the following formal matrix $$ \left( \begin{array}{ccccc} x_0 & x_1 & \ldots & x_{n-1} & x_0 \\\ x_0 & z_1^2 & \ldots & z_{n-1}^2 & x_0 \\\ \ldots & \ldots & \ldots & \ldots & \ldots \\\ x_0 & z_1^{k-1} & \ldots & z_{n-1}^{k-1} & x_0 \\\ x_0 & y_1 & \ldots & y_{n-1} & x_0 \\\ \end{array} \right) $$ verifies the property that every row and every column is a continuous path. This definition, which seems to me more natural (roughly, a continuous deformation of a path is to replace each point of the path to one of the nearest point), also works incredibly well and non trivially. For instance we (my two collaborators for the second piece, A. Gournay and T.Pillon, and I) have an example of a graph with 28 vertex whose fundamental group is $\mathbb Z_2$. I'd like to include it here (you may be interested), but I have no idea how to include a figure here. The technique/framework mentioned by Jim Conant and used by Berestovskii and Plaut goes back to the paper J. Krasinkiewicz and P. Mine, Generalized paths and pointed 1-movability For an update on this subject see http://front.math.ucdavis.edu/0706.3937 and the last section in http://front.math.ucdavis.edu/0812.1407 (which includes a simplified proof of the Krasinkiewicz-Minc result). Edit: I was writing this in rush for an airplane, and could not elaborate on what the references are about. Now that strong shape (and even related things like Cech cohomology and Steenrod-Sitnikov homology) have been mentioned by others this simplifies my job. 
What Krasinkiewicz and Minc were doing in that paper is essentially paths in the sense of strong shape. (They don't explicitly speak of "strong shape", but on the other hand it happens that papers and books that originally developed strong shape, including Tim Porter's, and most of subsequent literature under the "strong shape" brand has been incredibly focused on either categorical or general-topology aspects and didn't care to pursue any specific geometric problems, so if you're interested in any kind of substantial results on paths in the sense of strong shape, you have to look for them elsewhere!) In the above-mentioned paper, Krasinkiewicz and Minc proved the following wonderful theorem: If $X$ is a connected (metrizable) compactum that is disconnected in the sense of strong shape (that is, not all strong shape morphisms from a point into $X$ are the same) then there exist distinct strong shape morphisms (in fact, uncountably many ones) from a point into $X$ that are represented by genuine points in $X$. This may sound like it should be either trivial or wrong, but no, it's a deep geometric result. Sergey MelikhovSergey Melikhov $\begingroup$ Thank you very much, Sergey, for the references. Tomorrow I will have a look at them. $\endgroup$ – Valerio Capraro Nov 13 '11 at 2:23 $\begingroup$ Thanks for giving more complete references! Hope you are doing well, Sergey. $\endgroup$ – Jim Conant Nov 14 '11 at 3:46 It sounds to me like your inventions are related to persistent homology, developed by Weinberger, Carlsson, and others. There is an informative "What is..." article about this by Weinberger: http://www.ams.org/notices/201101/rtx110100036p.pdf. The idea is to take a discrete subset of Euclidean space and calculate its homology on all different scales essentially by covering it with balls of a given radius $R$ and analyzing what happens as $R$ varies. If your space has a cycle in it then this cycle will be detected for a certain range of values of $R$, and the assumption is that the important structures will "persist" for more values of $R$. Another idea which might be related is Roe's coarse geometry. There's a "What is..." article about this as well: http://www.ams.org/notices/200606/whatis-roe.pdf. Here the interest is in infinite discrete spaces (or really any non-compact metric space), and there is a coarse cohomology theory which detects only the large-scale geometry of a space. Paul SiegelPaul Siegel $\begingroup$ Thank you for the references. I can see right now that Roe's coarse cohomology is not what I am looking for, since that is indeed a coarse theory and I am looking for something which is very local. For instance, my fundamental group of the $n$-cicle graph is $\mathbb Z$ (for $n\geq5$). On the other hand, coarse stuff get trivial for bounded sets. Instead, the idea behind persistent homology looks like something making use of Rips complexes... I spent some time about Rips complexes and I am pretty sure that they are not very good in this case.. 
Anyway, I will have a look at the "What's" paper $\endgroup$ – Valerio Capraro Nov 20 '11 at 17:37 Another possible direction on the fundamental group(oid) is given a kind of survey in my paper `Three themes in the work of Charles Ehresmann: Local-to-global; Groupoids; Higher dimensions', Proceedings of the 7th Conference on the Geometry and Topology of Manifolds: The Mathematical Legacy of Charles Ehresmann, Bedlewo (Poland) 8.05.2005-15.05.2005, Banach Centre Publications 76, Institute of Mathematics Polish Academy of Sciences, Warsaw, (2007) 51-63. (math.DG/0602499). relating to the themes of monodromy and holonomy, and work of Jean Pradines. Here one is interested in the notion of an "iteration of local procedures" in the context of manifold theory. The question put in that paper is: Can one use these ideas in other situations to obtain monodromy (i.e. analogues of "universal covers") in situations where paths do not exist but "iterations of local procedures" do? This seems related to Valerio's question. Ronnie Brown $\begingroup$ Thanks for the interesting references. I'll have a look at them. $\endgroup$ – Valerio Capraro May 10 '12 at 10:52 There is something like Cech cohomology. It is Alexander-Kolmogorov cohomology (see Spanier, "Algebraic Topology", Alexander cohomology). Simplices in this theory are collections of near points. As I understand, it works well for locally contractible spaces (and coincides with Cech cohomology). But it seems that in your situation it works well too. Nikita Kalinin I am a complete layman here, but I found this question because I had thought about the very same idea as the OP. At first I asked myself why such a discrete object as a combinatorial simplex can describe something continuous such as a topological torus, for example. And my answer was as follows: because homology theory uses simplicial sets in general position, which means that only the intersections of the various n-dimensional simplices matter. I mean we may reduce the whole theory to an equivalent one, using 0-dimensional elements (points) and various sets which represent more complicated simplices. For example, a line segment is just a discrete set (1,2) where 1 and 2 are its ends. In this representation, a path is just an ordered family of sets in which subsequent sets have non-empty intersections. For discrete spaces there may even be restrictions on the size of such intersections: the intersection should be a set of the strictly minimal cardinality. For 2-element sets it should consist only of 1-element sets, for 3-element sets only of 2-element sets, etc. A complex in such a setting is a set consisting of several subsets of various cardinalities. For example, a (combinatorial) triangle would be: (1,2,3,(1,2),(2,3),(3,1),(1,2,3)). In such a setting, a hole in a complex is a situation where there is a "closed path" (the first and last sets are the same) but there is no set whose elements are the union of the sets constituting the path. I suppose this leads to something very similar to Čech homology, but I do not have the knowledge to decide if it is. What is interesting is that it should not be very complicated to apply it to discrete objects like graphs. One should just consider not only points but various sets of points, and for the set families building the paths, take unions of their elements (as objects of bigger cardinality, constructing something like simplices of larger cardinality inside the discrete object). 
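A quick sketch of how the basic check could look in Python (ignoring the minimal-cardinality restriction on intersections; the function name is mine, just for illustration):

def is_set_path(family):
    # consecutive sets in the ordered family must intersect
    return all(set(a) & set(b) for a, b in zip(family, family[1:]))

# walking around the boundary of the combinatorial triangle
walk = [{1}, {1, 2}, {2}, {2, 3}, {3}, {3, 1}, {1}]
print(is_set_path(walk))   # True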
Please notice that for a given discrete space (such as a graph) one may consider various subgraphs immersed inside, and ask if they are as dense as possible or have holes (sidesteps of existing graph vertices, not containing them). In such an approach, we may think about the internal geometry of subgraphs of a given graph, relating it to the base graph as the full space, with the subgraphs as objects living inside and having various characteristics. I would like to ask for information on whether anything similar was done in discrete settings, from a combinatorial point of view. kakaz
Strong dual space
In functional analysis and related areas of mathematics, the strong dual space of a topological vector space (TVS) $X$ is the continuous dual space $X^{\prime }$ of $X$ equipped with the strong (dual) topology or the topology of uniform convergence on bounded subsets of $X,$ where this topology is denoted by $b\left(X^{\prime },X\right)$ or $\beta \left(X^{\prime },X\right).$ The coarsest polar topology is called the weak topology. The strong dual space plays such an important role in modern functional analysis that the continuous dual space is usually assumed to have the strong dual topology unless indicated otherwise. To emphasize that the continuous dual space, $X^{\prime },$ has the strong dual topology, $X_{b}^{\prime }$ or $X_{\beta }^{\prime }$ may be written.
Strong dual topology
Throughout, all vector spaces will be assumed to be over the field $\mathbb {F} $ of either the real numbers $\mathbb {R} $ or complex numbers $\mathbb {C} .$
Definition from a dual system
Main article: Dual system
Let $(X,Y,\langle \cdot ,\cdot \rangle )$ be a dual pair of vector spaces over the field $\mathbb {F} $ of real numbers $\mathbb {R} $ or complex numbers $\mathbb {C} .$ For any $B\subseteq X$ and any $y\in Y,$ define $|y|_{B}=\sup _{x\in B}|\langle x,y\rangle |.$ Neither $X$ nor $Y$ has a topology, so a subset $B\subseteq X$ is said to be bounded by a subset $C\subseteq Y$ if $|y|_{B}<\infty $ for all $y\in C.$ So a subset $B\subseteq X$ is called bounded if and only if $\sup _{x\in B}|\langle x,y\rangle |<\infty \quad {\text{ for all }}y\in Y.$ This is equivalent to the usual notion of bounded subsets when $X$ is given the weak topology induced by $Y,$ which is a Hausdorff locally convex topology. Let ${\mathcal {B}}$ denote the family of all subsets $B\subseteq X$ bounded by elements of $Y$; that is, ${\mathcal {B}}$ is the set of all subsets $B\subseteq X$ such that for every $y\in Y,$ $|y|_{B}=\sup _{x\in B}|\langle x,y\rangle |<\infty .$ Then the strong topology $\beta (Y,X,\langle \cdot ,\cdot \rangle )$ on $Y,$ also denoted by $b(Y,X,\langle \cdot ,\cdot \rangle )$ or simply $\beta (Y,X)$ or $b(Y,X)$ if the pairing $\langle \cdot ,\cdot \rangle $ is understood, is defined as the locally convex topology on $Y$ generated by the seminorms of the form $|y|_{B}=\sup _{x\in B}|\langle x,y\rangle |,\qquad y\in Y,\qquad B\in {\mathcal {B}}.$ The definition of the strong dual topology now proceeds as in the case of a TVS. Note that if $X$ is a TVS whose continuous dual space separates points on $X,$ then $X$ is part of a canonical dual system $\left(X,X^{\prime },\langle \cdot ,\cdot \rangle \right)$ where $\left\langle x,x^{\prime }\right\rangle :=x^{\prime }(x).$ In the special case when $X$ is a locally convex space, the strong topology on the (continuous) dual space $X^{\prime }$ (that is, on the space of all continuous linear functionals $f:X\to \mathbb {F} $) is defined as the strong topology $\beta \left(X^{\prime },X\right),$ and it coincides with the topology of uniform convergence on bounded sets in $X,$ i.e. 
with the topology on $X^{\prime }$ generated by the seminorms of the form $|f|_{B}=\sup _{x\in B}|f(x)|,\qquad {\text{ where }}f\in X^{\prime },$ where $B$ runs over the family of all bounded sets in $X.$ The space $X^{\prime }$ with this topology is called strong dual space of the space $X$ and is denoted by $X_{\beta }^{\prime }.$ Definition on a TVS Suppose that $X$ is a topological vector space (TVS) over the field $\mathbb {F} .$ Let ${\mathcal {B}}$ be any fundamental system of bounded sets of $X$; that is, ${\mathcal {B}}$ is a family of bounded subsets of $X$ such that every bounded subset of $X$ is a subset of some $B\in {\mathcal {B}}$; the set of all bounded subsets of $X$ forms a fundamental system of bounded sets of $X.$ A basis of closed neighborhoods of the origin in $X^{\prime }$ is given by the polars: $B^{\circ }:=\left\{x^{\prime }\in X^{\prime }:\sup _{x\in B}\left|x^{\prime }(x)\right|\leq 1\right\}$ as $B$ ranges over ${\mathcal {B}}$). This is a locally convex topology that is given by the set of seminorms on $X^{\prime }$: $\left|x^{\prime }\right|_{B}:=\sup _{x\in B}\left|x^{\prime }(x)\right|$ as $B$ ranges over ${\mathcal {B}}.$ If $X$ is normable then so is $X_{b}^{\prime }$ and $X_{b}^{\prime }$ will in fact be a Banach space. If $X$ is a normed space with norm $\|\cdot \|$ then $X^{\prime }$ has a canonical norm (the operator norm) given by $\left\|x^{\prime }\right\|:=\sup _{\|x\|\leq 1}\left|x^{\prime }(x)\right|$; the topology that this norm induces on $X^{\prime }$ is identical to the strong dual topology. Bidual See also: Banach space § Bidual, Reflexive space, and Semi-reflexive space The bidual or second dual of a TVS $X,$ often denoted by $X^{\prime \prime },$ is the strong dual of the strong dual of $X$: $X^{\prime \prime }\,:=\,\left(X_{b}^{\prime }\right)^{\prime }$ where $X_{b}^{\prime }$ denotes $X^{\prime }$ endowed with the strong dual topology $b\left(X^{\prime },X\right).$ Unless indicated otherwise, the vector space $X^{\prime \prime }$ is usually assumed to be endowed with the strong dual topology induced on it by $X_{b}^{\prime },$ in which case it is called the strong bidual of $X$; that is, $X^{\prime \prime }\,:=\,\left(X_{b}^{\prime }\right)_{b}^{\prime }$ where the vector space $X^{\prime \prime }$ is endowed with the strong dual topology $b\left(X^{\prime \prime },X_{b}^{\prime }\right).$ Properties Let $X$ be a locally convex TVS. • A convex balanced weakly compact subset of $X^{\prime }$ is bounded in $X_{b}^{\prime }.$[1] • Every weakly bounded subset of $X^{\prime }$ is strongly bounded.[2] • If $X$ is a barreled space then $X$'s topology is identical to the strong dual topology $b\left(X,X^{\prime }\right)$ and to the Mackey topology on $X.$ • If $X$ is a metrizable locally convex space, then the strong dual of $X$ is a bornological space if and only if it is an infrabarreled space, if and only if it is a barreled space.[3] • If $X$ is Hausdorff locally convex TVS then $\left(X,b\left(X,X^{\prime }\right)\right)$ is metrizable if and only if there exists a countable set ${\mathcal {B}}$ of bounded subsets of $X$ such that every bounded subset of $X$ is contained in some element of ${\mathcal {B}}.$[4] • If $X$ is locally convex, then this topology is finer than all other ${\mathcal {G}}$-topologies on $X^{\prime }$ when considering only ${\mathcal {G}}$'s whose sets are subsets of $X.$ • If $X$ is a bornological space (e.g. metrizable or LF-space) then $X_{b(X^{\prime },X)}^{\prime }$ is complete. 
• If $X$ is a barrelled space, then its topology coincides with the strong topology $\beta \left(X,X^{\prime }\right)$ on $X$ and with the Mackey topology on $X$ generated by the pairing $\left(X,X^{\prime }\right).$ Examples If $X$ is a normed vector space, then its (continuous) dual space $X^{\prime }$ with the strong topology coincides with the Banach dual space $X^{\prime }$; that is, with the space $X^{\prime }$ with the topology induced by the operator norm. Conversely, the $\beta \left(X,X^{\prime }\right)$-topology on $X$ is identical to the topology induced by the norm on $X.$ See also • Dual topology • Dual system • List of topologies – List of concrete topologies and topological spaces • Polar topology – Dual space topology of uniform convergence on some sub-collection of bounded subsets • Reflexive space – Locally convex topological vector space • Semi-reflexive space • Strong topology • Topologies on spaces of linear maps References 1. Schaefer & Wolff 1999, p. 141. 2. Schaefer & Wolff 1999, p. 142. 3. Schaefer & Wolff 1999, p. 153. 4. Narici & Beckenstein 2011, pp. 225–273. Bibliography • Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. • Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. • Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. • Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. • Wong (1979). Schwartz spaces, nuclear spaces, and tensor products. Berlin New York: Springer-Verlag. ISBN 3-540-09513-6. OCLC 5126158. 
Wikipedia
Advanced Curation of Astromaterials for Planetary Science Role of Sample Return in Addressing Major Questions in Planetary Sciences Francis M. McCubbin ORCID: orcid.org/0000-0002-2101-4431, Christopher D. K. Herd, Toru Yada, Aurore Hutzler, Michael J. Calaway, Judith H. Allton, Cari M. Corrigan, Marc D. Fries, Andrea D. Harrington, Timothy J. McCoy, Julie L. Mitchell, Aaron B. Regberg, Kevin Righter, Christopher J. Snead, Kimberly T. Tait, Michael E. Zolensky & Ryan A. Zeigler Space Science Reviews volume 215, Article number: 48 (2019) Just as geological samples from Earth record the natural history of our planet, astromaterials hold the natural history of our solar system and beyond. Astromaterials acquisition and curation practices have direct consequences on the contamination levels of astromaterials and hence the types of questions that can be answered about our solar system and the degree of precision that can be expected of those answers. Advanced curation was developed as a cross-disciplinary field to improve curation and acquisition practices in existing astromaterials collections and for future sample return activities, including meteorite and cosmic dust samples that are collected on Earth. These goals are accomplished through research and development of new innovative technologies and techniques for sample collection, handling, characterization, analysis, and curation of astromaterials. In this contribution, we discuss five broad topics in advanced curation that are critical to improving sample acquisition and curation practices, including (1) best practices for monitoring and testing of curation infrastructure for inorganic, organic, and biological contamination; (2) requirements for storage, processing, and sample handling capabilities for future sample return missions, along with recent progress in these areas; (3) advancements and improvements in astromaterials acquisition capabilities on Earth (i.e., the collection of meteorites and cosmic dust); (4) the importance of contamination knowledge strategies for maximizing the science returns of sample-return missions; and (5) best practices and emerging capabilities for the basic characterization and preliminary examination of astromaterials. The primary result of advanced curation research is to both reduce and quantify contamination of astromaterials and preserve the scientific integrity of all samples from mission inception to secure delivery of samples to Earth-based laboratories for in-depth scientific analysis. Advanced curation serves as an important science-enabling activity, and the collective lessons learned from previous spacecraft missions and the results of advanced curation research will work in tandem to feed forward into better spacecraft designs and enable more stringent requirements for future sample return missions and Earth-based sample acquisition. Human fascination with the night sky and with celestial objects that fall to the Earth from the sky is as old as our species, and use of these astromaterials as a natural resource occurred at least as early as the Bronze Age (Jambon 2017; McCoy 2018; McCoy et al. 2017). However, the initial curation of astromaterials as objects of scientific interest to understand our universe began more recently (Marvin 2006) and in earnest with the curation of meteorite samples in museums starting in the year 1748 at the Natural History Museum Vienna (Brandstätter 2006). 
Meteorites have remained objects of fascination by scientists and the public alike with the establishment of many meteorite collections across the world. Meteorite recovery and curation practices vary widely and are highly dependent on many factors, including the knowledge and resources of the finder and the financial and technical support available for the collection in which the sample is curated. The scientific importance of the sample can also be a determining factor, but this is predicated on the aforementioned factors. All meteorites, regardless of how they were handled from recovery to curation, have experienced uncontrolled entry and exposure to the terrestrial environment, including, at minimum, the terrestrial atmosphere and the ground. This exposure results in terrestrial contamination, the amount of which is typically dependent on the physicochemical properties of the meteorite, the conditions at the fall site, and the amount of exposure time to the terrestrial environment. Consideration of these factors can also be determining factors in how a meteorite sample is curated. An overview of meteorite collections, their contents, and curation practices is available in McCall et al. (2006). Until the 1960's, delivery of all astromaterials to Earth were unplanned events that required reactionary responses for recovery and curation. However, with the initiation of the Apollo program, direct return of pristine astromaterials from another body became possible, and with it, established the need to design a facility to keep those samples in a pristine state for an indefinite period of time. Planning for the Lunar Receiving Laboratory (LRL) began in 1964, and the facility was completed in 1967 (Calaway et al. 2017; McLane et al. 1967). As part of this planning, stringent protocols in the handling, storage, and processing of samples were developed. These protocols ensured that portions of the samples remained pristine or as close to an "as returned" state as possible in perpetuity to enable future scientific discoveries from the returned samples. The delivery of Apollo 11 samples to Earth occurred on July 24, 1969 at 12:50 EDT, four days after the first successful human landing on the Moon. This round-trip journey marked a transformative milestone in human history and as the first sample return mission, provided the initial fuel to drive the burgeoning field of planetary sample science. The planning process for curation prior to the return of the Apollo 11 samples set the precedent that curation involvement and planning begins at the inception of a sample return mission, and this founding principle has guided sample return missions subsequent to Apollo (e.g., Allen et al. 2011; Yada et al. 2014). There have been a total of 13 successful sample return missions, including six manned Apollo missions from NASA, three unmanned lunar sample return missions from the Union of Soviet Socialist Republics (USSR), the NASA Long Duration Exposure Facility that exposed various materials to the low-Earth orbit environment for approximately 6 years, the NASA Genesis mission that returned solar wind from the Earth-Sun Lagrange point 1 (L1), the NASA Stardust mission that returned particles embedded in aerogel from the coma of Comet Wild 2 and from interstellar space, and JAXA's Hayabusa mission that returned material from the surface of asteroid Itokawa (Table 1). 
In addition, there are two sample return missions in flight, including JAXA's Hayabusa2 mission that will return samples from the asteroid Ryugu and NASA's OSIRIS-REx mission that will return material from the asteroid Bennu (Table 1). Further details about each of these missions are provided in Table 1. With each successive sample return mission comes with it an important set of lessons learned that are used to inform subsequent sample return missions, and these lessons learned extend to curation standards and practices. Table 1 Planetary sample return missions The Apollo program offered the first set of lessons learned and set forth the modern era of curation practices for astromaterials from the solar system. With the exception of USSR Luna missions, all sample return missions in the last two decades have built upon the legacy of Apollo. While recent missions have contributed to lessons learned, the majority of lessons learned and established practices can be linked to Apollo. The Apollo program actively sought out a wide range of scientists and eventually levied the scientific community at large to influence mission conception and design. Mission decisions and laboratory research on returned samples, at least peripherally, were focused substantially in maximizing science obtained from samples in laboratory research. The majority of these sample scientists were found in the field of geological sciences. The management environment was an integration of human spaceflight mission objectives, engineering constraints, sample scientists, and those responsible to prevent back contamination of the Earth. In this management structure, conflicts routinely arose and were not only turf battles, but were rooted in basic technical conflicts to balance crew safety, lunar sample preservation, and potential hazard containment for unknown biological pathogens. Since Apollo was a series of missions, it was possible to improve sampling hardware and laboratory handling devices using experience and samples from the lunar surface. For example, regolith drive tube function was greatly improved through redesign for Apollo missions 15–17 to allow deep penetration with minimal distortion of stratigraphy. Knowledge gained from examination of the first samples (Apollo missions 11–14) allowed the switch from a high-vacuum gloved handling environment to pure gaseous nitrogen positive pressure gloveboxes, which better preserved sample cleanliness and ease of use. A mission series like Apollo allows fine tuning of sample collection and returned sample handling as knowledge is acquired. Building upon Apollo and later sample return missions, a series of lessons learned and best practices for future sample return missions were developed and listed as follows: World-class scientific expertise: Integration of planetary sample scientists as advisors on science issues through formal organizations such as the historical Lunar Sample Analysis Planning Team (LSAPT) and Lunar and Planetary Science Team (LAPST) and as well as today's Curation and Analysis Planning Team for Extraterrestrial Materials (CAPTEM). CAPTEM currently presents findings to NASA on sample allocations ensuring best science and fair access to samples, current curation facilities, and inspection of laboratory operations, capabilities, capacity needs, and staffing. CAPTEM also provides findings for publicizing sample characterization information and service to the community. 
In addition, CAPTEM provides NASA with findings on design review of sample receiving and curation facilities as well as material restrictions/suggestions to preserve science value of samples. The integration of planetary science and geology training for astronauts, mission managers, and engineers involved in sample return missions. The integration of sample scientists into landing site selection, traverse planning, and sample acquisition. The integration of sample scientists into mission control operations and advisors during missions. The integration of Earth receiving and curation operations personnel into mission conception and engineering spacecraft design is critical for any sample return mission. Selection of materials that have low to zero particulate shedding mechanical properties for spacecraft, primary sample containment, handling, and storage equipment to preserve sample integrity. Selection of materials that have low to zero outgassing mechanical properties for spacecraft, primary sample containment, handling, and storage equipment to preserve sample integrity. Selection of a diversity of materials for primary sample containment, handling, and storage equipment to enable scientific investigations of the entire periodic table, organic compounds, and biological matter. Sample return missions should establish a concept of sample segregation for primary mission goals (e.g., segregation of samples in different containment/isolation used for inorganic, organic, and biological investigations as well as focused goals of the mission). Sample acquisition and containment must always focus on prohibiting cross-contamination and preservation of the scientific integrity of each sample. The integration of curation, proper material selection, and cleaning into mission contamination control requirements and implementation during Assembly, Test, and Launch Operations (ATLO) is critical for sample return. Use of inert and/or vacuum environments or environments close to native collection environments for processing and storage of astromaterials. Develop standard practices to mitigate contamination from terrestrial atmosphere, pressures, and temperatures. Use of environmental monitoring methods, cleanroom technology, and biological safety isolation to maintain desired processing and storage environments. Lessons learned not only inform our best practices, but they also help to identify strategic knowledge gaps that require new research to fill. Furthermore, if we look only at improving upon our current curation capabilities, we will not be prepared when returned samples require care that is very different from those within our current collections. At present, most returned samples are geological in nature, with the exception of the Genesis solar wind atoms that are implanted within a number of high purity material substrates. Most of the samples are kept close to room temperature and, when kept in the pristine environments of a clean laboratory, will maintain their fidelity indefinitely. However, future sample return missions could bring back samples that require storage and handling conditions outside of current capabilities, including gases, liquids, ices, or biological materials. To successfully curate these sensitive materials also requires new research, and we describe here a field of research that we refer to as advanced curation. 
Advanced Curation is a cross-disciplinary field that seeks to improve curation practices in existing astromaterials collections, including meteorite and cosmic dust samples that are collected on Earth. Specifically, advanced curation has two primary goals that include (1) expansion of the sample processing and storage capabilities of astromaterials facilities to prepare for future sample return missions and Earth-based collection of astromaterials and (2) to maximize the science returns of existing astromaterials sample collections. These goals are accomplished through research and development of new innovative technologies and techniques for sample collection, handling, characterization, analysis, and curation of astromaterials. In addition, advanced curation includes testing and evaluation of new technologies and operational procedures for future sample return missions through human and robotic analog studies. Here we outline best practices and procedures and highlight new results, capabilities, and ongoing activities in the field of advanced curation of astromaterials. In particular, we outline (1) the best practices for monitoring and testing of curation infrastructure for contamination, (2) the development of new storage, processing, and sample handling capabilities, (3) the development and improvement of new astromaterials acquisition capabilities on Earth (i.e., the collection of meteorites and cosmic dust), (4) the importance of contamination knowledge strategies for maximizing the science returns of sample-return missions, (5) best practices and emerging capabilities for the preliminary examination and initial characterization of astromaterials, and finally (6) a summary of the biggest challenges that lie ahead as we look toward future sample-return initiatives. Monitoring and Testing of Curation Infrastructure All sample return curation facilities are designed and built to meet specific controlled environment and cleanliness standards for the curated samples. Curation infrastructure is defined as all engineering systems that control the sample's storage and processing environment. This definition incorporates brick and mortar, temporary, modular, and mobile facilities. In addition, specialized equipment is included such as isolation chambers, gloveboxes, and desiccators that have the ability to alter the atmospheric chemistry, temperature, and pressure of the environment. During the Apollo program, curation infrastructure borrowed many innovative technologies from handling radioactive materials and biological quarantine practices. Today, curation infrastructure is derived from many industries including the nuclear, biotechnology, pharmaceutical, and semiconductor industries (USP 2013; Whyte 2001; Ramstorp 2000). Methods and techniques are either borrowed, augmented, or invented to maintain the controlled environment to mitigate terrestrial cross-contamination. Contamination covers any element that could compromise sample integrity. To quote the definition of pristine from Dworkin et al. (2018), it means that "no foreign material is introduced to the sample in an amount that hampers the ability to analyze the chemistry and mineralogy of the sample". While sample return missions designate contamination limits on specific elements and compounds at time of launch with focused science goals, samples are effectively allocated over time to study everything on the periodic table. 
Therefore, the implementation of curation infrastructure should be mindful that everything could be a contaminant to some research group. Modern cleanroom facilities have substantial infrastructure footprints that require continual monitoring to ensure they operate within the defined strict contamination control guidelines. This requires continuous monitoring and testing of the labs to verify that the sample processing environments remain clean from the standpoint of inorganic, organic, and biological contamination. As it is unrealistic to eliminate all contamination, careful monitoring must be conducted and contamination knowledge maintained. To this end, curation laboratories that house astromaterials have developed numerous protocols and methods to monitor curation facilities, and we outline those practices below. Real-Time Continuous Monitoring and Testing of Curation Cleanroom Laboratories Cleanrooms are specialized controlled environments that must be continually monitored to verify whether they are working to defined parameters and specifications. The International Organization for Standardization (ISO) has developed fundamental standards for cleanrooms, namely ISO 14644. Curation cleanroom laboratories follow this standard as well as many adopted recommended practices from several industries (e.g., IEST, SEMI, GSA, etc.). For curation facilities, cleanroom measurements are regularly made to ensure that the heating, ventilation, and air conditioning (HVAC) system is creating the appropriate cascade of positive or negative pressure and that Fan Filter Units (FFU), in conjunction with the HVAC system, are keeping airborne particle levels within the accepted limits for the planned ISO class. Temperature and humidity are also kept within pre-specified limits within the intended operational parameters of the HVAC system. Ideally, real-time remote monitoring can track airborne particulates, room-to-room differential pressures, temperature, humidity, and HVAC operations. Remote airborne particle counters have either internal or external pumps with a flow rate of 0.1 CFM (2.83 LPM) or 1.0 CFM (28.3 LPM) dependent on ISO Class and desired statistics. Many of them can output up to 6 channels of simultaneous data within the range of 0.3–25.0 μm. For ISO Class 4 and below, a dedicated 0.1 μm particle counter is desired to improve particle count statistics. While real-time remote monitoring is ideal, hand-held manual particle counters are sometimes used for spot checking spaces and annual ISO Class audits. For ISO Class 5 and above, these handheld particle counting instruments are typically set up for a 2 minute measurement with a total sampling volume of 5.68 L and particle channels set at 0.3, 0.5, 0.7, 1.0, 5.0, and 10.0 μm. In lieu of real-time continuous remote monitoring, weekly particle counts of key areas in all curation labs are desirable, with full ISO audits conducted annually or biannually. Curation cleanroom laboratories primarily use a positive pressure differential barrier to reduce contamination. A pressure differential barrier is based on the concept of using a positive pressure air flow cascade from a cleaner zone towards a less clean zone as a first line of defense to prevent cross-contamination between two adjacent spaces. The pressure differential should be of significant magnitude and stability to prevent any reversal of air flow between barriers, including when barrier thresholds are crossed and/or doors are opened. 
However, the pressure differential should not be too high as to create turbulent air flow that could compromise the clean zone. In addition, too high of pressure between zones can also prevent doors from opening. For example, at 0.10 inH2O (inches of water), a \(3 \times 7\) ft. door requires 11 lbs. of force to open and close. Furthermore, this pressure results in unwanted turbulent air flow. ISO 14644-4, the design, construction, and start-up of cleanrooms and associated controlled environment, contains the international standard for cleanroom air-flow monitoring. ISO 14644-4 Section A.5.3 states that the pressure between clean zones should be set at: \(\Delta P = 0.02\) to 0.08 inH2O (5 to 20 Pa). The cleanroom technology literature generally recommends a pressure differential of 0.04 inH2O (10 Pa) between two cleanrooms and a pressure differential of 0.06 inH2O (15 Pa) between the cleanroom and an unclassified room (Sakraida 2008; Whyte 2001). Whyte (2010) discusses the reason for ISO 14644-4 acceptable minimum of 0.02 inH2O (5 Pa) pressure between adjacent rooms. This acceptable minimum was established for processing facilities that handle products that can be adversely affected from greater pressures. These low pressure differentials can sometimes be found in long tunnels between processing cleanrooms that contain air flow sensitive products. Whyte (2010) further discusses if 0.02 inH2O (5 Pa) must be used; confirmation of the air flow direction must be verifiable with routine observable smoke flow tests (assuming such tests would not be a source of contamination). Sakraida (2008) discusses recent experimental studies that have tested the optimal pressure differential between clean zones. Pressure differentials between 0.03 to 0.05 inH2O were determined to be optimal for mitigating cross-contamination. The study further suggested that clean zones with pressures above 0.05 inH2O showed little increased benefit to mitigate contamination compared to increased energy costs of operating the air handling unit. Based on ISO 14644-4 standards and available cleanroom technology literature, astromaterials curation laboratories should ideally maintain ≥ 0.05 inH2O between interior "dirty" hallways to laboratory anterooms and a minimum of 0.03 to 0.05 inH2O in most adjacent rooms between anteroom and main laboratory. For primary astromaterials storage areas and processing laboratories, ideally 0.05 to 0.08 inH2O should be maintained to mitigate the long-term infiltration of contaminates. However, it is important to note that higher pressures may be desired to create a buffer to mitigate the risk of dropping below 0.05 inH2O based on air flow stability from the HVAC and laboratory layout. Real-time continuous remote monitoring is common for modern cleanrooms with a desired differential pressure accuracy of about \(\pm 0.001\) inH2O or better. For older cleanroom laboratories, manual magnehelic differential pressure gauges are sometimes still used for monitoring differential pressures. In addition, annual or biannual differential pressure audits are conducted between each room doorway threshold with a handheld manometer and data placed on a building map to verify proper cascade of pressures. HVAC ON/OFF and velocity (\(\mbox{m}/\mbox{s}\)) are continually monitored in real-time. The data displayed are also used to check air changes per hour towards the as-built of the cleanroom and ISO standards. The FFUs are biannually or annually checked to be running at \(90 \pm 10\) fpm. 
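Two of the figures quoted in the preceding paragraphs are easy to sanity-check: the force needed to open a door against a given pressure differential, and the airborne particle concentration limits verified during ISO class audits. The short Python sketch below reproduces both; the unit conversions are standard, the class-limit formula is the commonly cited ISO 14644-1 expression, and the function names and example values are illustrative only.

# Back-of-the-envelope checks for the cleanroom figures quoted above.
IN_H2O_TO_PA = 249.1        # 1 inch of water column in pascals
FT2_TO_M2 = 0.09290304      # square feet to square metres
N_TO_LBF = 1 / 4.448        # newtons to pounds-force

def door_force_lbf(dp_in_h2o, width_ft, height_ft):
    # Force exerted on a door of the given size by a pressure differential,
    # ignoring friction and hinge geometry.
    dp_pa = dp_in_h2o * IN_H2O_TO_PA
    area_m2 = width_ft * height_ft * FT2_TO_M2
    return dp_pa * area_m2 * N_TO_LBF

def iso_class_limit(iso_class, particle_size_um):
    # Maximum permitted airborne concentration (particles per cubic metre)
    # of particles >= the given size, per the ISO 14644-1 class formula.
    return 10 ** iso_class * (0.1 / particle_size_um) ** 2.08

print(f"Door force at 0.10 inH2O: {door_force_lbf(0.10, 3, 7):.1f} lbf")  # ~10.9 lbf
print(f"ISO 5 limit at 0.5 um: {iso_class_limit(5, 0.5):,.0f} per m^3")   # ~3,520
print(f"ISO 7 limit at 0.5 um: {iso_class_limit(7, 0.5):,.0f} per m^3")   # ~352,000

The computed force of roughly 11 lbf matches the figure quoted above for a \(3 \times 7\) ft. door at 0.10 inH2O.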
While FFUs are typically not monitored in real-time, this is an important routine check to assess failing blower motors and the efficiency of the ULPA or HEPA filters to determine when they need to be replaced. Electrostatic charging and discharging in curation laboratories has the potential to cause damage to samples and equipment. In addition, electrostatic discharges are a serious safety hazard to laboratory personnel. Most curation cleanrooms maintain a temperature between 24 to \(15\ ^{\circ}\mbox{C} \pm 1.0\ ^{\circ}\mbox{C}\) and relative humidity (RH%) of <65% to \({>}35\% \pm 1.0\%\) RH. These ranges are based on ISO 14644-4 and ISO 14644-5 standards and are only for laboratory environments and do not reflect the environment of containment, such as in gloveboxes where moisture (H2O) is commonly measured below 1 ppm. A deviation in any of these parameters or over a certain threshold (per curation protocol specific to the collection) triggers an investigation to understand the source of the problem and mitigate any faults. In case the issue cannot be resolved in a timely manner, samples are securely placed into storage and work stops in the lab, especially for samples processed outside of gloveboxes. Real-Time Continuous Monitoring of Curation Infrastructure Systems Inert Environments Most pristine astromaterials benefit from not being stored and processed in terrestrial atmosphere. Since Earth's atmosphere is an oxidizing environment, preservation of astromaterials are preferred to be placed in an indigenous, vacuum, or inert environment. Most astromaterials on Earth are stored and processed in an inert gas such as nitrogen, argon, or helium, with the exception of JAXA's vacuum receiving glovebox used for the Hayabusa mission. Of these three inert gases, nitrogen is the most cost effective and is often chosen over argon and helium for routine storage. However, nitrogen analysis of astromaterials samples are compromised by processing in nitrogen, so nitrogen is not used exclusively. At NASA Johnson Space Center (JSC), building 31 and 31N has a dedicated 15000 gallon liquid nitrogen (LN2) tank and tank farm that converts high purity LN2 to gaseous nitrogen (GN2) for the entire building infrastructure. This nitrogen gas system provides an inert environment for processing and storing all NASA extraterrestrial sample collections where gloveboxes and desiccators consume ∼3500 scfh of GN2. After gas production, the GN2 is filtered for particulates by the use of sintered 316 stainless steel filters (1 micron filtration at the tank farm and 3 nm point-of-use filters connected to all devices). In addition to 3 nm particulate filtration, the Genesis lab uses point of use Pall gas purifiers that reduces any H2O, CO2, O2, and CO in the GN2 to \(< 1\) ppb. The LN2 is a modified Grade C per MIL-PRF-27401G [LN2 purity 99.995%; H2O <10 ppm; Total Hydrocarbons as CH4 <1.0 ppm; \(\mbox{O}_{2} <10\) ppm; \(\mbox{H}_{2} <10\) ppm; Ar <20 ppm; CO2 <10 ppm; CO <10 ppm; and particulates \({<}1.0~\mbox{mg}/\mbox{L}\)]. LN2 is delivered to JSC weekly and the Curation Office periodically tests the purity of the liquid nitrogen beyond the NASA contract audits. For periodic sampling of the LN2, a cryogenic liquid sampler is connected directly to the LN2 tanker truck with the sampler hose. The LN2 sample is taken to an outside laboratory for analysis. The boil-off of this LN2 at the tank farm produces high purity gaseous nitrogen (GN2). 
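As a rough cross-check on the supply figures above, the following sketch estimates how long a full 15000 gallon LN2 tank could feed the quoted ∼3500 scfh glovebox and desiccator draw. The liquid-to-gas expansion ratio of roughly 695:1 at room temperature, the assumption of a full tank, and the neglect of boil-off losses and other building loads are assumptions made here, not figures from the text.

# Rough endurance estimate for the LN2 tank described above.
GAL_TO_L = 3.785          # US gallons to litres
EXPANSION_RATIO = 695     # litres of room-temperature GN2 per litre of LN2 (approximate)
L_TO_FT3 = 0.0353147      # litres to cubic feet

tank_liquid_l = 15_000 * GAL_TO_L
gas_ft3 = tank_liquid_l * EXPANSION_RATIO * L_TO_FT3
draw_scfh = 3_500         # quoted glovebox/desiccator consumption

hours = gas_ft3 / draw_scfh
print(f"Approximate gas yield: {gas_ft3:,.0f} scf")
print(f"Endurance at {draw_scfh} scfh: {hours:,.0f} h (~{hours / 24:.0f} days)")

The result, on the order of two weeks, is consistent with the weekly LN2 deliveries mentioned above.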
JSC currently tests the purity of the delivered GN2 by conducting airborne molecular organic sampling and SEM triage of inorganic particulates captured in 3 nm sintered stainless steel filters. Adsorbent sample tubes are used for sample collection and sent outside to Balazs Nanoanalysis for TD-GC-MS analysis. GN2 results routinely show no particulate infiltration past the filters, and organic compound and hydrocarbon loads are below the reporting limit of \(< 0.1~\mbox{ng}/\mbox{L}\) for >C7. The GN2 is also tested monthly for the nitrogen isotopic ratio in a Finnigan MAT 253 IR-MS to ensure that no fractionation occurs over time or within the line. K-bottles of GN2, Ar, and He are also supplied at high purity research grade when required for certain processing activities or experiments. For example, the Subzero Facility for Curation of Astromaterials at the University of Alberta (see Cold Curation section) uses high-purity (99.998%) Ar as a source, which is then further refined using a purification system to bring oxygen (O2) and moisture (H2O) levels to <0.1 ppmv (Herd et al. 2016). High Purity Cleaning Agents Cleaning curation sample handling tools, containers, and other equipment (such as gloveboxes, isolation chambers, and desiccators) is required for the curation of astromaterials. Precision cleaning is typically required, where equipment is cleaned to a specified cleanliness and the cleanliness is measured and verified to a standard. These precision cleaning facilities do not have a small footprint and use substantial consumables and equipment for operations. During final precision cleaning, specialized equipment is needed to purify the aqueous cleaning solutions. Historically, Apollo used Freon 113 as the final cleaning agent. The Freon 113 was recycled in-house by distillation to achieve the required high purity. Today, NASA JSC uses ultrapure water (UPW) as the final cleaning agent, which requires a substantial initial investment (>$3M USD) and monthly maintenance costs. For JSC, UPW is not only used for precision cleaning, but is also used to decontaminate Genesis solar wind materials contaminated by macro particles during the hard landing (see Genesis section). The UPW purity is maintained and monitored in continuously flowing production lines. The JSC UPW plant produces 10 gallons/minute of UPW serving 5 laboratories throughout the building within a continuous flowing final loop connected to a 1000 gallon supply tank. Future upgrades to the system will increase the capacity to a 5000 gallon tank producing 15 gallons/minute serving 7 laboratories. Once UPW leaves the final flowing loop, within <5 seconds, CO2 and other compounds in the air quickly dissolve into the highly deionized water and resistivity is immediately lowered from ∼18.18 M\(\Omega \)-cm to <1.0 M\(\Omega \)-cm. Therefore, UPW cannot be stored or transported in containers for use, and UPW must be used directly from the flowing final loop for the maximum cleaning effectiveness. The UPW system is outfitted with continuous real-time monitoring of critical components of the system as well as final water quality. The system monitors flow rate, pressure, resistivity, conductivity, temperature, particulates, total organic carbon, and tank levels. The UPW system conforms to ASTM D 5127-13, Standard Guide for Ultra-Pure Water Used in the Electronics and Semiconductor Industries, and produces E-1.1 or better quality of water with a resistivity of 18.18 M\(\Omega \)-cm and total organic carbon (TOC) between 1 and 3 ppb. 
The quality of the water is routinely tested at least once a year or more for the following: (1) Anions by IC ranging from \(> 0.05\) to 0.02 ppb (\(\upmu \mbox{g}/\mbox{L}\)) of Fluoride (F−), Chloride (Cl−), Nitrite (NO2−), Bromide (Br−), Nitrate (NO3−), Phosphate (\(\mbox{HPO}_{4} ^{2-}\)), and Sulfate (\(\mbox{SO}_{4} ^{2-}\)); (2) Monovalent & Divalent Cations by IC ranging from \(> 0.02\) to 0.01 ppb (\(\upmu \mbox{g}/\mbox{L}\)) of Lithium (Li+), Sodium (Na+), Ammonium (NH\(_{4} ^{+}\)), Potassium (K+), Magnesium (Mg2+), and Calcium (Ca2+); (3) 30 elements Ultra Low Level in UPW by ICP-MS ranging from \(> 10\) to 0.02 ppt (\(\mbox{ng}/\mbox{L}\)) of Aluminum (Al), Antimony (Sb), Arsenic (As), Barium (Ba), Bismuth (Bi), Boron (B), Cadmium (Cd), Calcium (Ca), Chromium (Cr), Cobalt (Co), Copper (Cu), Gallium (Ga), Germanium (Ge), Iron (Fe), Lead (Pb), Lithium (Li), Magnesium (Mg), Manganese (Mn), Mercury (Hg), Molybdenum (Mo), Nickel (Ni), Potassium (K), Silver (Ag), Sodium (Na), Strontium (Sr), Tin (Sn), Titanium (Ti), Tungsten (W), Vanadium (V), and Zinc (Zn); (4) Low-level Dissolved Silica at \(> 0.1\) ppb (\(\upmu \mbox{g}/\mbox{L}\)); (5) Bacteria-ASTM Method-F1094—87 48 Hr Incubation reported in \(>1\) Bacteria per 100 mL cfu. Gloveboxes and Desiccators The inert environments of gloveboxes and desiccators that house astromaterials in storage or during processing should also be monitored. These environments are typically monitored continuously in real-time for their pressure, temperature, and known contaminates. For Apollo lunar material stored and processed in inert GN2, as well as the Subzero Facility used for processing Tagish Lake and other pristine astromaterials (Herd et al. 2016), gloveboxes are continuously monitored for O2 and H2O at a resolution of ±1 ppm. For the lunar sample collection at JSC, these environments are required to be at 1 inH2O positive pressure, room temperature, <25 ppm of O2, and <50 ppm of H2O; but actual achievable can be <1.0 ppmv for H2O and <15 ppmv for O2 with the current system. It should be noted that the vast majority of H2O and O2 levels in gloveboxes do not originate from the GN2 supply lines, but from the isolator gaskets and gloves through molecular infiltration of terrestrial atmosphere even under 1.0 inH2O positive pressure. Inorganic and Organic Testing of Curation Clean Labs Since 1998, the NASA JSC Curation Office has contracted Air Liquide Balazs Nanoanalysis to analyze airborne molecular inorganic and organic contaminates in cleanrooms and laboratory suites (Calaway et al. 2014). Following sampling protocols developed for the semiconductor industry, vertical exposure of \(8''\) and \(6''\) diameter high purity silicon semiconductor wafers are exposed for 24-hours on a work surface or inside gloveboxes to better understand the airborne molecular contamination (AMC). The AMC data is also used to calculate the rate of deposition of surface molecular contamination (SMC). The inorganic and organic AMC for cleanroom monitoring is reported using ISO 14644-8 Classification of Air Cleanliness by Chemical Contamination (ACC) and the SMC for ISO 14644-10 Classification of Surface Cleanliness by Chemical Concentration. For routine inorganic lab and glovebox monitoring, pre-cleaned \(8''\) silicon wafers are packaged in two separate polypropylene wafer carriers; one for sample exposure and one for control, which is not opened. 
After a 24 or 48 hour vertical exposure, Vapor Phase Decomposition Inductively Coupled Plasma Mass Spectrometry (VPD ICP-MS) is conducted at Balazs laboratories in Fremont, CA. The VPD-ICP-MS analyses report 35 elements (Al, As, B, Ba, Be, Ca, Cd, Ce, Co, Cr, Cu, Fe, Ga, Ge, Hf, In, K, La, Li, Mg, Mn, Mo, Na, Ni, Pb, Sb, Sn, Sr, Ta, Ti, W, V, Y, Zn, and Zr) with reporting limits ranging from \(10^{8}\) to \(10^{10}~\mbox{atoms}/\mbox{cm}^{2}\). For routine organic lab and glovebox monitoring, two sets of prebaked \(8''\) silicon wafers are sandwiched together and triple-wrapped in baked-out aluminum foil; two for sample exposure and two for control, which are not opened. After a 24 or 48 hour vertical exposure on an aluminum stand, Thermal Desorption Gas Chromatography Mass Spectrometry (TD-GC-MS) is conducted at Balazs laboratories. The TD-GC-MS measures organic compounds from C6 to C28 with a reporting limit of \(0.1~\mbox{ng}/\mbox{cm} ^{2}\). In addition to organic wafer exposure, which collects airborne molecular and particulate contaminants well, proprietary air adsorbent tests are routinely conducted to better understand hydrocarbon and volatile organic compound (VOC) load in cleanroom air or glovebox gaseous nitrogen environments. This test is implemented with an adsorbent tube, exposed to the cleanroom or glovebox, with a pump running at \(100~\mbox{mL}/\mbox{min}\) for 6 hours. The adsorbent tube is analyzed using the same TD-GC-MS method as the organic wafer, but with a reporting limit of \(0.1~\mbox{ng}/\mbox{L}\). Besides these traditional methods of monitoring, the JSC Curation Office also employs the use of optical microscopy and Scanning Electron Microscopy (SEM) as a basic method of direct analysis for inorganic and organic contaminants for the cleanroom laboratory and infrastructure. Cleanroom construction materials, surfaces, sample handling tools, containers, and unknown visible material are analyzed directly or with tape-pulls or polyester wipes. Optical microscopy and SEM typically are used as an initial screening before using other methods of analysis. The following methods have been used in the past at the NASA JSC Curation office, on an as-needed basis, on witness plates, test coupons, Millipore filters, and other material samples: (1) Optical Stereomicroscopy/Microscopy for macro particulate/other contamination, (2) FEG-SEM/EDX for micro particulate identification, (3) FT-IR and Raman Spectroscopy for surface contamination, (4) XPS for complete surface/thin-films/oxidation, (5) LA-HR-ICP-MS for gross surface inorganics, (6) VPD-HR-ICP-MS for molecular airborne inorganics, (7) TD-GC-MS with GL Sciences SWA-256 wafer analyzer for molecular airborne organics/outgassing, (8) DART-qTOF-MS for gross surface organics, (9) LC-MS for amino acids, and (10) AFM (Atomic Force Microscopy) for surface roughness/thin-films/cleaning changes. Although not continuously monitored, the Subzero Facility for the Curation of Astromaterials used solid phase microextraction (SPME) fiber GC-MS methods to characterize the glovebox atmosphere during commissioning (Herd et al. 2016); this method shows potential for use in continuous monitoring, although its use requires the assessment and selection of appropriate SPME fibers for the airborne organic compounds of interest. 
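In practice, the continuous monitoring described in this section reduces to comparing telemetry against collection-specific limits and flagging excursions. A minimal sketch of that logic is given below, using the lunar glovebox limits quoted earlier (1 inH2O positive pressure, <25 ppm O2, <50 ppm H2O); the data structures and function names are illustrative assumptions rather than the actual JSC monitoring software.

# Minimal limit-checking sketch for glovebox telemetry.
GLOVEBOX_LIMITS = {
    # parameter: (minimum, maximum); None means unconstrained on that side
    "o2_ppm": (None, 25.0),
    "h2o_ppm": (None, 50.0),
    "pressure_in_h2o": (1.0, None),
}

def out_of_spec(reading, limits):
    # Return a list describing any monitored parameters outside their limits.
    flagged = []
    for name, (lo, hi) in limits.items():
        value = reading.get(name)
        if value is None:
            flagged.append(f"{name}: no reading")
        elif (lo is not None and value < lo) or (hi is not None and value > hi):
            flagged.append(f"{name}: {value} outside limits ({lo}, {hi})")
    return flagged

# Example telemetry snapshot; a real system would stream sensor readings
# continuously, as described in the text.
snapshot = {"o2_ppm": 14.2, "h2o_ppm": 63.0, "pressure_in_h2o": 1.1}
print(out_of_spec(snapshot, GLOVEBOX_LIMITS))   # flags the high H2O reading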
Biological Testing of Curation Cleanlabs Biological testing of clean labs is important in many commercial and academic settings, and biological testing in aerospace and medical settings, like spacecraft assembly facilities, hospital cleanrooms, and pharmaceutical production labs are discussed here in the context of the best practices for monitoring astromaterials curation facilities. The monitoring methods differ among these labs, but the overall goal, to reduce or eliminate contamination, is always the same. A key difference for curation facilities is the need to identify contaminants. Identification is not always a monitoring plan requirement in other industries. Microorganisms like bacteria and fungi are capable of physically and chemically altering astromaterials (Toporski and Steele 2007). Since the nutrient levels in cleanrooms are purposely kept at very low levels, it is likely that microorganisms will seek out nutrient bearing phases in the astromaterials themselves (e.g. phosphorous rich minerals, organic carbon). Therefore, it is important to identify organisms in cleanrooms and understand how they might affect samples stored within the cleanrooms. The most common monitoring method for any cleanroom is cultivation of viable microorganisms like bacteria and fungi. The implementation of a variety of culture-independent analysis techniques that are employed more sporadically are also discussed. In the aerospace industry, biological testing is most commonly performed to meet planetary protection requirements for individual pieces of hardware and entire missions. The goals are defined by Article IX of the 1967 United Nations Treaty on, "Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Bodies". More detailed policies are outlined by COSPAR (Committee on Space Research) (COSPAR 2011). The sampling and testing methods are implemented by NASA (NASA 2010, 2017) and/or ESA (ECSS 2008). NASA requirements for sampling an aerospace cleanroom to meet planetary protection requirements are described in the "Handbook for the Microbial Examination of Space Hardware" (NASA 2010). Briefly, samples are collected with sterile swabs or wipes made of cotton or preferably a synthetic material like polyester. The samples are exposed to \(80\ ^{\circ}\mbox{C}\) for 15 minutes and any surviving microorganisms are transferred to Petri dishes filled with Tryptic Soy Agar (TSA) and incubated at \(32\ ^{\circ}\mbox{C}\) for 72 hours. Cultured organisms are counted but not necessarily identified. ESA requirements are similar, but require cultivation on Reasoners 2 Agar (R2A) for oligotrophic bacteria, Thioglycolate Agar (TGA) for anaerobic bacteria, and Potato Dextrose Agar (PDA) for fungi in addition to TSA (ECSS 2008). Only one set of R2A plates are heat shocked, while the remaining samples are incubated without being exposed to heat. The ESA standards also include provisions for collecting air samples with an impactor style sampling device. Sampling to meet planetary protection requirements is conducted with the assumption that all of the hardware will be exposed to DHMR (dry heat microbial reduction) or an equivalent process to sterilize the spacecraft. Organisms that survive the heat-shock treatment are counted as a proxy for what might be capable of surviving DHMR. These are fit for purpose assays that are not designed or intended to capture the total diversity of the cleanroom environment. 
While some facilities and/or missions do identify and archive isolates, this is not required or routine in every instance. Similar culture-based assays have been used to monitor Chinese and Russian spacecraft assembly facilities as well (Novikova 2004; Zhang et al. 2018). Cleanrooms used to manufacture pharmaceuticals and package food are also monitored for biological contamination. There are no detailed methods for how to monitor these types of cleanrooms, but cultivation-based techniques are generally the norm. ISO 14698-1 sets out very general principles and methods for biocontamination control in cleanrooms. The document states, "The appropriate sampling method and related procedures shall be selected and performed to reflect the complexity and variety of situations. Sampling shall be carried out using a device and method selected in accordance with the written procedure and in accordance with the instructions provided by the device manufacturer," (ISO14698 2003). The United States Pharmacopeial Convention also relies on cultivation based methods without specifying a particular set of sampling tools, growth conditions, or nutrients (USP 2013). For example, air samples can be collected with a variety of tools, including: slit to agar samplers, centrifugal samplers, gelatin filter samplers, sieve impactors, impingers, and settle plates (USP 2013). However, the USP document does make several important points regarding sampling methods and data analysis: (1) Total particulate counts from air sampling do not correlate to microbial abundance, although this is an area of open research (Raval et al. 2012). (2) Microbial monitoring is semi-quantitative at best. (3) Colony counts (i.e. the number of culturable organisms) are highly variable from sample to sample and from day to day. Recovery rate is a more reliable statistic for defining a microbial baseline. Recovery rate is defined as \(\frac{\text{number of samples with} > 0\ \text{CFU}}{\text{total number of samples collected during a sampling event}}\), where CFU is a colony forming unit. For example, an aseptic ISO 7 cleanroom should have a baseline recovery rate <10% (USP 2013). The USP document also emphasizes the importance of identifying cleanroom isolates and taking action when new isolates appear and/or when an individual sample contains \(> 15\) CFU. In general, sampling of pharmaceutical cleanrooms is focused on cultivating mesophilic organisms from surface swabs, air samples, and cleanroom personnel (Sandle 2012; Whyte 2010). A variety of media types and growth conditions are acceptable as long as they are suitable for enumerating the organisms of concern. Standard efforts to monitor microbial contamination in cleanrooms rely on cultivation based techniques across all industries. Cultivation based techniques are relatively cheap and easy to perform on a regular basis. However, they can be highly variable, and even the most comprehensive culture-based sampling campaign is guaranteed to under-sample the environment (e.g., Hug et al. 2016; Lynch and Neufeld 2015; Rappe and Giovannoni 2003). The community recognizes the need to assess these "unculturable" organisms and has employed a variety of techniques to do so. Next generation DNA sequencing is the most common culture-independent method. 
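As a worked example of the recovery-rate statistic defined above, suppose a hypothetical sampling event of 50 swabs in which 3 grew at least one colony; the sketch below computes the rate and compares it against the <10% baseline cited for an aseptic ISO 7 cleanroom.

# Worked example of the recovery-rate statistic.
def recovery_rate(cfu_counts):
    # Fraction of samples in a sampling event that yielded more than 0 CFU.
    return sum(1 for c in cfu_counts if c > 0) / len(cfu_counts)

counts = [0] * 47 + [2, 1, 5]            # hypothetical: 3 of 50 samples with growth
rate = recovery_rate(counts)
print(f"Recovery rate: {rate:.0%}")      # 6%, below the <10% ISO 7 baseline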
Amplification and sequencing of marker genes (tag or amplicon sequencing) like the ribosomal 16S gene for bacteria and archaea and the ITS region for fungi is one promising method for monitoring unculturable organisms in the cleanroom environment. This measurement has changed as the sequencing platforms have improved. Initial tag sequencing was performed using clone libraries and Sanger type sequencing, which only generates about 1000 base pairs of data for a single organism at a time (Shokralla et al. 2012). The 454 platform generates \(10^{2}\)–\(10 ^{4}\) sequences per sample and has allowed researchers to identify hundreds of OTU's (operational taxonomic unit) or organisms per sample (La Duc et al. 2014; Moissl-Eichinger et al. 2015; Vaishampayan et al. 2013). Using this technology, archaea were found to be persistent, viable (Moissl-Eichinger 2011) members of some cleanroom communities (Moissl-Eichinger 2011; Moissl-Eichinger et al. 2015; Moissl et al. 2008). The Illumina sequencing platforms are the current standard for tag sequencing (Mahnert et al. 2015; Minich et al. 2018; Mora et al. 2016). These sequencers can generate \(10^{5}\)–\(10^{6}\) sequences per sample, allowing researchers to identify even more organisms. A recent tag sequencing survey of the SAF (spacecraft assembly facility) at JPL identified \(>16000\) OTU's. Tag sequencing is a powerful monitoring tool, but it does have several important biases. PCR (polymerase chain reaction) based amplification of DNA is required for most low biomass samples. This amplification step does not amplify DNA from every organism equally. For example Moissl-Eichinger et al. (2015) were able to cultivate organisms that they did not detect using, "universal" PCR primers for amplification and subsequent tag sequencing. Secondly, sequencing of any type cannot distinguish DNA from viable organisms from relict environmental DNA inside dead organisms. Several researchers have started treating their samples with compounds like PMA (propidium monoazide) to destroy DNA from non-viable organisms prior to sequencing (e.g., Mahnert et al. 2015; Moissl-Eichinger et al. 2015; Mora et al. 2016; Weinmaier et al. 2015; Zhang et al. 2018). Due to variations in primer choice, sequence length, error rate, and total number of sequences produced, it is very difficult to quantitatively compare data generated by different sequencing platforms (Tremblay et al. 2015). Care should be taken to keep these variables as consistent as possible during monitoring. When changes are made, they should be directly compared to previous methods. Rather than amplifying and sequencing specific marker genes, it is also possible to sequence all of the DNA in a sample (with or without amplification) using the same types of DNA sequencers discussed above. This sequencing technique is commonly referred to as shotgun metagenomics (e.g., Bashir et al. 2016; Minich et al. 2018; Moissl-Eichinger et al. 2015; Weinmaier et al. 2015). Shotgun metagenomics provides more information about the function of abundant organisms in the environment but often fails to detect rare members of the community (Tessler et al. 2017). Additionally, this technique generates large amounts of data that can be very challenging and time-consuming to interpret. At present, metagenomics is a powerful research tool, but it is probably not yet suitable for routine monitoring. New DNA sequencers, like the MinIon platform (Reuter et al. 
2015), that generate longer reads may eventually be able to generate metagenomic data that are easier to assemble and interpret, but they are still being developed and improved. DNA sequencing can be used to inform the design of more rapid assays for biological monitoring. qPCR (quantitative polymerase chain reaction) can be used to assess the number of copies of genes in a sample that directly correlates to microbial abundance (Cooper et al. 2011; Hubad and Lapanje 2013; Kwan et al. 2011; Mahnert et al. 2015; Moissl-Eichinger 2011; Moissl-Eichinger et al. 2015; Schwendner et al. 2013; Vaishampayan et al. 2013; Zhang et al. 2018). When interpreting qPCR data, care must be taken to account for organisms that have multiple copies of the 16S or other marker gene (Větrovský and Baldrian 2013). DNA microarrays like the Phylochip have also been investigated as potential monitoring tools (Cooper et al. 2011; Jimenez 2011; La Duc et al. 2009, 2014; Probst et al. 2010; Vaishampayan et al. 2013). Both of these techniques show promise as monitoring solutions, but they probably require initial investigation with culturing and DNA sequencing in order to ensure that probes and primers are designed to capture the communities present inside the cleanroom in question. Techniques that do not involve sequencing DNA are also being tested in cleanroom settings. All living organisms on Earth produce a compound called ATP (adenosine triphosphate) for energy storage. Measuring the concentration of ATP in a cleanroom sample provides information about the total number of viable cells (Benardini and Venkateswaran 2016; La Duc et al. 2007; Mahnert et al. 2015; Venkateswaran et al. 2003), but it is not useful for identifying what organisms are present. MALDI-TOF (Matrix assisted laser desorption time of flight) mass spectrometry is now commonly used in the medical field to identify organisms, and it is being applied in aerospace cleanrooms as well (Andrade et al. 2018; Moissl-Eichinger et al. 2015). However, this technique is still dependent on culturing organisms. Fluorescence based monitoring systems can detect airborne cells but cannot identify them (Hallworth 2012). Biological testing of clean labs suffers from a lack of repetition. Outside the NASA standard assay, very few measurements are routinely replicated between labs. In some respects, this is good and appropriate. Monitoring methods should be modified to suit the environment and the questions being asked. The microbial profile of cleanrooms will be different in different environments (e.g., La Duc et al. 2009). For example, cold curation facilities should explicitly test for the presence of psychrophilic organisms (Sandle and Skinner 2013). It would be a waste of resources to look for psychrophiles in labs maintained at room temperature. However, variations in sample collection, DNA extraction, DNA sequencing, and data processing methods make inter-lab comparisons very difficult. Testing new methods and techniques is an important area of research, but more effort should be made to relate these new measurements to previously generated data. Curation labs should design a monitoring plan that is capable of quantifying and identifying the microbes present therein. Unfortunately, there is no single measurement or technique capable of thoroughly describing a microbial community. Each method discussed above has its strengths and weaknesses. Therefore, a selection of culture-based and culture-independent techniques should be used to monitor cleanroom ecology. 
Samples should be collected from the air and from surfaces regularly, and as frequently as daily when critical operations are being conducted. Special care should be taken to avoid organic or inorganic contamination during sampling. For example, agar-filled contact plates used in the pharmaceutical industry are inappropriate for curation labs since they would introduce bioavailable organic compounds and trace metals into the lab. Most importantly, sample collection and analysis methods should be as consistent as possible in order to generate a baseline dataset that can be used as a basis of comparison for new techniques. Regular and consistent sampling is the most important feature of any environmental monitoring program.

Development of New Storage, Processing, and Sample Handling Capabilities

As technological advancements and new ideas expand the variety and scope of scientific questions that can be asked with astromaterials samples, so expands the need for better storage, processing, and sample handling capabilities of the curation laboratories that house and process astromaterials samples. Here we summarize a number of important advancements and areas of growth in sample storage, processing, and handling techniques that will be important in the coming decades for maximizing science returns on astromaterials samples.

Cold Curation of Astromaterials and Associated Gases, Biological Samples, and Hardware

The ever-expanding plans for the return of samples from volatile-rich solar system targets and/or targets of astrobiological significance necessitate the development of curation at temperatures below that of typical curation facilities (\(20\ ^{\circ}\mbox{C}\)). Temperature requirements depend primarily on which volatiles are expected within the returned sample, which in turn relate to the conditions under which the material formed and has since been preserved. The term "cryogenic" is defined as relating to temperatures below \(-183\ ^{\circ}\mbox{C}\); the normal boiling points of the so-called permanent gases (e.g., helium, neon, nitrogen, and oxygen) and of air lie at or below this temperature. More generally, "cryogenic" refers to temperatures below approximately \(-150\ ^{\circ}\mbox{C}\) (https://www.nist.gov/mml/acmd/cryogenics/aboutcryogenics). The term "high temperature cryogenic" is used to refer to temperatures from the boiling point of liquid nitrogen, \(-196\ ^{\circ}\mbox{C}\), up to \(-50\ ^{\circ}\mbox{C}\), the generally defined limit of cryogenics (e.g., Zohuri 2017). The curatorial temperatures for terrestrial materials, including tissue samples and ice cores, include: \({\leq} {-}20\ ^{\circ}\mbox{C}\) (the temperature of typical walk-in freezers in which physical processing and documentation take place); \({\leq} {-}40\ ^{\circ}\mbox{C}\) for archival storage (e.g., of ice cores); and \(-80\) to \(-196\ ^{\circ}\mbox{C}\) (liquid nitrogen) for biological samples (e.g., Anchordoquy and Molina 2007; Rissanen et al. 2010). Thus, with the exception of biological tissue storage, the field of Earth materials curation has not yet entered the realm of cryogenics.

Past and Present Practices in Cold Curation of Astromaterials

The expected range of temperatures required to preserve solar system materials spans from those needed for (water) ice cores to cryogenic. Cold curation and sample handling of astromaterials has been done to a limited extent at NASA-JSC over several decades.
Several Apollo 17 samples were initially processed under GN2 in a processing cabinet at room temperature for about a month before being transported to cold storage (\(-20\ ^{\circ}\mbox{C}\)), where they have remained. Furthermore, the US Antarctic meteorite collection utilizes cold storage of new Antarctic meteorites, and initially used cold stages in a nitrogen glovebox for cold sample handling. According to Annexstad and Cassidy (1980), "The specimens are transferred from a small staging freezer to the processing cabinet. A specially constructed stage, cooled by liquid nitrogen, is used to keep the sample frozen while an initial cold chip is removed from the meteorite. This chip is immediately returned to freezer storage for future experiments when a frozen piece may be required. The parent meteorite is then allowed to warm to ambient temperature naturally in the cabinet's dry GN2 environment." In the first few years of Antarctic meteorite handling at JSC, the staff gained experience with storing and handling samples frozen, using a cold processing plate in a cabinet and using a cold storage room. Although some hardware was assembled to do this, it became clear after detailed tests that this was not an effective way to handle samples because of the difficulty of keeping samples cold while still allowing dexterity of the sample processor, the length of time required to process individual samples, and the overall expense. The cold processing approach was abandoned at JSC in 1979, after review and discussion by the Meteorite Working Group (MWG) (Righter et al. 2014).

More recently, insights into the benefits of curation and processing under cold conditions have been gained from the collection, curation, and study of the Tagish Lake meteorite (Herd et al. 2016 and references therein). Tagish Lake is a unique carbonaceous chondrite that fell on January 18, 2000 onto the frozen surface of the eponymous lake in northern British Columbia, Canada. The meteorite was collected about a week after the fall, and collection was done without direct hand contact; more significantly, the meteorite specimens were kept below \(0\ ^{\circ}\mbox{C}\) after collection and during subsequent transport to curation facilities (Herd et al. 2016). The cold ambient temperatures at the location of the fall, coupled with the care with which the collection and subsequent curation were carried out, place Tagish Lake among the most pristine meteorites ever collected (Herd et al. 2016). The meteorite is a type 2 carbonaceous chondrite with affinities to CM and CI meteorites (Blinova et al. 2014; Zolensky et al. 2002), and contains among the highest concentrations of organic matter measured in meteorites (Alexander et al. 2014; Grady et al. 2002; Herd et al. 2011). The pristine nature of the meteorite, coupled with the curation methods used to preserve it, has yielded new insights into the formation of nanoscale organic globules in the coldest regions of the protoplanetary disk (e.g., Nakamura-Messenger et al. 2006) as well as the role of asteroid parent-body aqueous alteration in the modification and synthesis of organic molecules (Herd et al. 2011; Hilts et al. 2014). The majority of the Tagish Lake meteorite specimens are stored at \(-30\ ^{\circ}\mbox{C}\) and processed within the Subzero Curation Facility for Astromaterials at the University of Alberta; this facility houses an Ar glovebox within a walk-in freezer maintained at temperatures of \(-10\) to \(-15\ ^{\circ}\mbox{C}\) (Herd et al. 2016).
While there are no indications that the Tagish Lake meteorite contains water ice or other such volatiles, these conditions of storage and handling are justified by the discovery of especially volatile and/or reactive organic species (e.g., formic acid, naphthalene, and styrene; Hilts et al. 2014). Challenges and limitations of the Subzero Curation Facility include mitigation of glovebox leaks, user comfort, and the extreme dryness of the Ar atmosphere, which would result in the sublimation of water or other ices from the samples (Herd et al. 2016). However, the facility achieves the goal of enabling documentation and processing of pristine astromaterials under low temperature in an inert atmosphere. The low-temperature curation of the Tagish Lake specimens reduces reaction rates, preserves intrinsic (volatile) organic compounds, and discourages microbial activity (Herd et al. 2016); these are requirements that are desirable for returned samples from organic-rich asteroids, cometary nuclei, Mars, or other volatile-rich sample return targets (lunar poles, icy moons, etc.), as discussed below.

Volatile-Rich Samples from the Lunar Poles

The lunar poles are high-priority targets for sample return due to the possibility of significant quantities of water ice and other volatiles in permanently shadowed regions (PSRs). Remote sensing data indicate that volatiles comprise up to several weight percent of materials in PSRs; the volatiles detected in the PSR of the crater Cabeus included H2O, CO2, CO, H2S, CH4, OH, SO2, NH3, C2H4, and CH3OH (Colaprete et al. 2010; Gladstone et al. 2010). This mix of compounds presents a complex curatorial challenge, even more so in the presence of local regolith/silicates (largely anorthosite or basalt). The volatiles detected by LCROSS have a range of condensation temperatures, and a subset are highly reactive in the presence of silicate minerals. If the solid and volatile components of a lunar PSR sample are stored together, a mixed-phase, highly reactive sample will likely result. The preservation of a lunar polar sample would therefore be maximized by separating the solid and volatile components and storing them in that configuration for the long term.

The presence of numerous reactive species presents several additional challenges. First, the corrosive nature of H2S limits the materials to which the sample can be exposed without alteration. Materials will therefore need to be selected that accommodate the curatorial requirements for isolation (materials should not significantly contract at cold temperatures), durability during sample processing/preliminary examination, and control of particulate contamination. Second, volatile-rich samples often contain gases that are hazardous to humans, even at low concentrations (e.g., CO, H2S, SO2, NH3). This additional risk, on top of the existing particulate exposure risk from solid samples, may require the use of respirators or special masks during preliminary examination and curatorial operations. The need to minimize leakage from curatorial hardware (gas containers, analytical equipment, gloveboxes, etc.) will be significantly higher for volatile-rich samples; because such hardware will operate at cold temperatures, proper materials selection from the component to the system level will be a top priority.
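One practical consequence of the range of condensation temperatures noted above is that the storage temperature for a separated volatile fraction is set by the most demanding species that must be retained. The sketch below only illustrates that selection logic: the species list mirrors the LCROSS detections cited above, but the retention temperatures are rough placeholder values rather than vetted vapor-pressure data, and the safety margin is arbitrary.

```python
# Sketch: choose a storage temperature for a separated lunar-PSR volatile fraction
# from the most demanding species to be retained. Temperatures are placeholders;
# real limits depend on vapor-pressure data at the actual storage pressure.

approx_retention_temp_c = {
    "H2O": -100, "CH3OH": -120, "SO2": -130, "CO2": -140, "NH3": -150,
    "H2S": -160, "C2H4": -180, "CH4": -200, "CO": -220,
}

def required_storage_temp(species_to_keep, margin_c=10.0):
    """Warmest temperature (deg C) that still retains every listed species,
    less an arbitrary safety margin."""
    return min(approx_retention_temp_c[s] for s in species_to_keep) - margin_c

print(required_storage_temp(["H2O", "CO2", "SO2"]))          # water-dominated ices
print(required_storage_temp(approx_retention_temp_c.keys())) # full LCROSS-like suite
```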
Cometary Nucleus Samples The preservation of a cometary nucleus sample lies at the extreme end of cold-curation storage requirements because the sample would contain hypervolatiles including noble gases, nitrogen, and oxygen (Bieler et al. 2015; Le Roy et al. 2015), although the retention of these gases would only likely be achieved if they are trapped within solid ices of primarily H2O, CO, and CO2. Insights from the ROsetta Spectrometer for Ion and Neutral Analysis (ROSINA) instrument, which measured volatiles in the coma of comet 67P/Churyumov-Gerasimenko demonstrate that, while dominated by water, the nucleus of 67P includes a wide range of volatile compounds, including molecular oxygen, CO, CO2, HCN, H2S, CH4, and many others (Bieler et al. 2015; Le Roy et al. 2015). Curation of these ices, which would almost certainly be intimately mixed with non-volatile, fine-grained silicate, oxide, sulfide and more refractory organic materials, would require significant technological development for cryogenic curation—assuming that the sample could be collected and returned to Earth under cryogenic conditions in the first place. Various options for the return of cometary nucleus volatiles have been studied, including cryogenic sample return, for which significant technical challenges exist (Veverka 2010b). Allowing volatile components to be released by warming a comet nucleus sample and capturing them in a separate container removes the need for cryogenic handling (Veverka 2010a), which was the approach proposed for the CAESAR mission concept to comet 67P. No truly cryogenic sample return missions are planned at the time of this writing. Curation facilities may be required to curate biological samples as part of a contamination knowledge collection from the spacecraft build and sampling of flight hardware. This requirement is currently in place for the Mars2020 mission, which may be the first leg in a Mars Sample Return Campaign. Although there are no requirements that the martian samples be kept cold, biological sampling during the spacecraft build and of the flight hardware includes microbiological samples, including swab samples, liquids, isolated pure cultures of bacteria and fungi, and DNA samples. The requirements for long term preservation of these biological samples varies with sample type and intended use. We will discuss two broad sample types: (1) Samples preserved for later growth and (2) samples intended for molecular analysis like DNA sequencing. The guidelines for preserving bacteria and fungi for later cultivation are well established (CABRI 1998). Bacteria should be placed in a protective solution of 15–50% glycerol by volume and frozen at \(-80\ ^{\circ}\mbox{C}\). Commercially available products like cryobeads should be used to improve long-term viability. If viability needs to be maintained for \(> 5\) years, samples should be frozen at \(-130\ ^{\circ}\mbox{C}\). Some species of bacteria and fungi can be freeze dried with liquid nitrogen and stored at \(4\mbox{--}8\ ^{\circ}\mbox{C}\). It is important to test the survivability of each strain prior to committing to a preservation method. Preservation of swabs, liquids, witness plates, or extracted DNA for later analysis is less straightforward. As a general rule, colder is better, but there is little consensus on what temperature is best. There is some evidence that storing samples at too low a temperature can cause more damage than it prevents (Anchordoquy and Molina 2007; Vaught and Henderson 2011). 
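The cultivation-oriented guidelines summarized above map naturally onto a small decision helper. The sketch below simply encodes those rules of thumb (glycerol plus cryobeads at \(-80\ ^{\circ}\mbox{C}\), \(-130\ ^{\circ}\mbox{C}\) for viability beyond five years, and cold archival storage for molecular samples); it is a bookkeeping aid under those stated assumptions, not a validated preservation protocol, and strain-specific survivability testing would still be required.

```python
def preservation_plan(sample_type: str, years_viable: float = 0.0) -> str:
    """Suggest a storage approach following the rules of thumb quoted above.

    Sketch only: every strain or sample type should still be tested for
    survivability/recoverability before a method is adopted.
    """
    if sample_type == "culture":
        if years_viable > 5:
            return "15-50% (v/v) glycerol with cryobeads, store at -130 C"
        return "15-50% (v/v) glycerol with cryobeads, store at -80 C"
    if sample_type in ("swab", "liquid", "witness_plate", "dna_extract"):
        # Less consensus exists for molecular samples; keep unprocessed splits
        # alongside any extracted DNA so future workers can choose what to analyze.
        return "freeze at <= -80 C and archive unprocessed splits with DNA extracts"
    raise ValueError(f"unrecognized sample type: {sample_type!r}")

print(preservation_plan("culture", years_viable=10))
print(preservation_plan("swab"))
```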
Rapid changes in DNA sequencing technology make it very difficult to predict how samples will be handled in the future (Reuter et al. 2015). DNA extraction techniques are also evolving and can have significant effects on sample quality (Dauphin et al. 2009; Mitchell and Takacs-Vesbach 2008; Rose et al. 2011; Zielińska et al. 2017). Pending additional research, the best strategy is to store unprocessed samples alongside extracted DNA so that future researchers have options for what to analyze.

Future sample return missions from icy moons will incorporate both the biological sensitivity of a Mars Sample Return (MSR) campaign and the temperature sensitivity of lunar or cometary samples (Fig. 1). Therefore, even if MSR does not have a low-temperature storage requirement, it is inevitable that biological containment and cold curation will eventually be needed concurrently. The challenges of operating a bio-safety level 4 (BSL-4) facility at cold temperatures are unique to astromaterials curation and will need to be addressed in the coming years. Many materials suitable for biological containment (e.g., many plastics) become brittle at temperatures at or below the freezing point of water. The additional presence of salts (chlorides, sulfates, etc.) may pose challenges to the selection/durability of metal components. The overlapping requirements for sterility, particulate cleanliness, temperature control, leak prevention/sample isolation, gas safety, and curator comfort will need to be met in the coming years as exploration efforts at Europa, Enceladus, and other icy moons intensify.

Fig. 1 The overlapping challenges of cold curation and biological containment present unique challenges for future astromaterials curation efforts

Curation of Organics and Organic-Rich Materials at Room Temperature

Experience with the curation of the organic-rich Tagish Lake meteorite (Sect. 3.1) has provided ample evidence for the value of cold curation in the preservation of organics and organic-rich materials; namely, the retention of volatile organics, the mitigation of volatile organic contaminants, and the suppression of metabolism by any microorganisms in the curation facility (Herd et al. 2016). However, cold curation is not a requirement for the storage of organic-rich materials. For example, curation planning for OSIRIS-REx turned instead towards hermetically sealed storage of samples to preserve organics (Dworkin et al. 2018). This approach had some precedent with the Apollo mission samples (Fig. 2), avoids the potential contamination and the time- and dexterity-intensive processing issues associated with cold curation, and is cost effective because it utilizes known, commercially available, and tested hardware and approaches.

Fig. 2 Sketch of a typical assembled ConFlat flange (left). This design is very similar to the specialty "bolt top containers" used for Apollo program sample storage from 1969 to the present (right), except that these flanges are now commercially available in various sizes and materials, as explained above

When handling organic-rich or organic-sensitive materials, the use of plastics should be extremely limited. PTFE or Teflon is acceptable in some situations, but glass or metal is preferable. Prior to use, tools and sample containers should be combusted at \(500\ ^{\circ}\mbox{C}\) to remove organic contaminants. Long-term storage of organically sensitive samples should use well-characterized glass baked at \(\geq 500\ ^{\circ}\mbox{C}\) wherever possible (e.g., Grosjean and Logan 2007; Peters et al.
2005; Sherman et al. 2007). Furthermore, frequent microbial monitoring of labs where organic-rich samples are stored and processed is critically important. Moreover, metagenomic studies of any microbes recovered from curation labs that house and process organic-rich samples will be important, particularly for microbes that can metabolize under anaerobic conditions. The primary goal of these metagenomic studies would be to characterize the metabolic function of these anaerobic microbes in order to understand how they might alter the samples if inadvertently introduced. This is particularly important for organic-rich sample collections stored at room temperature.

Future Restricted Earth Return Missions

In the 50 years since the Apollo 11 launch, advancements in knowledge and technology allow for not only unprecedented scientific investigations of extraterrestrial samples but also a greater understanding of the potential hazards of sample exposure or release into the environment (e.g. extraterrestrial life). However, in the case of a biological health hazard, more precautions are required, not only to protect the samples from Earth, but also to protect Earth from the samples. Under the UN Space Treaty of 1967, the Committee on Space Research (COSPAR) maintains a planetary protection policy at the international level for all spacefaring nations. The policy provides an "international standard on procedures to avoid organic-constituent and biological contamination in space exploration" (COSPAR Planetary Protection Policy March 2017). The policy also promotes the prevention of "adverse changes in the environment of the Earth resulting from the introduction of extraterrestrial matter" as stated in the UN Space Treaty. For the United States, the NASA Planetary Protection Office in the Office of Safety and Mission Assurance provides the policies and requirements for all NASA exploration missions regarding forward and backward control of biological contamination. NASA Policy Directive (NPD) 8020.7G, Biological Contamination Control for Outbound and Inbound Planetary Spacecraft, complies with the UN Space Treaty and COSPAR planetary protection policy, stating that "the Earth must be protected from the potential hazard posed by extraterrestrial matter carried by a spacecraft returning from another planet or other extraterrestrial sources". NASA Procedural Requirements (NPR) 8020.12D, Planetary Protection Provisions for Robotic Extraterrestrial Missions, outlines requirements for meeting NPD 8020.7G and specifies planning documents and reviews for Category V Restricted Earth Returns. The Planetary Protection Office classifies any "samples from solar system bodies that may harbor indigenous life" as Category V: Restricted Earth Return. Although there are currently three bodies with this designation (Mars, Europa, and Enceladus), this number can change in either direction as more information about any particular planetary body is gained. For example, during the first Apollo sample return missions (Apollo 11, 12, and 14), the Moon was considered Restricted. Consequently, the Apollo 11, 12, and 14 samples and astronauts were quarantined upon arrival while health assessments and biohazard tests were performed. However, by Apollo 15, which launched just over two years after Apollo 11, the Moon was reclassified as Unrestricted, and the final three Apollo missions (Apollo 15, 16, and 17) proceeded without the same level of biohazard Planetary Protection precautions.
The scientific community has identified Mars Sample Return (MSR) as a high-priority sample return activity for many years, and support for such an endeavor has waxed and waned over the last few decades. Current efforts relating to MSR are focused on a multi-mission campaign, the first of which is the Mars 2020 rover mission to Jezero Crater. At the time of writing, no space agency has fully committed to returning the samples that will be collected by Mars 2020, but NASA and ESA are discussing the possibility of forming a partnership to complete the campaign, and decisions are anticipated to be made in the year 2020. Due to both Planetary Protection and Science requirements, the Mars 2020 rover mission has the most stringent inorganic, organic, and biological contamination control requirements of any sample return mission in history. Strategies for satisfying these and other requirements related to MSR, and to Restricted Earth Return in general, are described below.

Facility Preparation

Samples returned from any planetary body designated as Restricted must be contained within a Biosafety Level 4 (BSL-4) facility until it can be demonstrated that either (1) the samples do not pose a threat to life on Earth or (2) the samples have been adequately sterilized for release (Rummel et al. 2002, NASA technical publication 211842). The requirements and processes associated with biohazard testing and/or sterilization are developed specifically for each mission and each set of samples. International space treaties with the United States, COSPAR planetary protection policies, and NASA planetary protection policy directives and requirements do not impose any specific design requirements on a biocontainment architecture or BSL-4 facility. The policies simply state that the Earth must be protected from the potential hazard posed by extraterrestrial matter and that microbial containment is required for Category V (sample return) Restricted Earth Returns. The U.S. Dept. of Health and Human Services traditionally has jurisdiction over design and operating requirements for a BSL-4 facility in the United States. "Biosafety in Microbiological and Biomedical Laboratories", 5th Edition (Dec. 2009), authored by the U.S. Department of Health and Human Services: Public Health Service, Centers for Disease Control and Prevention, and the National Institutes of Health; HHS Publication No. (CDC) 21-1112 (hereafter BMBL, 2009), houses the primary recommendations, standards, and design requirements for all BSL labs. Under this regulation, any related agents with an unknown risk of transmission are classified under BSL-4 containment. Presumably, an extraterrestrial or unknown pathogen would require, at minimum, BSL-4 containment. At this time, we cannot predict what other federal or international agencies may wish to impose additional guidelines and requirements and/or request jurisdiction over a NASA BSL-4 sample return lab. For example, the National Institutes of Health (NIH) imposed additional design requirements at the Galveston National Lab beyond the BMBL requirements. The World Health Organization also has guidelines and requirements for BSL-4 laboratories, and the Dept. of Agriculture has claimed some jurisdiction over extraterrestrial soils. For the Apollo Program in January 1966, the Interagency Committee on Back Contamination (ICBC) was established to include the CDC, with Dr.
David Sencer of the CDC as chairman, as well as the Department of Agriculture, the Department of the Interior, the Department of Health, Education, and Welfare, the National Academy of Sciences, and NASA; the committee imposed strict requirements on the construction of the Lunar Receiving Laboratory (LRL), JSC Bldg. 37. Therefore, historically, other agencies have been involved in the construction and operation of such a BSL-4 type lab.

One of the major challenges in designing a facility for Restricted Earth Return missions is the integration of the Contamination Control (CC) requirements necessary to protect the samples from terrestrial contamination and the Planetary Protection (PP) requirements necessary to protect the Earth and its inhabitants (all life: from humans to animals to plants, etc.) from a possible extraterrestrial pathogen (e.g., microbes, viruses, or prions). While walls can act as physical barriers for protection, developing the proper pressure differentials inside and outside the laboratory is vital (Fig. 3). Unlike the curation of traditional unrestricted samples, which utilizes positive pressure gradients to protect the samples from contamination, BSL-4 facilities rely on negative pressure gradients to protect the scientists and general public from the samples. Although a number of possible iterations have been demonstrated in the Draft Test Protocol (Rummel et al. 2002; Fig. 3), the presumed baseline requirement is that the samples must be contained within a BSL-4 facility (BMBL 2009). In order to best protect the samples and Earth, redundancies are built into the design schematic (Fig. 3). Specifically, not only will the entire cleanroom laboratory be constructed within a BSL-4 facility, but the use of a Biosafety Cabinet (BSC) Class III double-walled isolator for sample processing within the cleanroom will add an additional level of protection with the corresponding differential pressure scheme. For current BSL facilities in the U.S., the Class III BSC glovebox gastight (leak rate) criterion is \(<1\times 10^{-5}~\mbox{cc}/\mbox{s}\) with 100% He tracer gas under 3 inH2O pressure in the cabinet (Stuart et al. 2012). Depending on mission science requirements, specialized double-walled glovebox or containment seals could be required for maintaining nitrogen or other inert gas environment purity under negative pressure. Non-glove storage isolators can achieve a He leak rate of \(<1\times 10^{-7}~\mbox{cc}/\mbox{s}\). However, achieving a better leak rate on double-walled isolated containment may require additional engineering development and pose additional challenges. While there have been some studies exploring how these requirements could be implemented (Beaty et al. 2009), these studies need to be updated to reflect some significant shifts in possible facility usage (e.g. no animal studies, long-term use, multi-mission use).

Fig. 3 Comparison of differential pressure gradients used for containment of astromaterials samples from bodies designated as restricted Earth return (adapted from Rummel et al. 2002: 6, Fig. 1): (1) The top schematic represents a standard sample containment design for unrestricted sample return missions. In this configuration, the highest pressure is situated inside a positive pressure glovebox or containment isolation chamber and pushes out to a lower-pressurized cleanroom. The cleanroom is also under positive pressure relative to the outside of the lab.
This positive pressure cascade is designed to mitigate the infiltration of outside or laboratory contamination to the astromaterials samples. (2) The middle schematic is a standard BSL-4 containment design for working with hazardous biological pathogens. In this particular configuration, the glovebox or containment isolation chamber that houses the biohazard is under negative pressure relative to the laboratory space. In addition, the laboratory is also under negative pressure relative to the outside of the lab. This negative pressure cascade is engineered to protect the outside environment from a release of any biohazard. (3) The bottom schematic is the current design concept for restricted Earth return sample containment, which combines both of these technologies. The containment isolation chamber that houses the samples is designed with a double wall, and the interstitial space is filled with a high-purity gas at a higher pressure relative to the contained isolated samples and the cleanroom. The pressure differential between the cleanroom and the containment is still negative, to maintain BSL-4 containment standards, but any leakage would be the high-purity gas, which pushes into the containment and out to the laboratory cleanroom. In addition, the cleanroom laboratory would be a positive pressure cascade with a negative differential pressure plenum barrier to maintain BSL-4 containment relative to the outside environment of the facility.

Functional Laboratory Design

The verification of extinct or extant life within a sample may require the examination of organic compounds within the samples. As such, not only is it vital to ensure that no terrestrial biological contamination occurs during sample handling and storage, but the amount of organic contamination must also be minimized. One of the main ways to do this is through the selection of manufacturing equipment and laboratory space built from proper materials that have little to no potential to outgas or shed particles. This will require the use of mainly 300 series stainless steel and Teflon in areas with intimate contact with the samples. For this reason (as well as offering additional differential pressure gradients), double-walled isolator cabinets are the likely choice for the handling of these materials, since the suits utilized in BSL-4 facilities and the glovebox gloves could contaminate the samples with organic matter and make life detection more difficult (Vrublevskis et al. 2018a; Holt et al. 2019). Although the utilization of a double-walled isolator (DWI) helps to mitigate the contamination risk, it requires a significant advancement in remote or robotically assisted manipulation, since manual manipulation via a glove port could compromise the organic CC requirements due to material outgassing. Although there is work being done in Advanced Curation related to small particle handling, other groups are investigating haptic-feedback remote handling, specifically for Mars Sample Return (Vrublevskis et al. 2018b). In addition to robotic sample manipulators, any analytical equipment should be developed to allow scientists to manipulate samples and run analyses remotely.

A further complication of returning samples from a restricted planetary body concerns the unknown long-term space requirements of the collection. Although the Apollo samples were eventually deemed safe and released (two years after the first samples were returned), this will not necessarily be the case for future restricted Earth return missions.
If a potential or actual health hazard is found, or if there are too many concerns about unknown unknowns, samples may never be released from containment. Therefore, multiple facilities and sample use strategies would need to be developed to conduct science in containment. One way to approach this is to construct a modular BSL-4 facility with walls that can be shifted to accommodate the addition of new analytical instrument suites and other long-term curation/scientific needs.

Integration of Cleaning and Sterilization Techniques

The safety of the samples, the technicians, and the environment at large will require not only a well-designed facility but also the integration of cleanliness and sterilization protocols. While there are standard sterilization techniques (e.g. heat, peroxide, formaldehyde) and cleaning procedures (e.g. IPA, UPW), there is not one standard procedure that can do both simultaneously. In the case of MSR, where a major part of "life detection" will rely on DNA extraction and not viability, any unviable biological matter remaining will compromise the samples and studies. Due to these considerations, strategies for integrating these procedures are under development. A strategy similar to that taken for flight hardware will be employed: clean the materials first and then sterilize them without generating contamination. Although the Mars 2020 mission has integrated the use of vapor phase hydrogen peroxide (VHP), heat sterilization is being used on the sample-intimate hardware. However, given the specifications of isolator cabinets (e.g. size and mechanics) and the systematic sterilizations needed to avoid cross-contamination and ensure safety, heat sterilization is not a viable option. The need to integrate the isolator cabinets into the facility's infrastructure will mean that they will have to be cleaned and sterilized in place. For this we can draw upon best practices used in curation glovebox laboratories and BSL-4 cabinet laboratories. For initial and final cleaning/sterilization, a standard UPW/IPA cleaning procedure would likely be utilized (see Sect. 3.7), followed by sterilization utilizing an ultra-pure hydrogen peroxide solution. Given the harshness of the high-concentration peroxide required (35%), the amount of residual moisture left after sterilization is complete, and its limitations in sterilizing instrumentation due to unexposed surfaces, a new technique is being considered. Ionized hydrogen peroxide requires only an 8% hydrogen peroxide solution, more easily permeates instrumentation, and does not leave a liquid residue (Webb 2011; Grimaldo 2017). Although more research is required to confirm sufficient sterilization without generating long-term corrosion or systemic contamination, the outlook is promising.

Collection of Baselines for Science and Planetary Protection

The concurrent requirements for sterility, particulate and organic volatile cleanliness, temperature control, leak prevention/sample isolation, and gas safety will need to be met in the coming years as exploration efforts at Mars, Europa, and Enceladus come to fruition. The classification of a mission as Category V Restricted Earth Return not only adds more CC and PP constraints (https://planetaryprotection.nasa.gov/categories), it also broadens the scope of the required CK (Fig. 4) to include biological witness materials. Not only does this require more rigorous sets of samples; unlike other collections, which only require storage in an inert, ultra-pure gaseous environment (e.g.
nitrogen), biological CK samples will also need to be kept frozen (see Sect. 3.1).

Fig. 4 Cartoon illustrating the categories of samples needed for testing and verification of spaceflight missions. Each type of sample serves a different purpose, and hence the requirements for each sample collection related to these categories must be defined

While all scientific investigations of returned samples are highly sensitive to terrestrial contamination, contamination is especially detrimental where studies of extant or extinct extraterrestrial life are concerned. The proper collection, storage, and cataloging of Contamination Knowledge (CK) associated with the production and assembly of the spacecraft, rover, lander, orbiter, and/or sampling system will be vital to these investigations. Without a well-constructed and curated CK collection, the baseline for contamination within the returned samples cannot be established. Therefore, after mission inception and design, the development of the CK collection as part of a mission's curation plan (CP) should occur in conjunction with the mission's CC and PP plans. The CP, CC, and PP plans, and the implementation of these plans during ATLO (assembly, test, and launch operations) and Earth receiving operations, are paramount to the ultimate value of the returned samples. Technological advancements to instrumentation are continually progressing, with greater precision and accuracy for sample measurements, especially in the field of microbiology. An array of CK samples must be made available to scientists once restricted samples are returned to Earth, and those samples should be preserved in such a manner that they can be analyzed by instrumentation that did not exist at the time of their initial storage. Collecting and curating unanalyzed/unaltered samples will minimize the possibility that current analysis and extraction techniques destroy or alter the samples or otherwise inhibit yet-to-be-developed measurements. Some of the types of biological CK samples the NASA Curation Office requires for restricted Earth return missions include: (1) unanalyzed/unaltered dry swabs and wipes in sterile containers stored at \({\leq} {-}80\ ^{\circ}\mbox{C}\); (2) all recirculation filters from the cleanrooms used for spacecraft and spacecraft hardware assembly, and all filters from the laminar flow benches used to assemble sample-intimate hardware, packaged in sterile Teflon bags and frozen at \(-80\ ^{\circ}\mbox{C}\); and (3) witness plates collecting airborne contamination within the assembly cleanrooms, stored at \({\leq} {-}80\ ^{\circ}\mbox{C}\).

Sample Processing Cabinets Under Vacuum

Historically, vacuum processing of samples was employed for primary processing of Apollo lunar materials. First envisioned in 1964, the three-story "High Vacuum Complex" integrated into the JSC Bldg. 37 Lunar Receiving Laboratory between 1967 and 1968 was a series of connected glovebox isolation chambers operating in a \(10^{-6}\) torr vacuum environment to decontaminate, process, and store samples (Calaway et al. 2014; White 1976). Although vacuum processing takes a very direct approach to minimizing sample contamination by avoiding sample contact with gases, the process is inherently difficult. Maintaining vacuum in large processing cabinets requires constant pumping and the use of cold traps to remove unwanted pumping oils and other contaminants; the vacuum renders gloves stiff, with attendant processor fatigue, and any leaks in the system tend to introduce relatively humid ambient air.
There is also the danger of rapid pressure loss through mechanical failure, which would introduce significant contamination and pose a physical risk to processors. Unfortunately, the F-201 processing glovebox chamber was prone to leaks and glove failures, as well as difficulties in using vacuum hardware with an increasingly large volume of lunar samples, which ultimately drove the high vacuum complex to be used only for Apollo 11 and 12. From Apollo 14 onward, the high vacuum complex was replaced by a series of positive pressure gaseous nitrogen gloveboxes called the Sterile Nitrogen Atmospheric Processing (SNAP) Line and the Nonsterile Nitrogen Processing Line (NNPL) (Reynolds et al. 1973; Simoneit et al. 1973).

The JAXA Extraterrestrial Sample Curation Center (ESCuC) in Sagamihara, Japan has successfully employed vacuum storage for samples returned from the Hayabusa mission to asteroid Itokawa, and vacuum storage is planned for JAXA's Hayabusa2 mission to carbonaceous asteroid Ryugu, which is currently in flight (Okazaki et al. 2017; Sawada et al. 2017; Yada et al. 2014). In the case of the Hayabusa samples, all of the sample handling processes occurred in purified gaseous nitrogen following the sample container opening process under vacuum conditions (Yada et al. 2014). Installation of a newly developed sample-handling vacuum processing clean chamber (CC) was completed in October 2018 at ESCuC (Okazaki et al. 2017; Sawada et al. 2017; Fig. 5), two years prior to sample return. The entire sample-handling system in ESCuC consists of five chambers: CC3-1, CC3-2, CC3-3, CC4-1, and CC4-2. The returned sample container will first be connected to CC3-1 for opening in a vacuum, and will be transferred to CC3-2 for vacuum handling of samples. The container will then be transferred into CC3-3, where the sample handling environment will be changed from vacuum to purified gaseous nitrogen. Further handling of samples will be done in purified nitrogen in CC4-1 and CC4-2.

Fig. 5 A bird's-eye view image of the sample-handling system for Ryugu samples. The system consists of five chambers: CC3-1, CC3-2, CC3-3, CC4-1, and CC4-2

Sample Processing Cabinets with Remote Participation of Scientists

It is rather common for scientific investigations with astromaterials to require a specific sample, or a specific portion of a larger sample, for subsampling. Principal investigators will occasionally travel to a curation facility to pick out samples and to provide input on sample subdivision and/or sample preparation. This requires that the PI arrange a visit to the facility to communicate directly with the curatorial processors on which samples to pull and ultimately choose for study. In some cases, such travel can require a significant amount of time and cost. Today, communication technology can almost eliminate the need for the PI to travel to the facility through the integration of live, real-time video conferencing sessions with remote scientists. NASA JSC experimented with this technology during the Stardust preliminary examination in 2006 and again in 2014 with the retrofit of one of the Apollo 16 lunar processing cabinets (Calaway 2015). In this most recent technology demonstration, a Leica DMS1000 macroscope and an Axis pan-tilt-zoom (\(18\times\) optical zoom) IP camera system were integrated into the lunar processing glovebox. The Axis camera and macroscope were mounted on the outside of the glovebox and focused through glass. This was done to eliminate the concern for cross-contamination from the two systems.
The Axis camera was mounted to the top of the glovebox looking through the lighting window and could be used for situational awareness during processing or zoomed in to look at sample splits. The Leica macroscope was mounted above the PI observation window at the end of the glovebox. A sample was placed onto a jack-stand, and the macroscope could focus on the sample through the glass window. While both of these commercial-off-the-shelf (COTS) products offer video streaming capabilities, the video integration was complicated by the JSC firewall and mandated government IT security requirements. In addition, remote wireless connections were hampered by the thick walls of the curation facility. For both systems, the video needed to be securely accessed outside of JSC. Therefore, the video stream was required to push through the JSC internal firewall to the JSC public zone and then pass through another firewall to get to the internet for public access. The Axis camera browser software is capable of secure viewing with passwords, and the IP address would be routed accordingly by our internal IT group. For the macroscope video stream, a streaming service like YouTube or USTREAM from the DMZ could be used. For the tech demo, a dedicated USTREAM account was set up. At the March 2015 Lunar and Planetary Science Conference, we demonstrated this system: we successfully live-streamed the Leica macroscope and Axis camera real-time images from the NASA JSC lunar laboratory to the Marriott Hotel in The Woodlands, Texas. However, for this test we used VPN network access to simplify the test due to time constraints. Lunar curation now has all the equipment and tools needed to set up permanent video conferencing with external PIs during a video or teleconference. In the future, more collections could integrate this type of COTS technology to reduce time and travel costs where appropriate.

Small Particle Handling

One particular objective of advanced curation efforts is the development of new methods for the collection, storage, handling, and characterization of small particles. In this context, "small" refers to microscale particles, typically between one and several hundred microns in diameter (though submicron interstellar particles will be analysis targets in the future). Particles in this size range include dust derived from comets and asteroids that is continuously accreted by the Earth, as well as material collected by robotic sampling missions and by astronauts during Apollo missions. The curation of small particles includes the unambiguous identification of particles in/on collection substrates, the transfer of particles between collection, analysis, and storage substrates, sample characterization, sample preparation/subdivision, and the preservation and documentation of samples in a publicly available catalog. Astromaterials curation facilities in the United States, Russia, and Japan currently maintain several small particle collections (Allen et al. 2011): fine-grained lunar regolith samples returned by Apollo astronauts and by Soviet Luna robotic spacecraft, cosmic dust that has been collected in Earth's stratosphere by ER2 and WB-57 aircraft, comet 81P/Wild 2 dust returned by NASA's Stardust spacecraft, interstellar dust returned by Stardust, and asteroid Itokawa particles that were returned by the Hayabusa spacecraft.
NASA and JAXA Curation offices are currently preparing for the anticipated return of two new astromaterials collections: asteroid Ryugu regolith collected by the Hayabusa2 spacecraft in 2019 and returned to Earth in 2020, and asteroid Bennu regolith to be collected by the OSIRIS-REx spacecraft and returned in 2023 (Lauretta et al. 2017). A substantial portion of these anticipated returned samples is expected to consist of small particle components, and mission requirements necessitate the development of new processing tools and methods in order to maximize the scientific yield from these valuable acquisitions.

There are several aspects of microscale astromaterials curation that present challenges distinct from those of macroscopic sample curation. At scales of less than 100 μm, electrostatic and intermolecular forces dominate the behavior of particles. Particles adhere weakly to glass or tungsten needles via van der Waals intermolecular forces, usually enabling transfer between analysis and storage substrates. These transfer operations are hindered by transient electrostatic forces. Triboelectric charging due to contact, separation, and frictional electrification (Matsusaka et al. 2010) is the primary mechanism by which particles are lost during transfer operations (although environmental and instrumental vibrations also contribute to sample loss during transfers), and these triboelectric effects become more severe in low-humidity environments. Hayabusa2 and OSIRIS-REx collections will be curated in sample processing cabinets purged with dry GN2, and developing methods for suppressing triboelectric charge accumulation in these dry environments will be critical for successful sample processing. Sample characterization at the microscale also presents unique challenges. Typically, optical images of submicron to micron-sized particles do not provide sufficient information for investigators to make informed sample selections. Microscale particles are often imaged and characterized in scanning electron microscopes equipped with an energy-dispersive spectroscopy (EDS) detector for elemental characterization; such analyses are useful for investigators requesting samples with desired mineralogy and are necessary to distinguish true extraterrestrial material from terrestrial contamination for samples that are collected in the stratosphere. However, SEM analysis of microscale particles introduces an additional risk of loss due to sample charging from electron beam bombardment; additionally, some fragile organic and mineral phases may potentially be modified by e-beam characterization. In some instances (especially with rare samples), it may be necessary to subdivide a particle via ultramicrotomy or by focused ion beam (FIB) sample preparation. These methods must be carefully considered in order to avoid compromising the scientific integrity of the sample. The objective of advanced microscale astromaterials curation research is to better understand these challenges and to investigate tools, equipment, and methods that facilitate microscale sample processing.

Description of Tools and Equipment Used for Small Particle Handling

Commercially available, low-Ni stainless-steel tweezers (e.g. Dumont No. 5 Dumoxel) can be utilized to reliably manipulate samples as small as several hundred microns by hand. Smaller particles (≥ 50 μm) may be manipulated by tweezers that are fixed to devices that enable mechanical or electrical actuation, especially when mounted on a micromanipulator.
The NASA Curation Office at JSC has acquired two such devices manufactured by Micro Support Co., Ltd.; these devices are being used to investigate methods of particle removal from OSIRIS-REx contact pads. Challenges remain with small particle manipulation via tweezers (either by hand or by electrically/mechanically actuated devices) due to the lack of force feedback, and the risk of deforming or fracturing particles with low tensile strength remains significant. The use of micro-electro-mechanical systems (MEMS) microtweezers for particle manipulation (Keller and Howe 1997) has also been investigated. Initial experiments with these devices revealed force feedback limitations similar to those of stainless steel tweezers. In addition, the silicon devices were more brittle, making removal and placement of particles on rigid substrates precarious operations that often resulted in the shattering of the microtweezers and the loss of the sample. Finally, the low-cost benefits expected from mass production on a single wafer have so far not been realized, and these microtweezers have remained significantly more expensive than their stainless-steel counterparts.

Manipulation of particles by fine-tipped needles is a technique that has been utilized by curators since NASA initiated the cosmic dust collection in 1981. Particles smaller than 20 μm are typically transferred from a collection medium to an analytical substrate (e.g. beryllium disk or epoxy bullet) or to a storage container (e.g. concavity slide) using a microneedle made from glass or tungsten. With skill and practice, curation personnel can transfer particles as small as 5 μm between substrates by hand using a glass or tungsten needle attached to a pin vise. However, involuntary hand motions on the order of 100 μm make routine and reliable transfers tedious and precarious operations. For critical transfers, needles are attached to mechanical or motorized micromanipulators to improve transfer reliability and precision while minimizing user fatigue. The intermolecular forces between the needles and the particles in this size range are typically sufficient to overcome repulsion due to triboelectric charge accumulation. Larger (≥ 20 μm) particles have been more challenging to manipulate. When using the same glass and tungsten microneedles for particles larger than 20 μm, triboelectric charging effects significantly hinder the reliable manipulation of particles. We have recently observed that, by utilizing tungsten carbide needles with low taper ratios (∼3:1), particles as large as 200 μm can be manipulated successfully. We speculate that these needles present a greater contact surface area for intermolecular forces to capture particles, and that the needle shape may aid in the rapid redistribution of accumulated triboelectric charge; however, more tests are needed.

Glass and quartz needles are fabricated using micropipette pullers that concentrate heat at the center of a solid-core glass rod or capillary tube while applying force to each end; this action creates two needles with submicron tips. When a capillary tube is pulled in such a manner and the tip is carefully broken off, a micropipette is created. A vacuum can be applied to this tube, creating a microscopic version of vacuum tweezers. We have investigated utilizing such a system to manipulate particles that are too large to be handled by van der Waals adhesion.
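The scale dependence behind these handling choices can be made concrete with a back-of-the-envelope comparison of idealized sphere-on-flat van der Waals adhesion against particle weight. The Hamaker constant, contact separation, and grain density below are generic placeholder values, and real adhesion is far weaker than this smooth-contact ideal because of surface roughness and charging, so the numbers are indicative of scaling only.

```python
import math

# Back-of-the-envelope scaling: idealized sphere-on-flat van der Waals adhesion,
# F = A * R / (6 * z**2), versus particle weight. Constants are placeholders;
# rough, charged, real-world contacts adhere far less strongly than this ideal.
HAMAKER_A = 1.0e-19      # J, generic order-of-magnitude Hamaker constant
SEPARATION_Z = 4.0e-10   # m, typical contact separation
DENSITY = 3000.0         # kg/m^3, silicate-like grain density
G = 9.81                 # m/s^2

def vdw_adhesion(radius_m: float) -> float:
    return HAMAKER_A * radius_m / (6.0 * SEPARATION_Z ** 2)

def weight(radius_m: float) -> float:
    return (4.0 / 3.0) * math.pi * radius_m ** 3 * DENSITY * G

for diameter_um in (1, 10, 100, 500):
    r = 0.5e-6 * diameter_um
    print(f"{diameter_um:>4} um particle: ideal adhesion / weight ~ "
          f"{vdw_adhesion(r) / weight(r):.1e}")
```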
Our initial results with the vacuum tweezer system indicate that, while it is very efficient at securing larger microscale particles, releasing the particle by removing the vacuum frequently results in sample loss. We speculate that the vacuum action induces strong triboelectric charging effects.

Stereomicroscopes possess imaging characteristics, such as long working distance and laterally correct viewing, that make them extremely well suited for freehand and mechanically assisted manual manipulation of microscale particles. Stereomicroscopes from manufacturers such as Nikon, Leica, and Olympus utilize two main optical designs: the Greenough design, which has two optically independent light paths, and the common main objective (CMO) design, in which optically parallel light paths share a common objective (Zimmer 1998). Greenough designs are preferable in environments in which size and weight must be minimized and where high magnification is not necessary or desired (e.g., suspended over a collector during cosmic dust harvesting). CMO optical designs afford increased magnification compared with Greenough-type microscopes and are utilized in more critical sample operations such as mounting particles onto analysis substrates. For the manipulation of very small (<10 μm) samples, upright microscopes equipped with geared XY stages are utilized. These microscopes are equipped with long-working-distance objectives capable of providing up to 500× magnification. The geared, manual XY stage is coupled to the Z-focus mechanism that raises and lowers beneath a stationary objective; this enables movement in X, Y, and Z independent of objective position. By placing a needle at the focal point of the objective, it is possible to transfer microparticles between substrates by moving the stage rather than by moving the needle. Digital microscopes are particularly useful in processing environments where stereo or upright microscopes would be inconvenient, for instance in a N2 sample cabinet or a temperature-regulated environment. They also introduce the potential to perform curation activities remotely, reducing contamination risks and operator fatigue. JAXA's Hayabusa sample processing cabinet uses three digital microscopes (two mounted inside the cabinet and one mounted externally) to image particles during transfer operations. Digital microscopes are best utilized with micromanipulator-assisted particle transfers, especially if the microscope suffers from image lag.

Micromanipulators are mechanical, hydraulic, and motorized/electrical devices that enable the precise handling of microscale samples. Most commercially available micromanipulators have three axes of motion, with motorized versions often providing a virtual fourth axis of motion (which is desirable for performing micro-fluid injections). Mechanical micromanipulators often use a combination of fine-pitched screw mechanisms and linear guide rails to achieve microscale positioning. Singer Instruments manufactures a mechanical micromanipulator with a 3D pantograph design (Robert 1951); the user holds a pencil-grip stylus and, through the pantograph mechanism, manipulates a probe with a 4:1 reduction of hand motions. Motorized micromanipulators employ a combination of precision stepper motors and worm gear mechanisms to achieve microscale positioning and motion. These have the advantage of being able to be operated remotely and, in some cases, can be programmed for autonomous operation.
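To put the 4:1 pantograph reduction in context, the involuntary hand motions of order 100 μm mentioned earlier translate to roughly 25 μm at the probe tip, which is why mechanical reduction (or motorized positioning) matters for the smallest transfers. The short sketch below simply works through that arithmetic; the tremor amplitude comes from the discussion above, and the other reduction ratios are hypothetical.

```python
def probe_motion_um(hand_motion_um: float, reduction_ratio: float) -> float:
    """Probe-tip displacement produced by a hand displacement through a
    pantograph (or similar linkage) with the given reduction ratio."""
    return hand_motion_um / reduction_ratio

HAND_TREMOR_UM = 100.0                 # involuntary hand motion quoted in the text
for ratio in (1, 4, 20):               # freehand, 4:1 pantograph, hypothetical 20:1
    print(f"{ratio:>2}:1 reduction -> ~{probe_motion_um(HAND_TREMOR_UM, ratio):.0f} "
          "um of motion at the probe tip")
```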
A variety of input mechanisms can be utilized with motorized micromanipulators, including joysticks and rotary optical encoders. Motorized manipulators can also be computer controlled. In order to achieve reproducible, robust, and reliable particle transfers and processing, a combination of microscopes, micromanipulators, and XY/XYZ stages is required. Motorized micromanipulators and XY stages require bulky power supplies and motor drivers (sometimes one per axis of motion), and microscopes with digital cameras often require desktop computers to operate the imaging software. Such equipment occupies large footprints in cleanrooms with limited space, compromises pristine environments with instrument cooling fans, and leads to unsightly tangled masses of cables and wires. The NASA Curation Office has recently obtained an integrated system that includes dual motorized micromanipulators, a motorized XYZ stage, and a high-resolution digital microscope. The MicroSupport AxisPro system utilizes a graphical user interface control system, allowing all electromechanical components to be operated independently or simultaneously via computer mouse. A number of manipulation and sampling tools are available for the AxisPro, including an ultrasonic milling tool and a device that enables the electrical actuation of stainless-steel tweezers. The compact, integrated design of the system enables the possibility of placing the AxisPro in a N2 sample cabinet with an operator performing sample processing activities remotely. So far, the AxisPro system has been used extensively for the development of microsample handling techniques (e.g. implanting particles into, and extracting them from, polyurethane foam collectors).

In order to minimize the risk of sample contamination (especially for collections that have been returned from extraterrestrial sources via spacecraft), materials restrictions are placed on tools and equipment used in sample processing cabinets. JAXA manipulates samples within its Hayabusa processing cabinet using an integrated mechanical manipulation system manufactured by Hitachi (Yada et al. 2014). The system consists of an XYZ stage, left and right micromanipulators, and a sealed digital camera; the system is constructed from 6061 aluminum, 304/316 stainless steel, Teflon, and quartz. No lubrication is used for the bearings, and the entire manipulator is disassembled and serviced annually to maintain performance.

Six-Axis Compact Robot Arms

While 3-axis micromanipulators have been extremely successful for activities involving the transfer of isolated particles in the 5–20 μm range (e.g. from a microscope slide to an epoxy bullet tip or a beryllium SEM disk), their limited ranges of motion and their lack of yaw, pitch, and roll degrees of freedom restrict their utility in other applications. For instance, curation personnel removing particles from cosmic dust collectors by hand often employ scooping and rotating motions to successfully free trapped particles from the silicone oil coatings. Similar scooping and rotating motions are also employed when isolating a specific particle of interest from an aliquot of crushed meteorite. While cosmic dust curators routinely perform these kinds of manipulations using handheld tools, operator fatigue limits the number of particles that can be removed during a given extraction session. The challenges for curation of small particles will be exacerbated by mission requirements that samples be processed in N2 sample cabinets.
We have been investigating the use of compact robot arms to facilitate sample handling within gloveboxes. Six-axis robot arms potentially have applications beyond small particle manipulation. For instance, future sample return missions may involve biologically sensitive astromaterials that can be easily compromised by physical interaction with a curator; other potential future returned samples may require cryogenic curation (Calaway and Allen 2013). Robot arms may be combined with high resolution cameras within a sample cabinet and controlled remotely by curation personnel. Sophisticated robot arm and hand combination systems can be programmed to mimic the movements of a curator wearing a data glove; successful implementation of such a system may ultimately allow a curator to virtually operate in a nitrogen, cryogenic, or biologically sensitive environment with dexterity comparable to that of a curator physically handling samples in a glovebox. Methods for Mitigating Triboelectric Charging Developing tools and methods for mitigating the effects of triboelectric charging during small particle processing activities is a major objective of microscale advanced curation research. Triboelectric charging results from contact, separation, and frictional charge induction (Matsusaka et al. 2010). Examples include friction between storage substrates and instrument support stages or friction between manipulation tools and particles. Many storage substrates currently in use for small particle curation are fabricated out of glass, quartz, corundum, or other optically transparent material that enables the utilization of transmitted illumination; for example, interplanetary dust particles have traditionally been stored and distributed to investigators between a flat glass slide and a glass concavity slide. However, most transparent materials possess poor electron mobility, and any local accumulated charges are unable to easily redistribute. We have identified friction between these slides, particles, manipulation tools, and instrument support stages as a major source of sample electrification. In cases where substrate transparency is not a curation requirement, the glass support slide may be replaced with a silicon wafer. Particles retain a high level of visibility on such substrates (especially under coaxial illumination), and triboelectric charging is significantly reduced such that particles between 40 and 100 μm can be reliably manipulated and arranged in arrays without additional charge-mitigation devices. Recently, we have experimented with producing storage receptacles in silicon using focused ion beam (FIB) milling (Fig. 6). We used an FEI Quanta 3D-FEG Focused Ion Beam (FIB) to mill several shallow (<20 μm) depressions between \(30~\upmu \mbox{m}^{2}\) and \(80~\upmu \mbox{m}^{2}\) into the surface of a silicon chip; material was sputtered using a 65 nA Ga+ beam at 30 kV. A 10 μm particle of CM2 meteorite was placed into one of the FIB-produced wells using a pantograph mechanical micromanipulator. The charge-dissipative nature of the Si chip enabled us to successfully acquire a secondary electron image of the stored particle using a 190 pA beam current at 10 kV. Storage substrates that also enable electron beam imaging and characterization are desirable, as they minimize the need for high-risk microscale particle transfers between storage and analysis substrates. 
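For planning purposes, the milling time for storage wells like those described above can be estimated from the beam current and a volumetric sputter rate. The rate used below is an assumed, order-of-magnitude figure for Si under a 30 kV Ga+ beam, and the well depth is likewise assumed; real times depend on material, geometry, and redeposition:

# Order-of-magnitude milling-time estimate for FIB storage wells like those above.
sputter_rate_um3_per_nC = 0.25   # assumed nominal removal rate for Si
beam_current_nA = 65.0           # beam current quoted in the text (1 nA = 1 nC/s)
well_area_um2 = 80.0             # upper end of the well sizes quoted above
well_depth_um = 15.0             # assumed depth, consistent with "shallow (<20 um)"

volume_um3 = well_area_um2 * well_depth_um
dose_nC = volume_um3 / sputter_rate_um3_per_nC
print(f"~{dose_nC / beam_current_nA:.0f} s of milling per well")  # on the order of a minute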
We are currently investigating this technique to produce storage wells in other charge-dissipative substrates that could enable in-situ elemental analyses.

Fig. 6 Secondary electron image of Focused Ion Beam (FIB)-produced wells in a Si chip for particle storage and manipulation.

For instances where the use of transparent substrates is unavoidable, other steps may be taken to minimize frictional contact between insulating materials. For instance, we have constructed slide support frames out of conducting and electrically grounded materials (e.g., miniature aluminum extrusion framing systems). One effective method for minimizing triboelectric charging effects is to restrict small particle processing to times when the ambient humidity is above 60% (Guardiola et al. 1995); however, this method is not viable for microscale samples that are processed in dry GN2 sample cabinets. Another extremely effective method for mitigating triboelectric charging effects is the use of a 210Po alpha ionizing source. Companies such as NRD® manufacture commercially available devices designed to reduce static charge via alpha particle emission. Because alpha particles have a short penetration range in air, the sources are most effective when placed within 25 mm of the sample. Tools, substrates, and samples can be periodically exposed to the Po-210 source as sample electrification is observed to worsen; alternatively, if working distance permits, the source can be left in place during particle transfer operations to remove any transient charges as they are produced. Due to the short half-life of Po-210 (138 days), sources must be replaced annually to remain effective. Also, the use of radioactive sources may be prohibited in certain facilities and typically requires specific safety training and security protocols. JAXA has developed an electrostatically controlled particle manipulation system to handle Itokawa particles in an ultrapure GN2 sample cabinet (Yada et al. 2014). Instead of attempting to neutralize the charge that has accumulated on the particle, they use it to advantage by attracting the particle with an oppositely charged needle. The system utilizes a quartz needle with an embedded platinum wire; the samples rest on a grounded conductive surface. A charge is induced on the needle by applying a voltage to the embedded platinum wire; this charge is used to attract and release particles and transfer them to custom gold SEM mounts for characterization or to storage wells in quartz slides for allocation and archiving. The NASA Curation Office has reproduced the electrostatic manipulation system (using needles fabricated at JAXA as part of an ongoing international collaboration between NASA and JAXA curation facilities) and is currently investigating applications of the system for its microscale particle collections. The tools and methods described here represent only a fraction of the techniques and instrumentation currently utilized and under development for microscale astromaterials sample processing and analysis. An international collective of curators and small-particle scientists at curation facilities, research institutions, and commercial industries continues to collaborate to improve our ability to extract high-quality science from these valuable and unique micro-sized samples.
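The replacement interval for the alpha sources discussed above follows from simple radioactive decay. The short calculation below, using the 138-day half-life quoted in the text, shows how quickly a 210Po source loses strength; the threshold at which a weakened source becomes inadequate for charge mitigation is not modeled here:

import math

half_life_days = 138.0  # Po-210 half-life quoted above
for days in (138, 276, 365):
    remaining = math.exp(-math.log(2.0) * days / half_life_days)
    print(f"after {days:3d} days: {remaining:.0%} of the original alpha activity")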
Advanced Precision Cleaning for Storing and Handling Astromaterials

Precision cleaning of isolation chambers (e.g., gloveboxes and desiccator cabinets), sample containers, and processing tools is important for mitigating terrestrial cross-contamination to pristine astromaterials. Once samples arrive on Earth, the sample environment and how the samples are handled begin to alter their pristine nature. As stated before, the term "precision cleaning" simply means cleaning materials to a prescribed level of cleanliness, which is measured and verified. Aerospace, semiconductor, pharmaceutical, and optics industries are historically concerned with precision cleaning. Standards for precision cleaning are widespread across industrial processes through trade organizations like the Institute of Environmental Sciences and Technology (IEST) (https://www.iest.org), ASTM International (formerly American Society for Testing and Materials; https://www.astm.org), SEMI (https://www.semi.org), and others. Since curatorial precision cleaning does not directly align with a single industry's cleaning standard, curation precision cleaning procedures and protocols for handling astromaterials are derived from many of these established industry standards. NASA also has its own flight hardware precision cleaning standards, which often differ depending on the program and mission. For the NASA Curation Office, precision cleaning standards were mainly derived from the Apollo program; the early cleaning recipes and their history are discussed in Calaway et al. (2014). Currently, precision cleaning at the NASA Curation Office is divided into three categories: PreClean, Final Clean, and Advanced Clean. PreClean is considered gross cleaning when parts arrive from fabrication/machining and/or procurements from a vendor. Final Clean is typically linked to the use of a final cleaning agent, drying, and packaging of the part. During Final Clean, the hardware cleanliness is also measured and verified to meet a certain standard of cleanliness for use. Advanced Clean is a term used for non-routine avenues of cleaning and/or testing of new cleaning methods and techniques. Advanced cleaning is typically done after a routine PreClean and Final Clean process has been completed. This might include techniques that require advanced particulate removal, organic-free cleaning, or sterility. Advanced Clean may also use standard or advanced cleanliness verification processes to assess surface cleanliness using a variety of state-of-the-art instrumentation. In the NASA Curation Office, PreClean and Final Clean support all collections by cleaning the sample processing tools and containers. However, each collection has its own tools and containers, which are cleaned in entirely separate cleaning sessions. All hardware items are put away with the exception of items from the collection being cleaned. The cleaning tanks are cleaned and refilled before starting a new collection. All of this careful effort is taken to mitigate the potential for cross-contamination between the different astromaterials collections. Before attempting gross cleaning, it is important to understand and verify the cleaning chemical compatibility with the material that is to be cleaned. In addition, complex equipment and tools are routinely disassembled, cleaned, and then reassembled in a cleanroom after the precision cleaning is completed.
PreClean typically consists of removing any visible grease, dirt, adhesives, or other marks with the use of polyester wipes saturated with isopropyl alcohol (IPA) (70% IPA and 30% UPW), if compatible with IPA. If IPA wiping does not work or is not compatible with the material, other cleaners that will not contaminate the material will be used to remove the visible dirt (e.g., citrus-based solvents to remove silicones, ammonia-based solutions, hexane, and household dish liquid have been used for initial gross cleaning). In addition, mechanical gross cleaning with tools such as razor blades and scrubbers (e.g., Scotch Brite pads and nylon brushes) may also be necessary in conjunction with cleaning chemicals. After all visible material is gone, PreClean uses a gross degreasing procedure to remove any machining oils and grease from manufacturing. This is done by soaking and sometimes sonicating the part in a degreasing detergent or chemical. Brulin 815GD (at 5 to 30% concentration with UPW) is commonly used for stainless steel, aluminum, and titanium metal parts. Freon 113 replacements are also sometimes used for degreasing; for example, Honeywell Solstice Precision Fluid, DuPont Vertrel specialty fluids (e.g., Vertrel XF), or 3M HFE 7100-DL. Dilute nitric acid is also routinely used to remove trace metal contaminants, such as lead, from newly fabricated items. After degreasing, the hardware is then cleaned with a surfactant. Parts are mechanically scrubbed with polyester wipes or soft brushes in a surfactant bath and then placed into a sonication bath for 5 to 15 minutes. Afterwards, parts are removed and spray rinsed with UPW and air dried. Final Cleaning of equipment and tools is typically centered on high-purity chemicals or cleaning agents. Final Clean also incorporates a verification step to evaluate the cleanliness of the part and certify the level of cleanliness and qualification for use. From 1966 to 1994, the final cleaning agent at JSC was Freon 113. Established during Apollo, Freon 113 was an excellent degreaser and final cleaning solvent. The United States government environmental policies on ozone depleting chemicals phased out chlorofluorocarbon production from 1992–1995, which forced the NASA Curation Office to change degreasers and final cleaning agents. After an in-depth research process, the final cleaning agent was changed to UPW in 1994 and is used currently (see Sect. 2.2.2 for details on current UPW purity and system). Final Clean sometimes repeats surfactant cleaning or uses a pre-degreaser; however, most of the time, parts are rinsed with UPW and placed into a UPW cascade bath and sonicated for 5 to 15 minutes. The UPW is often heated to 40 to \(70\ ^{\circ}\mbox{C}\) to provide better cleaning. GN2 is also used toward the end of sonication to remove particulates from the bath. Parts are then removed from the bath and thoroughly spray rinsed with UPW. If another high purity chemical is used (such as IPA or a Freon 113 replacement), it is applied at this stage, and the parts are then spray rinsed again with UPW and dried with GN2 (sometimes heated GN2) to remove all visible water. During the final rinse, run-off aliquots of UPW may be taken for optical particle counts, liquid particle counts, or TOC analyses. It should be noted that traditional non-volatile residue (NVR) mass balance measurements and black-light inspections were formerly used; however, these methods were eliminated because Final Clean routinely produced cleanliness below the detection limits of those techniques.
The part is then either left to continue to air dry or placed into an oven to remove any remaining water. After drying, parts are triple bagged in FEP Teflon or nylon bags depending on the collection and its material restrictions. Before bagging, precision cleaning verification often entails an optical inspection of the parts. If parts are shown not to be cleaned to the specified cleanliness standard, they are sent through the process again. In the NASA Curation Office, cleanliness verification frequently references IEST-STD-CC1246E, Product Cleanliness Levels—Applications, Requirements, and Determination (IEST-STD-CC1246E 2013). This is a derivative of the discontinued military standard MIL-STD-1246 (MIL-STD-1246C 1994). IEST-STD-CC1246E specifies hardware surface cleanliness levels per unit area for both particles and non-volatile residue (NVR). Particle counts are measured by optical microscopy and/or liquid particle counts. Most NASA astromaterials collections use the Level 50 cleanliness standard from IEST-STD-CC1246E. However, the Genesis collection has a dedicated precision cleaning lab, and hardware is generally cleaned to Level 25. For example, cleaning for flight of the Genesis mission involved most surfaces being cleaned to level 25 (no particles \(> 50~\upmu \mbox{m}/0.1~\mbox{m}^{2}\)) for particulates per surface area (a function of particle abundance vs. particle size). NVR is traditionally measured by gravimetric mass determination (as a function of mass vs. surface area), but such measurements are often limited by technique sensitivity. As an example, R1E-4 is a designation for NVR indicating \(<100~\mbox{ng}/0.1~\mbox{m}^{2}\). More recent techniques have relied on bench-top total organic carbon (TOC) analyzers (as a function of ppb vs. surface area) and more time-consuming analytical instrumentation. This standard is useful because it is frequently cited when cleaning hardware for spaceflight, including sample collection devices on spacecraft. However, new missions generally cite total organic carbon (TOC) requirements, and TOC values are not always directly transferable to NVR levels.

Technical Tensions Among Cleaning for Particulate, Molecular Organics, and Sterility

UPW cleaning to remove inorganic particles is a mature process for curation of astromaterials. However, the expanding diversity of requirements for high level organic cleaning and sterility invokes conflicts among cleaning techniques that need to be managed. Management options for reconciling these tensions require attention to detail, perhaps a complex handling sequence, or subdivision of samples into separate handling tracks such as organic-cleaned vs. metal-free (e.g. metal vs. plastic handling tools and containers). A simple model of a cleaning process line (Fig. 7) starts with pre-cleaning followed by UPW cleaning for particle removal. This particle-cleaned product is suitable for inorganic usage or for further cleaning. The product could be further cleaned to high organic cleanliness or to sterility by separate tracks. Selection of a direct separate track may be most efficient and adequate. Greater cleanliness or sterility might be achieved by cleaning for both low organics and sterility. However, due to conflicts in materials, environment, process chemicals, and packaging for high level organic cleaning and sterility, adaptability for specific situations is required.
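For context on the Level 50 and Level 25 designations cited earlier in this section, the cumulative particle counts allowed by IEST-STD-CC1246 (and its MIL-STD-1246 predecessor) follow a log-squared size distribution. The sketch below evaluates that relation; it is a simplified reading for illustration only, and the standard itself remains the controlling document:

import math

def allowed_particles_per_0p1_m2(level, size_um):
    """Approximate cumulative count of particles >= size_um per 0.1 m^2 of
    surface for a given cleanliness level, per the IEST-STD-CC1246 /
    MIL-STD-1246C distribution (nominally valid from 1 um up to the level size)."""
    return 10 ** (0.926 * (math.log10(level) ** 2 - math.log10(size_um) ** 2))

for level in (50, 25):
    counts = {size: round(allowed_particles_per_0p1_m2(level, size), 1)
              for size in (5, 15, 25, 50)}
    print(f"Level {level}: {counts}")
# Level 50 permits roughly one 50 um particle per 0.1 m^2, while Level 25 permits
# essentially none larger than 25 um, consistent with the Genesis example above.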
Fig. 7 Basic cleaning process for hardware and tools used in astromaterials curation laboratories. Items to be cleaned are first introduced to a PreClean process that is configured for gross cleaning. Afterwards, items are introduced to a Final Clean process where they are precision cleaned, cleanliness level verified, and packaged for use in the lab. If specialized advanced cleaning is desired, the item(s) are further processed after the routine Final Clean process. This diagram shows the hypothetical process path for organic cleaning and sterilization of hardware and tools. These processes could be a single advanced cleaning process or a combination of several advanced cleaning processes. In addition, cleanliness verification can occur at multiple points during the cleaning process or after the process is completed before packaging. Hardware and tools, coupons, and/or final cleaning agent aliquots are commonly used to verify cleanliness to a set standard.

The revived need for both high-level organic cleanliness and sterility, driven by recent missions such as OSIRIS-REx, Hayabusa2, and possibly Mars Sample Return, has resulted in investigations into advanced techniques and methods for cleaning and for verification of cleanliness. Traditional methods to achieve organic cleanliness, such as solvent extraction and bake-out, can be augmented by UV-ozone cleaning, plasma cleaning, supercritical fluid cleaning, and CO2 SNOW cleaning (Mickelson 2002a, 2002b; Calaway et al. 2007, 2009; King et al. 2010; Schmeling et al. 2013; Kuhlman et al. 2013). Assessment of surface cleanliness has included the use of instrumentation such as SEM, TEM, FT-IR, Raman, XPS, SIMS, AFM, LC-MS, DART-MS, TD-GC-MS, LA-ICP-MS, and VPD-ICP-MS. Protocols to remove adventitious carbon must be followed by surface passivation and extreme control of the environment, which includes packaging. Packaging materials must not off-gas organics. Thus, maintaining an organic-clean surface after cleaning is challenging. Traditional heat-sealing will not work for organically sensitive samples. For example, the Mars 2020 CK collection has used hermetically sealing bag clips in place of traditional heat sealing. Techniques for achieving sterility of containers and tools include standard autoclaving, dry heat, UV, hydrogen peroxide vapor, gamma irradiation, and electron beam irradiation (Allen et al. 1999; Clark 2004). NASA JSC is currently constructing an advanced precision cleaning lab to further study some of these techniques for future missions.

Development of New Astromaterials Acquisition Capabilities on Earth

The study of astromaterials in the laboratory allows direct analysis of material arising from the full breadth of the history of our Solar System. The continuing advance of technology has improved not only our technological capacity to make measurements but also our ability to minimize contamination and sample modification during collection of freshly fallen meteorites and cosmic dust. Some of these improvements are related to educating the public on proper methods of handling meteorites, but many of the technological advances have focused on improving our ability to track and find materials. In many cases, these improvements result in reduction of exposure time to uncontrolled conditions, which could reduce terrestrial contamination, especially if clean-collection practices are used during recovery operations.
Collection of material has been steadily improved by advances in ground-based and satellite sensors, dissemination of information with the growth of the internet and various social media platforms, and especially by the flow of data through freely available data sources. Although the collection of astromaterials may seem, at first, to be a prerequisite of curation, one of the primary goals of advanced curation is to maximize the science returns of astromaterials samples, and improvements in sample collection techniques have a direct benefit to science. We are in the midst of an exciting period of growth of truly innovative astromaterials sample collection techniques on Earth, and this section illustrates a current snapshot of a rapidly evolving field.

New Astromaterials Collection Capabilities for Cosmic Dust

Since its inception in 1981, the NASA Cosmic Dust collection has collected interplanetary dust from Earth's stratosphere using flat-plate, oil-coated collectors. The oil is a high-viscosity silicone (polydimethylsiloxane, \((\mbox{C}_{2}\mbox{H}_{6}\mbox{OSi})_{n}\)) that is mechanically stiff at collection altitude and temperature but engulfs and protects collected material on return to room temperature. This oil has a long track record of successful recovery of cosmic dust, and recent tests have shown that even oil used for sampling in 1981 retains its original viscosity properties. This is heartening for long-term storage of cosmic dust, but silicone oil has a significant drawback in that it is a contaminant for important studies such as oxygen isotopes, organic species, and amorphous silicates. The community has expressed a desire for at least a subset of samples collected "dry", without the contaminating oil. Foam collectors are a promising means of accomplishing dry collection. A small number of foam collectors have been flown as a test of concept, which yielded a few cosmic dust samples (Messenger et al. 2015). The results were generally acknowledged as positive, and comments to the NASA CD Curator indicate that the scientific community has a strong interest in additional dry collection. To this end, NASA will fly foam collectors as a subset of future collection flights. Collection in foam appears to be straightforward and Messenger et al. (2015) claim that the collection rate is comparable with silicone oil (albeit with a short total collection time for reference), but extraction of collected material from foam is a nontrivial exercise. Foam features relatively deep pits in its surface from which the particles must be extracted. This problem is exacerbated by the observation by Messenger et al. (2015) that "20–50%" of individual foam cells were broken when observed post-flight, probably by aerodynamic pressure. In order to test identification and extraction methods, NASA Cosmic Dust personnel fabricated an analog foam collector by adhering a \(1/8''\) thick sheet of white polyurethane foam to a surplus Lexan IDP flag using double-stick tape; excess foam was trimmed to match the profile of the flag. Small (<20 μm) particles of Bells CM2 meteorite were transferred from a concavity slide into individual foam cells using a MicroSupport AxisPro micromanipulation system and bent glass needles. 210Po sources were utilized to minimize triboelectric charging effects. Ten particles were implanted into the experimental foam collector apparatus using this technique. The transfer process was then reversed to remove four of the particles from the collector onto cleaned glass slides.
None of the particles were lost due to vibration or triboelectric charging effects. Ultimately, foam may be replaced by a more rigid material, but this awaits future work. For the near term, the use of a computer-controlled micromanipulator shows promise in removing material from foam collectors using standard pulled-glass needles. In addition to dry foam collectors, NASA Cosmic Dust is conducting a project with undergraduate students at Texas A&M to develop a prototype, high altitude balloon-based dust collection platform. The intent of this new system is to supplement existing aircraft-based collection for two major reasons. First, expanding into a new collection platform adds programmatic depth and resilience to CD collection efforts. Should the existing ER-2 and/or WB-57 aircraft become unavailable (either temporarily or permanently), dust collection will continue with a balloon-based platform. NASA balloon flights also regularly operate from both the northern and southern hemispheres, opening up the possibility of CD flights intended to collect material from cometary debris streams, which preferentially impinge on the southern hemisphere. The second reason is to offer new ways to improve CD "timed collection" efforts (Dermott and Liou 1994; Messenger 2002). "Timed collection" is collection of material in the stratosphere timed to coincide with the settling time of material sourced from a specific meteor shower and thus with a specific parent body. Collection by aircraft is possible and has been demonstrated (e.g. timed collection of comet Grigg-Skjellerup in 2003 and of comet Giacobini-Zinner in 2012), but significant flight time constraints can impede both the total flight time and timing of "timed collection" attempts. Collection by high-altitude balloons may be more accommodating, as NASA long-duration balloons (LDBs) feature flight times of up to 100 days. Cosmic dust collector(s) could fly on long-duration missions and deploy for collection only during the settling time for material from a specific meteor shower, maximizing collection time at precisely defined intervals to maximize the chances of collecting material from a specific parent body. A cosmic dust collector prototype for use on high altitude balloons was tested in mid-2019 at the NASA balloon research center in Fort Sumner, NM. The prototype is called Cometary and Asteroidal Research of Dust in Near-space Atmospheric Levels (CARDINAL), a name chosen by the students. CARDINAL uses a swing arm with collectors at each end as a low-power means to move air over the collectors, spinning the arm with a small electric motor. Collector size, rotation rate, and swing arm length were chosen to produce an estimated one cosmic dust particle per day per collector. While this collection rate is about 1/48th that of aircraft-based collection, LDB flights can last up to 100 days as opposed to the typical tens-of-hours collection time of aircraft-based collection. The lower airspeed of balloon-based collection may also be advantageous for collection using collectors that are free of silicone oil, or "dry" collection. CARDINAL seals the swing arm within its body to protect the collectors from contamination, and remains sealed at all times below a set altitude. The lid is movable to expose the collectors at high altitude. CARDINAL draws power from the balloon gondola, and both housekeeping and operations data are stored by the onboard microprocessor for post-flight analysis.
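A rough yield comparison based on the rates quoted above suggests why the slower balloon platform is still attractive. The aircraft collection time below ("tens of hours") is an assumed representative value, not a measured campaign duration:

balloon_rate = 1.0                  # particles per collector per day (CARDINAL design estimate above)
aircraft_rate = 48 * balloon_rate   # aircraft collection cited as ~48x the balloon rate
balloon_days = 100                  # long-duration balloon flight
aircraft_hours = 40                 # assumed "tens of hours" of aircraft collection

print("balloon yield  ~", balloon_rate * balloon_days, "particles per collector")
print("aircraft yield ~", aircraft_rate * aircraft_hours / 24, "particles per collector")
# A full LDB flight could match or exceed a typical aircraft campaign per collector.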
CARDINAL is self-contained with respect to communications, with all flight functions programmed prior to flight and no communications needed with a ground station or other external controller. Testing at Fort Sumner revealed significant performance shortcomings, which prevented the first test flight. A second design, which draws heavily on lessons learned from CARDINAL, is currently in early development. This design will simplify the collector encapsulation design, reduce weight considerably, and feature its own battery and solar power module that will perform double duty as flight trim ballast. Ultimately, the intent of this project is to collect cosmic dust during precisely defined periods when specific meteor showers are active, to attempt to collect material from known cometary parent bodies. Both oil-based and dry-foam collectors will be used as part of the continuing development of dry-foam cosmic dust collection. New Astromaterials Collection Capabilities for Meteorites Meteorite falls represent opportunities for the recovery of samples that have been briefly exposed to the oxidative, organic-, and moisture-rich environment at the Earth's surface, which is also teeming with life. Substantial contamination of freshly fallen meteorites by such exposure can occur in a matter of days to weeks (Burton et al. 2014; Hilts et al. 2014; Kebukawa et al. 2009). At the same time, advances in curation and contamination knowledge in support of sample return (e.g., Allen et al. 2011; Dworkin et al. 2018; Yada et al. 2014) have resulted in the establishment of curation facilities that prevent or mitigate against such contamination (Herd et al. 2016). Therefore, the present-day challenge is to reduce the amount of time that meteorite samples spend in the field and apply the best methods for their field collection and laboratory curation and handling. The faster the meteorite is recovered and removed to a curation facility, the more scientifically valuable such a sample will be. Rapid collection of meteorites can be seen as strongly complementary to established efforts to collect meteorites from dry deserts such as the Sahara (Grady 2000) and Antarctica (Harvey 2003). These collection efforts produce the large number of meteorites necessary to routinely produce weathered but unusual types such as martian, lunar, ungrouped achondrites, and many others, while rapid collection of fresh falls produces a relatively small number of unweathered meteorites. The combination of both approaches provides a comprehensive approach to meteorite collection, which facilitates study of a wide range of meteorite types with unweathered examples for studying weathering effects, pristine organics and fragile mineral phases (e.g., Haberle and Garvie 2017), and as ground truth for quantifying alteration due to terrestrial weathering. Significant technological advances have been made in recent years that enable more precise observations and more accurate modeling of meteorite falls, and thus the more rapid potential recovery of the meteorites. Advances fall into two main categories: improved observation of meteor/fireball phenomena, and characterization and modeling of meteorites in dark flight. Fireball Detection and Tracking One of the most important "sample return spacecraft" may be the Earth itself. As our planet orbits the Sun, it collects around 40000 tons of extraterrestrial material each year (Flynn et al. 2004; Zolensky et al. 
2006a; Zook 2001), ranging from microns in size to occasional large meteorites (and less frequent large impactors). This material originates from a wide range of parent bodies and so has the potential to inform us about the status and histories of a great many parent bodies. Historically, efforts such as the Prairie Fireball Network (Wetherill and Revelle 1981), the Meteorite Recovery and Observation Project (MORP) in Canada (Halliday et al. 1978), the European Fireball Network (Oberst et al. 1998), and others (Bland 2004; Colas et al. 2015; Cooke and Moser 2011; Gritsevich et al. 2014; Hindley and Houlden 1977; Kokhirova and Borovička 2011; Shiba et al. 1997; Sullivan and Klebe 2004; Trigo-Rodriguez et al. 2006; Watson 2009; Weryk et al. 2008; Wiśniewski et al. 2017) surveyed meteors using networks of cameras and recovered small numbers of meteorites. Perhaps the most scientifically significant outcome of these efforts was their recovery of meteorites paired with calculations of their original orbits. This allowed something new in meteoritics—laboratory studies of the recovered meteorites were given a precise "home" in the Solar System, at least immediately prior to the meteorites' fall to Earth. The large number of fireballs observed by these networks and need for accurate fall location calculation encouraged the development of meteor dynamics (e.g., Ceplecha et al. 1998) and strewn field modeling. Today, improvements in digital imagery, geographical information systems, computational capability, and the growing power of the internet to quickly collate data submitted by the public, have driven the development of new fireball reporting networks (Venton 2017). A hallmark of these new networks is their emphasis on including the general public in reporting and other forms of participation (Day et al. 2018a). Another is regular calculation of fireball orbits, such that today the orbits of more than two dozen meteorite falls are known. The Desert Fireball Network (DFN) has used a network of 52 automated digital camera devices to recover four meteorite falls in the period of 2007–2015 (Bland et al. 2012; Sansom et al. 2015). All of the falls are associated with pre-atmosphere orbits, as are a larger number of fireballs that have not yet yielded meteorites. Recent funding from the Australian Research Council expands the Australian-based Desert Fireball Network into a Global Fireball Observatory (GFO), bringing the total area covered by all-sky cameras to over 12 million \(\mbox{km}^{2}\), and enabling 24-hour all-sky observation from both hemispheres with 156 state-of-the-art camera stations. The GFO is expected to see 800 bright fireballs every 6 months and track 5 meteorite falls per month globally. Eyewitness Reporting via the Internet Historically, most meteorite falls have been recovered based on eyewitness accounts and many of these recoveries were facilitated by eyewitnesses close enough to observe a meteorite striking the ground. Many eyewitnesses doubtlessly observed meteorite falls from a distance, but the fragmented nature of eyewitness reports and lack of widespread public understanding inhibited collection of fallen material. With the growth of the internet, however, several organizations have begun collating eyewitness accounts into meaningful bodies of data capable of rapidly constraining the site of potential meteorite falls. 
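One simple way collated eyewitness reports can constrain a fall region is by intersecting sighting azimuths from separate observers, as illustrated in the sketch below. This is a minimal, local flat-Earth approximation with assumed inputs, not the algorithm used by any of the reporting networks:

import math

def intersect_bearings(p1, brg1, p2, brg2):
    """Intersect two lines of sight on a local flat-Earth approximation;
    adequate only for short eyewitness baselines. p1, p2: (lat_deg, lon_deg);
    brg1, brg2: bearings in degrees east of north. Returns (lat, lon) or None."""
    lat0 = math.radians((p1[0] + p2[0]) / 2.0)
    def to_xy(p):  # km east/north of an arbitrary origin
        return (111.32 * math.cos(lat0) * p[1], 110.57 * p[0])
    (x1, y1), (x2, y2) = to_xy(p1), to_xy(p2)
    d1 = (math.sin(math.radians(brg1)), math.cos(math.radians(brg1)))
    d2 = (math.sin(math.radians(brg2)), math.cos(math.radians(brg2)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # sight lines nearly parallel; no useful intersection
    t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    x, y = x1 + t * d1[0], y1 + t * d1[1]
    return (y / 110.57, x / (111.32 * math.cos(lat0)))

# Two hypothetical observers ~55 km apart sighting the same fireball:
print(intersect_bearings((35.0, -97.0), 45.0, (35.0, -96.4), 315.0))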
The non-profit American Meteor Society (AMS) stood up a webpage for this purpose in 2006, allowing individuals to log meteor sighting reports at no cost to the user (https://www.amsmeteors.org). The sightings are collated into reports of individual fireballs, and automated calculations, based on sighting azimuths, produce an estimated ground track for each event. Similar websites are operated by the International Meteor Organization (IMO) (https://www.imo.net), Fireballs in the Sky in Australia (http://fireballsinthesky.com.au), the UK Meteor Network (https://ukmeteornetwork.co.uk), and EXOSS Citizen Science in Brazil (https://exoss.org). The AMS, for example, recorded 5473 separate fireballs in 2017 and has produced eyewitness-based reports for meteorite falls including Park Forest, IL (27 March 2003), Battle Mountain, NV (22 August 2012), Sutter's Mill, CA (22 April 2012), and others. With the increase in public access to the internet, these organizations have seen significant growth and can rapidly provide public notice of new meteorite falls (Fig. 8).

Top-down composite view of weather radar signatures of falling meteorites. This shows meteorites falling during the Park Forest, IL meteorite fall (26 March 2003). The meteorites are size sorted during free fall to the ground with the most massive stones landing first, and individual radar "sweeps" record a cross-section of the resulting curtain of falling material. Signatures here record meteorites from 15.5 km down to 5.0 km altitude, and the altitude range can change for different falls based on the observation geometry of nearby radars. Typical detection timing for a meteorite fall ranges from radar detection of the fireball itself to observation of the last material (larger than dust) to reach the ground ∼10–12 minutes later. Data are provided via website access by NOAA, and this image was composed in Google Earth.

Weather Radar Detection of Meteorite Falls

Weather radars are commonly used for weather observation and forecasting worldwide, but the cost of building and maintaining a nationwide weather radar system demands that most weather radars are operated as nation-wide, government-run networks. The upside of this is that data are standardized within a national network and many countries maintain archives of their radar imagery. The downside is that public access to weather radar imagery is scant, with most data available in a short-lived image download format, often in the form of a calculated product such as rainfall rate. The United States has, arguably, led the way in free public dissemination of weather radar imagery, with the nationwide NEXRAD network operated by the National Oceanic and Atmospheric Administration (NOAA). NEXRAD data are freely available in all formats, from lightly formatted data to compiled images and calculated products (Fig. 9). All NEXRAD data are available online at no cost and are updated in near real time. NEXRAD data are made available on the internet at such a rapid rate that, for a typical meteorite fall lasting 10–15 minutes, at least one set of radar data showing falling meteorites is available online before all of the meteorites have reached the ground. Finding a meteorite fall in weather radar imagery currently requires manual data processing and analysis, but falls have been identified within hours of the event via weather radar imagery (Jenniskens et al. 2012) and the possibility exists for more rapid, automated identification of meteorite falls.
Eyewitness accounts of individual meteor events collected by the American Meteor Society (AMS), shown by year. The rise in observed events tracks with the growth of the internet and the popularity of the AMS website.

Weather radar detection of meteorite falls was first demonstrated with the Ash Creek, TX fall of 15 Feb 2009 (Fries and Fries 2010). Two researchers, Drs. Marc Fries and Robert Matson, had been investigating weather radar for this purpose without knowledge of each other's efforts and independently noted the ability of NEXRAD radars to detect falling meteorites and provide their fall location with great accuracy. Ash Creek was also noted by a National Weather Service office in Dallas, TX, which serendipitously spotted the fall in the imagery of the KFWS radar. Ash Creek was a perfect test case in that it was a sizable fall that occurred in otherwise clear skies and stands out clearly in radar imagery. Since then, twenty-four recovered meteorite falls have been identified in NEXRAD imagery from 1997 to the present, with an additional thirteen identified that have not been recovered for various reasons (Fries et al. 2017). This is an average of 1.8 meteorite falls found per year, with 1.1 per year recovered in the United States and Canada (Fig. 10). Falls are identified by a combination of factors. Falls occur at the time and location identified by eyewitnesses, progress from high altitude to low over a period of 10–15 minutes, may include short-range turbulence caused by supersonic meteorites, and progress from moderate to high spectral width (a measure of the range of sizes of reflectors in a given image pixel) to low spectral width as the meteorites size-sort on the way to the ground. By contrast, nearly everything else that appears in weather radar imagery—such as weather, birds, insects, and aircraft—moves laterally and does not show the other features listed above. Work is currently underway to include quantitative measurements of meteorite fall mass, and it may be possible to suggest meteorite types from radar data based on the fragmentation behavior of the bolide.

Fig. 10 Comparison between the number of meteorite falls seen in the Meteoritical Society's database of approved falls (blue) and those that appear in NOAA weather radar imagery (red). MetSoc falls are shown only for the area scanned by the NOAA radar network. Most years are in parity between the two, with more falls overall seen by the radar network than appear in the database. This is because some falls occur into water or inhospitable terrain and are not recovered. On average, 1.8 meteorite falls are noted per year since 1997 with 1.1 recovered per year.

Weather radars are not limited to the U.S., of course. According to the World Meteorological Organization (a United Nations Specialized Agency), weather radars around the world comprise approximately \(3.6\times\) the total areal coverage of the NOAA NEXRAD network. In other words, if all the world's weather radars could be put to use finding meteorite falls, the number of falls observed on radar could conceivably increase by an additional ≈3.6×. Such an increase should provide freshly-fallen meteorites for study and constitutes significant public outreach potential.
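The size sorting and roughly 10–15 minute fall durations noted above follow from simple terminal-velocity scaling. The sketch below uses sea-level air density throughout, so it understates speeds at altitude; the density, drag coefficient, and stone density are assumed, round values for illustration only:

import math

def terminal_velocity(radius_m, rho_rock=3300.0, rho_air=1.2, cd=1.0, g=9.81):
    """Sea-level terminal velocity (m/s) of a sphere in dark flight.
    All parameters are assumed, order-of-magnitude values."""
    return math.sqrt(8.0 * radius_m * rho_rock * g / (3.0 * rho_air * cd))

for r_cm in (5.0, 1.0, 0.1):
    v = terminal_velocity(r_cm / 100.0)
    print(f"{2 * r_cm:4.1f} cm stone: ~{v:5.1f} m/s "
          f"(~{15000.0 / v / 60.0:4.1f} min from 15 km at that speed)")
# Larger stones fall several times faster than cm-scale fragments, so they land
# first and the radar-observed "curtain" of material size-sorts on the way down.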
There are significant obstacles to using weather radar to search for meteorite falls, including (1) some national radar networks choose to limit public access to data, (2) many countries do not archive sufficient data to be useful, and/or (3) some countries use proprietary data formats that are difficult for the public to utilize. These problems must be overcome within individual nations in order to realize the full potential of weather radar data to identify and locate meteorite falls.

Seismic Data

Another important asset for locating meteorite falls is seismic data. Seismometers monitor ground motion over most of the Earth's surface, and they are capable of detecting sonic booms from both high-altitude deceleration of meteoroids (Edwards et al. 2008) and relatively closer low-altitude passage of falling meteorites in "dark flight", or the portion of a meteorite fall after luminous flight. Seismometers are typically operated in networks, and those networks are usually regional or local suites of instruments operated by government entities, universities, or other research facilities. The Incorporated Research Institutions for Seismology (IRIS) database collates publicly-available seismometer data into a common format and makes it available online. Sonic booms from meteorite falls often appear as a pair of peaks in signal intensity graphs, or seismograms (Fig. 11). The main signature seen in most seismometer data comes from the point of maximum sonic boom generation during the fall, between ∼20–30 km altitude (Ceplecha et al. 1998). The first peak comes from coupling of the air-transmitted pulse with the ground underneath the sonic boom. The resulting ground-transmitted pulse moves at approximately \(6~\mbox{km}/\mbox{s}\) and outpaces the air-transmitted pulse, usually arriving at the seismometer first. The air-transmitted sonic boom pulse moves at approximately \(340~\mbox{m}/\mbox{s}\) but is a stronger signal, producing a stronger pulse shortly after the ground-transmitted pulse arrives. Additional sonic booms may arrive afterwards, from individual falling meteorites in dark flight. Such signals are usually less intense than the initial signature. The arrival time of each of these signals is strongly dependent on the distance between the source and the seismometer, permitting triangulation of the source location.

Fig. 11 Seismometer trace showing a sonic boom from a meteorite fall. Seismometers often record a pair of intensity "spikes" like the ones shown here, one from a ground-coupled signal and another transmitted directly through the air (see text). This signature is from the Cartersville meteorite fall (02 Mar 2009), which resulted in a single recovered stone. This illustrates how seismometers provide useful information for even small meteorite falls. Seismometer data provides precise timing of a bolide and can locate the event through triangulation.
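The lag between the ground-coupled and air-transmitted arrivals described above gives a direct range estimate to the point where the sonic boom struck the ground, much like timing thunder after a lightning flash. A minimal sketch, using the representative propagation speeds quoted in the text:

def source_distance_km(delta_t_s, v_ground_m_s=6000.0, v_air_m_s=340.0):
    """Distance to the sonic-boom ground-coupling point from the lag between
    the ground-transmitted and air-transmitted arrivals. Velocities are the
    representative values quoted in the text; real crustal velocities vary."""
    distance_m = delta_t_s / (1.0 / v_air_m_s - 1.0 / v_ground_m_s)
    return distance_m / 1000.0

# A 30 s lag between the two pulses implies a source roughly 11 km away:
print(round(source_distance_km(30.0), 1))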
Meteorite Recovery from Bodies of Water

Approximately 70% of the Earth's surface is covered by water, and that same percentage of meteorite falls disappear into the oceans and other bodies of water. Most of these have historically been lost, although there have been several meteorite falls wherein significant meteorite masses were recovered by people diving after an observed fall. These include Angra dos Reis (the namesake of the angrite clan of meteorites) (Prinz et al. 1977), Peña Blanca Spring where the recovered meteorite landed in a pond in front of a group of ranch hands (Lonsdale 1947), the main mass of Bjurböle which was recovered from icy water in 1899 (Martin and Mills 1976), and the main mass of Chelyabinsk which was recorded on video landing in a lake (Popova et al. 2013). These were fortuitous recoveries that all feature relatively shallow water and eyewitness(es) who saw the actual point of entry into the water. Recently, attempts have been made to expand this capability by recovering probable meteorite falls from deeper water using Remotely Operated Vehicles (ROVs) and other modern deep-sea technologies. The Aquarius Project (Bresky and Fries 2018) is a student project to recover meteorites from Lake Michigan. On 06 Feb 2017, a bright green fireball accompanied by sonic booms heralded a large meteorite fall into the lake. The fall was observed by four radars in the NEXRAD national weather radar network. The meteorites now lie on the lake floor at a depth of around 100 meters. A consortium of Lake Michigan-area institutions was formed, including the Field Museum of Chicago, the Shedd Aquarium, and the Adler Planetarium with assistance from NASA and NOAA. Teenage students from Chicago public schools have worked with the project and assist in the design and testing of original devices used to collect meteorites from the lake floor. Aquarius Project updates are archived online (https://openexplorer.nationalgeographic.com/expedition/rovmeteoritehunt). In July of 2018, the students deployed their equipment to attempt to recover meteorites from the lake bed, recovering small rocks and sediment. This material is currently under examination by Aquarius Project institutions, and additional trips to collect material are planned for 2019. Another attempt to retrieve meteorite material from an observed fall occurred in early July 2018, targeting a very large meteorite fall that occurred in the Pacific Ocean on 07 March 2018 about 20 km off the coast of Washington state. Calculations of total meteorite mass based on radar reflectivity indicate that this fall was the most massive fall seen to date by the NEXRAD system since its inception in the mid-1990s. More importantly, the distribution of meteorite mass is unlike any of the two dozen recovered meteorite falls recorded by NEXRAD. The Washington coast fall features significantly more large, surviving meteorites relative to smaller size fractions than previous events. This implies a stronger than typical mechanical toughness for this fall, which may in turn come from a slow infall velocity or a meteorite type that is inherently stronger than typical ordinary chondrites. The fall itself occurred during region-wide cloud cover and so was not observed directly, although at least two videos record bright flashes through the clouds. It is scientifically important to understand the reason why this fall produced an atypically large number of large meteorites, both as a planetary defense issue and to allow identification of future meteorite falls of the same type from radar data. For these reasons, a one-day effort was mounted by the Ocean Exploration Trust (OET) exploration vessel E/V Nautilus to map the fall site with multibeam sonar and attempt retrieval of meteorite material sufficient to identify the meteorite type. Sonar revealed that the ∼100 m-deep seafloor was flat and featureless.
The ROV pair Argus and Hercules performed a seafloor examination/sampling transect along a ∼1.6 km track, collecting one sample with a magnetic rake, five more with a water-jet sediment sampler, and one with a scoop. These samples were examined by optical microscopy, Raman spectroscopy, and electron beam analysis at NASA JSC. In June 2019, a follow-on team returned to the site aboard the research vessel R/V Falkor, operated by the Schmidt Ocean Institute (SOI). This second expedition focused specifically on retrieving \({\sim}1\) cm-sized meteorite fragments from the area of the meteorite strewn field where that size of meteorite should predominate. Over the course of three continuous days of ROV operations, an SOI-developed pair of sediment samplers sifted a large volume of ocean floor sediment but did not recover cm-sized meteorites. A suite of sediment samples was collected, washed, sieved, and searched in a manner identical to the Nautilus samples, in an iterative search for progressively smaller meteorite fragments. This effort was ultimately successful in the size fractions below ∼ 2 mm in diameter. At the time of this writing, over 100 small melt spherules and other fragments have been recovered from a combination of the Nautilus and Falkor samples. Work continues on this project to identify any meteorite type(s) among the spherules and whether they can be definitively linked to the fall event on 07 March 2018. The public outreach aspect of this effort was a dramatic success, with a large global audience watching in real time via the OET webpage as the Argus and Hercules scanned the seafloor and collected samples. Through the course of eight hours with the ROVs on the seafloor, the crew took questions and narrated the effort for a pan-global audience. Overall, both the Aquarius Project and the Washington coast fall recovery show that modern oceanographic surveying and sampling techniques have made water-borne recovery of meteorite falls a real possibility. These pioneering efforts are yielding a first trip up the learning curve towards optimizing the techniques needed, and have set up a powerful new means of engaging students and the public for meteorite research and recovery. At present, the majority of all meteorite falls are lost to science because they fall into the oceans, which cover \({\sim}70\%\) of the Earth. In the future, development of the meteorite recovery techniques explored in this effort could be used to identify and sample meteorite falls for any event that terminates in water. Costs and effort requirements would naturally limit that number, but seaborne meteorite fall recovery could be employed for extraordinary events such as the recent (18 Dec 2018) 173 ktonne-TNT event over the Bering Sea. Another example of a worthy recovery target is the infall of 2019 MO, which was observed while still in space and fell into the Caribbean Sea on 22 June 2019.

The Geostationary Lightning Mapper on GOES Satellites—A New Meteorite Fall Detector

NOAA recently launched the GOES-16 and -17 satellites, a new design of geostationary weather surveillance satellite. These satellites provide weather imagery services for most of the United States and parts of contiguous countries. For the first time, the GOES-16/17 satellites feature a lightning mapper instrument—the Geostationary Lightning Mapper (GLM). GLM "stares" at a large ground footprint area, collecting imagery at a rate of up to 500 frames/second.
The instrument features sufficient dynamic range to detect lightning during local daytime on the ground, and can discern altitude sufficiently to differentiate cloud-to-ground from cloud-to-cloud lightning. Inadvertently, NOAA built a superb meteorite fall detector with the GLM sensors. While visible and IR-wavelength weather satellite imagery can detect meteorite falls (e.g. Almahata Sitta as described in Borovička and Charvát 2009 and Chelyabinsk as seen by ESA's Meteosat-9 and reported in Miller et al. 2013), considerable luck is required because the image collection rate is very low. Previous GOES satellites, for example, only collected images once every few minutes and a meteor would have to happen exactly at the moment when the camera was operating to be recorded. GLM's rapid imaging and sensitivity to bright flashes renders it a very capable meteor detector, as described recently (Jenniskens et al. 2018). Jenniskens et al. (2018) describe ten separate bolides detected by GLM, including one meteorite fall (British Columbia, Canada, 05 Sep 2017); GLM has since detected another (Hamburg, MI, 16 Jan 2018). GLM provides location, fireball luminosity, timing, and light curve data for bolides. Presently, both of NOAA's new GOES satellites are in place (GOES-16 and GOES-17) and their GLM sensors are operating and data are available online (https://www.class.noaa.gov). The two satellites cover the continental United States from locations near the east and west coasts, which may allow stereo observation of some meteorite falls (Fig. 12). This feature may be used to rapidly identify bolides among the lightning flashes that GLM is intended to monitor. Lightning usually occurs below 15 km altitude (e.g., Mecikalski and Carey 2018) while meteors produce optically bright signatures between ∼20–90 km (Ceplecha et al. 1998). Stereo observation of bolides from the two satellites should yield rapid calculations of the altitude of various events, and finding bolides would be a matter of identifying bright flashes that occur at higher altitudes than that of lightning. The possibility exists that streaming GLM data can be used to identify the location, timing, and potential for producing meteorites from bolides in near real-time.

Fig. 12 NOAA graphic indicating the overlapping imaging footprints of GLM sensors on the two GOES satellites, one located off of the US east coast and another off of the west coast. Colors indicate the expected lightning frequency per year. Note that the imaging footprints of the two satellites overlap over most of the contiguous United States and Central America, where stereo imaging of bolides over land is theoretically possible.
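A possible screening step implied by the altitude argument above is to keep only stereo-solved flashes that are too high to be lightning. The sketch below assumes a simple event list with a stereo-derived altitude already attached; the field name and thresholds are illustrative assumptions, not the GLM data format or an operational pipeline:

def flag_bolide_candidates(events, lightning_ceiling_km=15.0, meteor_band_km=(20.0, 90.0)):
    """Keep only optical flashes whose stereo-derived altitude is too high to be
    lightning. `events` is an iterable of dicts with an 'alt_km' key; the key
    name and thresholds here are illustrative assumptions."""
    low, high = meteor_band_km
    return [e for e in events
            if e["alt_km"] > lightning_ceiling_km and low <= e["alt_km"] <= high]

# Example with hypothetical stereo solutions (altitudes in km):
flashes = [{"alt_km": 9.5}, {"alt_km": 42.0}, {"alt_km": 71.3}]
print(flag_bolide_candidates(flashes))  # only the 42.0 and 71.3 km events remain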
Putting It All Together—Dedicated Collection of Fresh Meteorite Falls

The march of technology has provided the astromaterials community with the new technologies and capabilities described here. Historically, even though meteorite falls are far outnumbered by meteorite finds, the light terrestrial alteration inherent in recent falls makes them highly scientifically valuable. Regular and rapid collection of freshly fallen meteorites is possible to an extent that a new possibility exists—dedicated recovery of freshly fallen meteorites by a dedicated recovery team. The nature of such a team can take many forms, including enlisting the assistance of the general public. This approach takes advantage of the spirit of public inclusivity, which created such endeavors as the American Meteor Society's eyewitness reporting program and is a powerful opportunity for outreach. A nationwide program could be stood up to mobilize recovery efforts as meteorite falls are detected, using eyewitness accounts and GLM to identify a new meteorite fall, seismic and radar data to calculate the strewn field, and astromaterials experts encouraging and guiding meteorite recovery at the fall site. Impromptu public lectures and media contact shortly after a meteorite fall have a history of enthusiastic reception from the public and tend to promote recovery of meteorites. The possibility exists that a dedicated collection(s) of freshly fallen meteorites could be founded and sustained in this way.

Importance of Contamination Knowledge Strategies for Sample Return Missions to Maximize Science Returns from Samples

The scientific value of the returned samples from all previous sample return missions has benefitted from having an archive of contamination knowledge (CK) materials that could include (1) spacecraft hardware, spares, and flight-like coupons, (2) materials used in the fabrication of spacecraft hardware or construction of a curation lab, and (3) witness materials deployed during ATLO or during construction of the curation lab. The information gained from studying the collected reference materials and witness plates is defined as the CK of a sample collection, and the CK is crucial for verifying and validating scientific results. As part of a sample return mission, these CK samples are archived along with the returned samples and made available for allocation and analysis by the scientific stakeholders of a sample collection. These flown, flight-like, and non-flight reference materials and witness plates provide the scientific community investigating astromaterials with the fundamental ability to reconstruct the contamination history of a sample collection. Furthermore, they serve as a baseline from which to compare tantalizing results attained from the analysis of astromaterials. CK collections are a requirement for sample return missions because contamination control efforts cannot anticipate all possible contamination vectors that can occur during the dynamic activity that is a sample return mission. In fact, all sample return missions have developed non-nominal contamination to some degree; these events are highlighted in this section as important lessons learned. In every case, the CK samples have helped to mitigate these unplanned contamination events, improving and, in some cases, enabling scientific returns on the returned astromaterials. The NASA Curation Office has allocated hundreds of samples to scientists in support of gaining contamination knowledge for their respective collections (mainly Apollo, Genesis, and Stardust, and a newly constructed OSIRIS-REx collection). We outline the CK methodology and lessons learned for many completed and ongoing sample return missions and provide insights into best practices for collecting CK when biological contamination from indigenous or exogenous extant life is a possibility. For Apollo, contamination knowledge has a broader definition and overlaps with contamination control, compared with recent sample return missions, principally because Apollo was a series of missions in which in-lab monitoring of returned samples and hardware resulted in improvements for subsequent missions.
Here we cite a 1965 formal report wherein three scientific discipline groups made early recommendations concerning sample collecting materials and procedures relevant to contamination. This is followed by examples of organic and inorganic contamination of samples being measured, by both curatorial staff and by researchers using samples, which resulted in improvements both to lunar surface sampling procedures and hardware and curation handling practices. Thus, contamination knowledge is an ongoing process for returned sample collections. Early advice on contamination issues was captured through 3 of the 7 discipline working groups convened in 1965 for the purpose of advising NASA on science for a 10-year period (NASA 1965). Geochemistry, Bioscience, and Geology Group reports were most concerned with contamination knowledge. For example, top level equipment requirements included: (1) "Sample containers should keep samples sterile and chemically clean. Stainless steel is acceptable. More studies should be completed relative to the use of Teflon in the lunar environment." It was the Geochemistry Group that emphasized the value of sample analysis and foresaw many of the curation guidelines that are still followed today. The report calls for CK studies to determine the amounts and effects of outgassing of the astronaut suits, the escape of atmosphere from the Lunar Excursion Module (LEM), analyses of possible contaminants in LEM fuel and effects of those contaminants on samples. The Geochemistry Group specified acceptable materials to touch the samples (materials that would not interfere with scientific measurements). In general, this meant use of materials of known and simple chemistry, easily distinguishable from lunar material. Apollo Organic Contamination Knowledge Apollo organic contamination monitoring conducted in the Lunar Receiving Laboratory (LRL) offers an opportunity to compare the monitoring effort on sample handling equipment to the actual detection of organic compounds in lunar samples by investigators. Apollo planners conducted extensive organic contamination monitoring of the containers, tools, and sample handling facilities. Simoneit et al. (1973) summarized the potential sources of organic contamination: (1) surface contamination of the lunar-bound rock box and its contents; (2) surface contamination on the Apollo lunar hand tools used to obtain samples on the lunar surface; (3) exhaust products from the lunar descent engine and reaction control system engines (both using unsymmetrical dimethyl hydrazine and nitrogen tetraoxide); (4) lunar module outgassing; (5) astronaut spacesuit leakage; (6) particulate material abraded from spacesuits or other sources during EVA; (7) venting of lunar module fuel and oxidizer tanks, cabin, and waste systems; (8) venting of spacesuit life support back packs; (9) exposure to LRL vacuum or nitrogen processing chambers; (10) surface contamination of sample processing tools and containers; (11) surface contamination of containers sent to PIs. Items 1, 2, 3, 9, 10, and 11 were considered most serious. Simulations, modeling, and engineering data were used to estimate the contamination contributed by flight items 3, 4, 5, 7, and 8 (Aronowitz et al. 1966a, 1966b). Virtually all rocket exhaust products were low molecular weight and rapidly diffused over large areas. Because of their low concentration, this was not predicted to be a major contaminant. 
The varied organic products included acetylene, HCN, ethylene, formaldehyde, methyl amines, and others. For laboratory handling operations, measurements of contamination via "monitors" or witness plates were used. Clean coupons of a woven aluminum alloy called York mesh (2024 aluminum alloy) or aluminum foil were processed along with the lunar-bound tools or placed inside the rock boxes bound for the Moon. Upon return, these coupons were analyzed by solvent extraction and subsequent gas chromatography and mass spectrometry. Aliquots of clean Ottawa sand, exposed inside sample processing cabinets, were analyzed by direct pyrolysis and mass spectrometry. The solvent rinsings from tool, container, and cabinet cleaning were also analyzed. Some of the most frequently encountered contaminants were hydrocarbons from pump oils and fatty acids. Detected in the vacuum chamber, some of the fatty acids were thought to be from the polishing compound used on the rock boxes. Dioctylphthalate, a common plasticizer additive for polyethylene, was ubiquitous in cabinets and bags. Simoneit and Flory (1971), Flory and Simoneit (1972), and Simoneit et al. (1973) provide an extensive list. York mesh and aluminum foil monitored organic contamination levels of about \(1~\upmu \mbox{g}/\mbox{cm}^{2}\) inside the rock boxes. Bakeout of the Apollo 11 rock box actually added organic contamination, but as a result of the monitoring, cleaning improvements were made which produced flight hardware for Apollo 12 through Apollo 15 with only \(10\mbox{--}100~\mbox{ng}/\mbox{cm}^{2}\) contamination (Simoneit et al. 1973). With exceptional care, curatorial cleaning procedures during Apollo could produce \(1\mbox{--}10~\mbox{ng}/\mbox{cm}^{2}\) contamination ranges for polished, planar surfaces. Flory and Simoneit (1972) concluded that organic contamination to lunar samples during Apollo 11 was in the \(1~\upmu \mbox{g}/\mbox{g}\) (ppm) range, but improved to \(0.1~\upmu \mbox{g}/\mbox{g}\) (ppm) for Apollo 12. The actual analyses of lunar samples were consistent with the estimated contamination levels. Burlingame et al. (1970) concluded, based on analyses of their allocated samples, existence of systematic organic contamination of about \(5~\upmu \mbox{g}/\mbox{g}\) for Apollo 11 samples, except for those samples processed in the organic reserve cabinet. Reports of organic compounds in lunar fines from other investigators were mixed, ranging from no detection at the \(\mbox{ng}/\mbox{g}\) levels (Abell et al. 1970; Lipsky et al. 1970; Meinschein et al. 1970) to ng level detection of various organics (Henderson et al. 1971; Murphy et al. 1970; Preti et al. 1971) and 0.5 ppm via pyrolysis (Oro et al. 1970). Porphyrin-like pigments were detected at the trace ng to pg level by Kvenvolden et al. (1970) and Hodgson and co-workers (1970, 1971), but not by Rho et al. (1970, 1971, 1972). Porphyrins as a possible rocket exhaust contaminant were discussed. Amino acids were detected at the \(50~\mbox{ng}/\mbox{g}\) level by Hare et al. (1970) and Gehrke et al. (1970) after aqueous or other processing to the sub-nanogram level (Murphy et al. 1970) and below detection by Gehrke et al. (1972). No viable organisms were detected in Apollo 11 and 12 samples (Oyama et al. 1970, 1971; Taylor et al. 1971). (For estimates of indigenous lunar carbon in soils and breccias, Vaniman et al. 1991 present values of about 100 ppm selected from analyses less likely, but still possibly, containing terrestrial contamination.) 
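The areal cleanliness levels quoted above (in \(\mbox{ng}/\mbox{cm}^{2}\)) and the bulk contamination estimates (in \(\upmu \mbox{g}/\mbox{g}\), i.e., ppm) are linked by the amount of container or tool surface that a given mass of sample actually contacts. The following minimal sketch illustrates the conversion; the contact area and sample mass are hypothetical values chosen for illustration, not actual Apollo hardware figures.

```python
def bulk_contamination_ppm(areal_ng_per_cm2, contact_area_cm2, sample_mass_g):
    """Upper-bound bulk contamination (ug/g, i.e., ppm) if all of the surface
    contamination on the contacted area were transferred to the sample."""
    total_ng = areal_ng_per_cm2 * contact_area_cm2  # total contaminant mass in ng
    return total_ng / sample_mass_g / 1000.0        # ng/g converted to ug/g

# Hypothetical example: a 100 g soil sample contacting 500 cm^2 of container
# surface cleaned to 100 ng/cm^2 could pick up at most:
print(bulk_contamination_ppm(100, 500, 100))  # 0.5 ug/g (ppm), worst case
# The same geometry with surfaces cleaned to 10 ng/cm^2 gives 0.05 ug/g,
# showing how order-of-magnitude improvements in surface cleanliness map
# directly onto the bulk contamination estimates quoted in the text.
```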
The presence of diamond and polishing compounds on the surfaces of Apollo thin sections has also been documented by Raman spectroscopy, and these contaminants are likely a common result of the thin-section making process (e.g., Steele et al. 2010). Keeping samples organically clean in the LRL proved difficult. Thus, a small facility to analyze and repackage a lunar sample collected in a special container was constructed at the University of California, Berkeley (Burlingame et al. 1971). The organically clean area consisted of two gloveboxes in tandem preceded by a vacuum entry chamber. The atmospheric nitrogen gas was scrubbed to remove oxygen, water, and organics. The glovebox was equipped with a liquid nitrogen cold finger to remove water generated by the glove operator. In summary, Apollo organic contaminants were greatly reduced by institution of (1) restrictions on materials allowed in contact with, or in proximity to, samples; (2) isolation of samples in controlled environments; (3) procedures to clean all surfaces in proximity to or contact with samples; and (4) controls on fabrication, processing, and handling of lunar sample hardware.

Inorganic Contamination Knowledge

As is the case with organic contaminants, feedback from investigators on inorganic contaminants was essential to improvement. Some materials were selected for use as lubricants or seals because those materials were not predicted to interfere with scientific measurements. Thus, it made engineering sense to use an alloy of 90% indium and 10% silver for the metal knife-edge seal on the Apollo rock boxes. It made engineering sense to use molybdenum disulfide as thread lubricant in curation sample containers. However, these materials were not of sufficient purity for high precision scientific research. Only a few unopened lunar samples remain in indium-silver sealed containers. The use of MoS2 was discontinued and all containers removed from service. Molybdenum disulfide was replaced with a Teflon-based thread lubricant, Xylan. However, Xylan was not pure Teflon and contained a binding agent, so Xylan was also removed. Attention to detail is required and compromises must sometimes be made, especially for fabrication processes; this is illustrated by the engineering selection of surface treatments for the Apollo drive tube cores and drill cores. The materials used to fabricate the large-diameter Apollo drive tubes (anodized aluminum) and the drill core tubes (canadized titanium, a proprietary passivation treatment) were selected for engineering reasons. A list of materials would not normally raise contamination flags unless the reviewer understood the details of the process (and in many cases an engineering requirement might be the best solution anyway). Lead content in amounts compromising science results was found in the anodized aluminum and in the canadized titanium by investigators making extremely low level measurements. The solution in the drive tube case was to physically remove the outer 1 mm of regolith during core dissection, before sample allocation. One excellent example of institutionalized CK in Apollo sample curation was the construction, completed in 1979, of Building 31N at JSC to house the Apollo lunar collection. The entire design and construction was reviewed in real time by a facility subcommittee of the Lunar and Planetary Sample Team.
The committee, comprised of planetary petrologists and geochemists—users of the lunar samples for research—requested chemical analysis for most of the material selections and reviewed the data in detail. Examples are chemistry of paint, floor coverings, adhesives, electrical cords, etc. Ongoing Contamination Knowledge Efforts Collection of CK for the Apollo curation lab continued after construction of the lab and continues to this day. This process is often a joint effort between curation personnel and members of the scientific community that develop in response to interesting, novel, or unexpected results stemming from scientific analysis of the samples. We provide here two notable examples of successful collaboration between curation personnel and the scientific community to better understand the contamination environment within curation facilities at NASA JSC. As mentioned above, Xylan was used on screw threads in the Apollo processing cabinets, tools, and containers to prevent galling. Over time, however, it became evident that the Xylan did not adhere well to the screws because it was flaking off into the processing cabinets and hence served as a potential source of contamination to the Apollo and Antarctic Meteorite samples (Xylan has 45 wt.% C and 4 wt.% N, and it is not removed by step combustion until samples are heated above \(600\ ^{\circ}\mbox{C}\); Wright et al. 1992). In response to the characterization of Xylan as a potential contaminant that could affect the analysis of H, C, N, and O in astromaterials samples, Xylan was banned from use in any of the curatorial sample-handling hardware at NASA JSC. More recently, a concern was raised that the stainless steel tools used during the processing of Apollo samples may contribute highly siderophile elements (HSE) to the processed samples (Papanastassiou et al. 2015; Tikoo et al. 2014), given the low abundances of HSE that occur naturally in lunar samples (Walker et al. 2004). Consequently, the stainless steel tools and sample containers used in the Apollo curation labs were subsampled and analyzed to determine a contamination threshold that would affect HSE analyses of Apollo samples (Day et al. 2018b). Day et al. (2018b) reported that the potential for HSE contamination from the stainless steel containers and tools was low. Genesis was the first sample return mission since the Apollo program and ended a 32 year hiatus for sample return missions. The Genesis mission launched on August 8, 2001 and traveled to the Earth-Sun Lagrange 1 (L1) point. The Genesis spacecraft was held in halo orbit at L1 outside Earth's magnetosphere for 2.3 years, and the mission collected solar wind plasma that was implanted into several arrays of high purity materials and subsequently returned to Earth for analysis. Unfortunately, due to an inversion design mistake of the sample return capsule's (SRC) drogue parachute gravity switch, the SRC experienced a terminal velocity hard landing at the Utah Test and Training Range (UTTR) on September 8, 2004. The impact into the lacustrine sediment breached the science canister and littered thousands of broken high purity collectors throughout the science canister and SRC. Despite this set-back, curation contingency plans were invoked and after much effort decontaminating samples at UTTR and JSC's curation laboratory, all primary science mission goals were achieved and Genesis is marked as a successful mission. 
The ability of Genesis to rise from the ashes was in-part due to the fact that at mission inception, the mission had a well-orchestrated CK plan that led to a remarkable CK collection. Contamination knowledge for Genesis solar wind sample return is captured in archived reference materials and associated documentation of six types, from pre-launch to post landing events: (a) flight-like collector substrates; (b) science canister duplicate components; (c) assembly environment material coupons and process witness plates; (d) post-landing UTTR soil samples; (e) post-landing science canister and sample return capsule hardware; (f) recovery processing tools and containers. Thus, Genesis is an example that contamination knowledge is an ongoing effort and that post-recovery contamination knowledge is important. Flight-Like Collector Substrates Genesis had ambitious goals for determining the elemental and isotopic composition of the solar nebula. Desired elemental accuracy was \(2 \sigma \) limits of ±10% of the number of \(\mbox{atoms}/\mbox{cm}^{2}\) on the collector substrates. Desired isotopic precision for many elements was \(\pm 1\%\) compared to terrestrial standards. Given that the estimated 2-year fluence (atoms per \(\mbox{cm}^{2}\)) for the more abundant elements is in the \(10^{8}\) to \(10^{12}\) range, requirements for bulk purity and surface cleanliness of collectors were very stringent (Burnett et al. 2003). Much effort was expended by the science team in verifying purity and surface cleanliness of candidate batches of collector substrates. Fifteen types of ultra-pure materials were flown as collector substrates. Three hundred passive collectors, mounted in 5 arrays, individually consisted of single crystal silicon (FZ and CZ), sapphire, germanium, and sapphire coated with aluminum, silicon, diamond-like-carbon, or gold. Targets in a concentrator for O, N, and C were comprised of silicon carbide, isotopically enriched polycrystalline diamond and diamond-like-carbon coated on silicon. Additionally, special collectors of metallic glass, gold foil, polished aluminum alloy and molybdenum coated foils were deployed (Jurewicz et al. 2003). The diversity of the collector materials on these arrays not only provided multiple analytical background choices for optimum specific analyses, but also multiple material choices for the various surface cleaning processes that the hard landing subsequently required. The samples archived for CK served as reference pieces for purity and surface cleanliness or may have been implanted to make calibration pieces during analysis. These materials were also widely used to test surface cleaning protocols before cleaning Genesis-flown samples. It was evident in some cases of solvent application that reactivity of Genesis-flown pieces was different than non-flight reference pieces, presumably due to solar irradiation. To date, allocations of 600 Genesis-flown samples were accompanied by allocations of 300 collector reference materials. The base inventory supporting these allocations is 5000 Genesis-flown collector samples and 2000 non-flown collector reference substrates. Science Canister Duplicate Components The canister containing the payload of samples for return was cleaned using ultrapure water (\(> 18\) M\(\Omega \)-cm resistivity) inside of an ISO Class 4 cleanroom. Re-assembly, including installation of ultraclean collectors, was performed by staff completely enclosed in powered HEPA filtered Dryden suits. 
More than 200 duplicate canister components, many cleaned exactly like flight components, are archived for comparison of contaminants. Some cleaning fluids and manufacturing fluids (e.g., electric discharge machining oil) were also archived.

Assembly Environment Material Coupons and Process Witness Plates

More than 100 environment material coupons of the assembly room are archived. Examples include samples of wall construction material, flooring materials, adhesives, paint, fire retardant, and subsamples of air handler intake filters. Process witness plates for particle chemistry and airborne molecular contaminants were periodically set out, but only the data were saved (Allton et al. 2016).

Post-Landing UTTR Soil Samples

Just prior to Genesis capsule return, 8 UTTR soil samples from 5 sites were collected by the helicopter recovery crews as they practiced. These samples are archived and have been allocated for contamination studies on solar wind collectors. After the capsule hard landing and recovery of the spacecraft components, eighteen 5-gallon buckets of UTTR soil and spacecraft materials from the impact site were collected and archived. Subsequently, collector fragments were high-graded and removed from buckets for permanent archive.

Post-Landing Science Canister and Sample Return Capsule Hardware

Many of the collectors were broken, but most remained confined in the science canister. The field crew was able to gather the entire science canister, containing most of the collectors, and the major sample return capsule components and transport them to a nearby cleanroom within 8 hours of the crash. All of this material is archived for CK, except for pyrotechnic devices and batteries, which were deaccessioned after 5 years (Stansbery and Team 2005). Ellipsometry was used to measure molecular contamination on some of the collector plates (McNamara and Stansbery 2005; Stansbery and McNamara 2005).

Recovery Processing Tools and Containers

Select recovery and UTTR processing tools and containers are archived for CK. Examples include polystyrene containers, fine brushes, and cleanroom post-it paper used for securing small fragments. The post-it adhesive remains under investigation, and these CK samples have been helpful. The lesson here is awareness that any material added to the handling stream at the last minute should have batch-specific material archived.

Long Duration Exposure Facility

The Long Duration Exposure Facility (LDEF) was a school-bus-sized cylindrical facility designed to provide long-term data on the low-Earth orbit (∼300 miles altitude) environment and its effects on space systems, materials, and operations (Kinard and O'Neal 1991). Originally intended to be the cargo for the first space shuttle mission in 1981, it was finally placed in low-Earth orbit by Space Shuttle Challenger in April 1984. Fifty-seven science and technology experiments from nine countries flew on the satellite. The original plan called for the LDEF to be retrieved in March 1985, but after the destruction of Challenger and the ensuing suspension of Shuttle flights, it was not returned to Earth until January 1990, by the Space Shuttle Columbia. LDEF was an early test bed for ideas on micrometeoroid capture. It was carefully placed in orbit and gravity stabilized such that its orientation relative to Earth remained constant.
Thus, the trailing side of the satellite would capture micrometeoroids but no space debris and would see no secondary impacts from satellite surfaces, completely eliminating the most significant sources of contamination to captured astromaterials. No subsequent spacecraft has repeated this feat. Unfortunately, shortly after recovery into the still-open Columbia cargo bay, the shuttle began to rotate end over end, destroying many experiments and causing severe sample contamination. NASA Mission Control chose not to wake the sleeping astronauts and halt the rotations, despite pleas from the LDEF science team. Thus, recovery of LDEF by the Space Shuttle was non-nominal, resulting in contamination and thereby degrading many mission goals. An important lesson is to carefully consider the possible deleterious consequences of using an astronaut-crewed platform for sample recovery operations. Despite these problems, the LDEF mission was successful in guiding scientists to the design of the capture media for the subsequent Stardust Mission. However, one LDEF lesson was not properly learned. Outgassing of silicone-based adhesives and lubricants coated most of the exterior of the satellite with a Ca- and Si-containing coating, which was baked to a brown color by solar radiation (Whitaker and Dooling 1995). This "brown stain" was to reappear with a vengeance in the Genesis mission and, to a lesser extent, the Stardust mission.

Stardust Mission

For the Stardust Mission, contamination control procedures were integral to the flow of spacecraft manufacture, assembly, testing, flight, and recovery, and the science team took a very active role in planning and implementing contamination control measures, monitoring contamination through numerous witness materials (Sandford et al. 2010; Zolensky and Girard 1997). However, despite these precautions, the captured comet Wild 2 coma dust grains experienced significant contamination from several sources, including the presence of indigenous organic and inorganic material in the silica aerogel capture media, spacecraft outgassing, and an unfortunate sample return capsule (SRC) recovery procedure.

Preflight Contamination

The flight aerogel used in Stardust was marveled at, but cleaning it was not a sufficiently high priority for the mission. There were alternative sources for the silica aerogel that were known to be cleaner than the material manufactured by the Jet Propulsion Laboratory in Pasadena, CA. In addition, recommended work on improving aerogel cleanliness was not adequately performed. Synthesis of the aerogel employed a tetraethyl orthosilicate precursor in a solvent that included ethanol, methanol, acetonitrile, and/or other organic liquids, and Synlube 1000 was used as a mold release agent (Sandford et al. 2010). In the end, the severely contaminated aerogel was baked to reduce the volatile organic content, but several weight percent of carbon in the form of organics remained strongly bonded to the aerogel. To be fair, most researchers believed at the time that organics could not be adequately captured by aerogel at the mission capture velocity of \(6.2~\mbox{km}/\mbox{s}\). The comet Wild 2 coma grains entered the Stardust aerogel at \({\sim}6.1~\mbox{km}/\mbox{s}\). Such collisions are sufficiently energetic that they could alter any organic compounds originally present in both the impacting particles and the aerogel collector material (Sandford et al. 2010; Sandford and Brownlee 2007; Spencer et al. 2009; Spencer and Zare 2007).
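To put "sufficiently energetic" in perspective, the following back-of-envelope calculation (ours, not taken from the cited studies) compares the specific kinetic energy of a grain captured at \({\sim}6.1~\mbox{km}/\mbox{s}\) with typical covalent bond energies of a few eV.

```python
# Specific kinetic energy of a Wild 2 grain captured at ~6.1 km/s, compared
# with typical organic bond energies (a few eV per bond).
v = 6.1e3                       # capture speed in m/s
e_per_kg = 0.5 * v ** 2         # specific kinetic energy, ~1.9e7 J/kg
amu = 1.66054e-27               # kg per atomic mass unit
eV = 1.602e-19                  # joules per electron volt
e_per_amu = e_per_kg * amu / eV
print(f"{e_per_kg:.2e} J/kg, {e_per_amu:.2f} eV per atomic mass unit")
# ~0.19 eV per amu, i.e., roughly 2 eV per carbon atom if fully thermalized,
# comparable to C-C and C-H bond energies, consistent with the expectation
# that capture could alter organic compounds.
```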
Thus, it came as a surprise when relatively intact cometary organics were recovered from a few captured coma grains. Had we known this was possible, we would have undoubtedly made greater efforts to fly organically-clean aerogel. It is now clear that some fraction of the impacting comet Wild 2 coma particles survived with little or no alteration, while other portions of the samples were severely heated (Brownlee et al. 2006; Elsila et al. 2009; Sandford et al. 2006, 2010; Zolensky et al. 2006b). Conversion of carbon original to the aerogel, and in the impacting cometary particles, into new forms likely occurred in a similarly variable manner. Thus, before one can assign organics seen in Stardust samples to a cometary origin, it is necessary to consider the possibility that they are either altered cometary materials or materials formed from carbon original to the aerogel. IR absorption difference-maps of individual tracks suggest that impacting Stardust particles do not convert the majority of the original carbon in the aerogel tiles into new chemical forms that remain in the aerogel. However, laser ablation, laser ionization mass spectrometry (L2MS) studies demonstrate that at least a small amount of the original aliphatic carbon in the aerogel is converted into aromatic materials in the form of lightweight polycyclic aromatic hydrocarbons (PAHs). Thus, while most of the original aerogel carbon appears to be unaffected by the impact process, the issue of the possible presence of impact converted organics must be considered on a case-by-case basis whenever specific organics are being sought in Stardust aerogel samples. Additional contaminants found their way into the aerogel during flight. These include materials outgassed from nearby spacecraft components, propellant byproducts, and secondary materials from dust impacts on other parts of the spacecraft, particularly the Whipple shields and solar panels. Cometary particles impacted on the aerogel tiles in the collector tray perpendicular to the forward direction. Thus, any tracks seen with oblique orientations must be either due to strikes by random interplanetary dust particles or to secondary materials from impacts on other parts of the spacecraft. Oblique tracks have, in fact, been found in the flight aerogel tiles, most of which fall in non-random spatial distributions on the cometary collector (Westphal et al. 2008). The materials in these tracks could include components from both the original impactor and from the spacecraft. Many of these tracks originated from a grazing impact on the central Whipple shield of the spacecraft as the origin of clustered low-angle oblique tracks. In these tracks, the most likely contaminant would be the Mylar thermal protection material that wrapped the edge of the Whipple shields. A second population of high-angle oblique tracks unambiguously originate from a non-cometary impact on the spacecraft bus just forward of the collector. The exact location of this strike on the spacecraft bus is not known, but possible contaminants include materials used for the sides of the spacecraft bus—highly ordered graphite embedded in an epoxy matrix. In summary, it is clear that the Stardust cometary collector tray was struck by a limited number of secondary particles resulting from impacts on other parts of the spacecraft. Materials in these oblique tracks should be viewed with considerable caution before interpreting their significance as possible cometary materials. 
Fortunately, the most likely contaminants, Mylar wrap on the Whipple shields and carbon composites from the body of the spacecraft, have distinctive C X-ray absorption near-edge spectra (XANES) that make them relatively easy to recognize. At present, there is no evidence that this process has introduced contamination outside the domain of the oblique tracks themselves. Contamination During Flight It is possible that contaminants could have been introduced to the Stardust sampling trays directly from the spacecraft during its nearly 7-year flight. This is of special concern for the aerogel collectors since aerogel, with its very large surface area to mass ratio, is an excellent 'sponge' for adsorbing contaminants. To assess the extent of on-flight contamination, several 'witness coupons' were enclosed in the Stardust SRC (Tsou et al. 2003). These coupons included 1 cm diameter disks of aluminum and sapphire, and one 'interstellar' aerogel tile (2 cm wide × 4 cm long × 1 cm deep). These coupons were located on the arm that deployed the aerogel collector array and were placed low enough that they resided in the shadow of the main Whipple shield. Thus, these coupons were exposed to the same flight environment as the aerogel collectors for the entire mission, but were never directly exposed to the cometary influx. Examination of the aerogel witness coupon showed no visible signs of adhering materials or stains. Although we know that silicone-based adhesives and lubricants outgassed during the mission (famously coating the cold camera optic surface), none of the exposed surfaces in the Stardust sample return canister showed any signs of the 'brown stain' seen on many of the surfaces of the LDEF (Fred Hörz, personal communication, 1990) and Genesis hardware (Burnett 2013). Analyses of the aerogel witness coupon (i.e., aerogel that was exposed to all environmental conditions as the collector aerogel except the comet) shows similarities to collector aerogel, although the levels of contaminants, when detected, are generally lower and some components (for example, the carrier of the 1700 cm-1 IR C=O feature) are dramatically less abundant. This suggests that contamination associated with the operational environment of the spacecraft during flight was not a major source of sample contamination. Contamination from SRC Recovery and Curation Operations The accumulation of local soils (mud) from the recovery site was a major issue of concern for the Stardust Science Team prior to recovery of the SRC. Fortunately, integrity of the SRC during landing, the relative inability of the mud at the recovery site to stick to the SRC, and the fact that none of the bouncing impacts occurred at the locations of the two backshell vents greatly decreased the magnitude of this concern. Thus far, there is no indication that soils from the recovery site infiltrated the sample canister or in any way contaminated the returned samples. However, an unfortunate decision was made to place the recovered SRC into a polypropylene bag during the brief helicopter transport from the landing site to a hangar for preliminary deintegration operations. Subsequent detailed analysis revealed that the aerogel soaked up outgassed organics from this bag, providing an additional source of organic contamination to the comet coma grains (Hope Ishii, personal communication, 2007). 
One important lesson learned from the stardust mission is that recovery operations for the SRC significantly suffered from the lack of a hermetic seal for the samples, probably in many additional ways that will only become apparent in the future. Mission engineers should be pushed to provide truly hermetic seals for future returned samples. Contamination from curatorial operations has been carefully mitigated, although there is some evidence that the Stardust Interstellar aerogel has collected some contamination during handling in various labs during analyses (Bechtel et al. 2011). Hayabusa Mission The curation and contamination knowledge of the samples recovered from asteroid Itokawa by the Hayabusa mission (2004–2010) are very well described by Yada et al. (2014). In order to limit contamination to the recovered samples, the constituents of the Hayabusa sampler were limited to A6061 aluminum alloy coated with pure aluminum, stainless steel (304), Viton, aluminum oxide glass, and Teflon. Before launch, every part of the sample container was cleaned in 2-propanol using an ultrasonic cleaner, installed in an ISO Class 7 cleanroom. A contamination coupon made of aluminum oxide glass was installed inside the sample catcher to monitor contamination during the mission. In order to minimize ground contamination following Earth return, the sample container was designed to seal the samples, though the mission budget did not permit a hermetic seal. However, terrestrial atmosphere permeating through the double O-rings seal was estimated to be <1 Pa. During atmospheric entry, the sample container was designed to experience less than \(80\ ^{\circ}\mbox{C}\) by using carbon fiber-reinforced plastic (CFRP) capsule ablators. The actual temperature from recovery on the Australian desert until introduction to the JAXA cleanroom was monitored with a temperature logger attached to a transportation box for the reentry capsule. The data of the logger showed that the sample container had been kept under \(30\ ^{\circ}\mbox{C}\). The JAXA cleanrooms are maintained under \(26\ ^{\circ}\mbox{C}\). The magnetic condition of Hayabusa-returned samples should have been disturbed during a return trip to Earth due to the Hayabusa ion engine operation. Additional electric disturbance and shock from atmospheric entry, landing, and transportation, which might affect the samples, are still poorly understood. JAXA Hayabusa Lab The JAXA Hayabusa curation laboratory (hereafter "curation lab") consists of four cleanrooms of different clean levels: a planetary sample handling room (ISO Class 5 to 6), an electron microscope room (ISO Class 6), a sample preparation room (ISO Class 6), and a manufacturing and cleaning room (ISO Class 7). These have vertical air flow from the ceiling into a raised, perforated stainless steel floor. All filters used in fan filter units are polytetrafluoroethylene (PTFE), and an additional chemical filter absorbs acid gases such as halogen, sulfate, nitrate, elemental boron, and borate. Design of this lab made effective use of decades of knowledge from NASA's curation labs, and then significantly improved upon them. Four special-use rooms exhaust their air independently to outside of the cleanrooms to protect the other cleanrooms from chemical and particle contamination. Additionally, there is a basement for equipment that cannot be set in the cleanrooms, such as roughing pumps for vacuum systems, a compressed air supply system, an ultra-pure water supply system, and nitrogen purifiers. 
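For reference, the ISO cleanliness classes quoted for these rooms correspond, under the ISO 14644-1 standard, to maximum airborne particle concentrations given by a simple formula; the short sketch below evaluates it for the classes used in the JAXA curation lab (the formula is the published standard relationship, not JAXA-specific data).

```python
# ISO 14644-1 class limit: maximum number of airborne particles per cubic
# meter that are >= D micrometers in size, for a cleanroom of class N:
#     C_N(D) = 10**N * (0.1 / D)**2.08
def iso_class_limit(n_class, d_um):
    return 10 ** n_class * (0.1 / d_um) ** 2.08

for n_class in (5, 6, 7):  # the range of classes used in the curation lab
    limit = iso_class_limit(n_class, 0.5)
    print(f"ISO {n_class}: about {limit:,.0f} particles/m^3 at >= 0.5 um")
# ISO 5 ~3,520, ISO 6 ~35,200, ISO 7 ~352,000 particles/m^3 at >= 0.5 um:
# each class step permits an order of magnitude more particles.
```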
The curation lab has two clean chambers for initial sample handling, Nos. 1 and 2. These are constructed mainly of 304 stainless steel, and their inside walls were electrochemically polished. They were baked in vacuum to at least \(120\ ^{\circ}\mbox{C}\) before and after installation to reduce residual contamination. Both chambers are equipped with turbomolecular pumps (TMPs) and dry scroll pumps. Clean nitrogen is supplied by a cyclic-type nitrogen purifier and a flow-type nitrogen purifier. The former is directly connected to each of the chambers to exclude H2O, O2, and hydrocarbons from the circulating nitrogen. The chambers operate at positive pressure to exclude ambient air. Chamber 1, where the Hayabusa capsule was initially opened, can be operated under conditions of ultrahigh vacuum or purified nitrogen. Residual gas expanding into the chamber from the container was collected in bottles made of stainless steel. The lower part of the sample container was maintained in cabinet No. 1 under a vacuum. Chamber No. 1 was also equipped with Viton gloves accessed through gate valves, permitting the sample container and catcher to be manipulated with special tools. After opening, the sample catcher was sent to chamber 2 (in nitrogen) for the extraction of captured Itokawa grains. Both chambers 1 and 2 contain ultraviolet (UV) neutralization lamps to compensate for the electrostatic charge that is expected to build up in the pure nitrogen environment. Also, an alpha-ray neutralizer containing a grain of 210Po is employed for the same purpose—use of this neutralizer was pioneered in NASA's Stardust and Cosmic Dust Labs. The clean chambers were constructed using stainless steel (304 and 316), aluminum and A6061 aluminum alloy, quartz glass, PTFE, and Viton. Gold and copper are used as materials in Itokawa grain sample holders, borosilicate glass for containers of less important items, and polyetheretherketone (PEEK) for electric connectors.

Measurement of Gas in the Capture Cell upon Initial Opening

Residual gas sampling bottles connected to clean chamber 1 were prepared to capture gas released from the container at the time of its initial opening (on June 23, 2010, 13 days after capsule recovery). As it was expected that the container could contain some terrestrial atmosphere, O2 and 40Ar could be used to identify any leaks into the "sealed" sample container. Noble gases sampled in the gas bottles were analyzed at the University of Tokyo (Okazaki et al. 2011). Elemental ratios of the noble gases collected from the sample container were essentially identical to those of the terrestrial atmosphere. The inner pressure of the sample container was much higher than expected. Possible causes are a small leak of air through the double Viton O-ring seal, larger-than-expected permeability of the Viton O-rings, or a temporary leak of air accidentally occurring during deintegration of the sample return capsule.

Sample Removal from the Container

On 24 June 2010, the sample container was transferred from clean chamber 1 to the transportation chamber, and the inner lid and the sample catcher were set into a catcher handling container. The inner lid, which was connected to the cover of sample catcher room A (there was also a room B), was removed, and its inner surface was observed and photographed. The catcher was placed into the catcher handling container, and these were transferred to clean chamber 2.
The inner surface of catcher room A was observed in detail by optical microscopy, and very few particles larger than a few hundred micrometers were observed inside room A. Initial attempts to remove Itokawa grains from the aluminum sample catcher were unsuccessful. Without exception, every suspected grain proved to be a small piece of protruding Al metal. Next, a special PTFE spatula was used to sweep the interior surface of catcher room A. Observation of the spatula in the FESEM showed the presence of hundreds of rocky particles 1–30 μm in size, half of which proved to be Itokawa regolith grains (based on EDX spectra, Nakamura et al. 2011). Finally, the inverted sample catcher rooms A and B were tapped with a screw driver, causing a rain of Itokawa and Al grains to fall onto specially-prepared quartz glass disks. The particles were moved from the quartz disks to copper SEM mounts for examination by FESEM-EDS at low kV and no conductive coating. It was thought that particles analyzed by FESEM might be contaminated by vacuum pump hydrocarbons. However, Naraoka et al. (2012) showed that no measurable hydrocarbons could be detected on identically-treated witness surfaces by time of flight-secondary ion mass spectrometry. Because of differences in the requirements for proposed analyses, sample containers used in the initial analyses of Itokawa grains varied. The samples analyzed for the mainstream of initial analyses, including synchrotron X-ray computed tomography and diffraction, FESEM and FE electron microprobe analysis, and a secondary ion mass spectrometry were embedded in epoxy resin and mounted onto glass fibers. They were transported within a stainless steel container filled with nitrogen. Samples for transmission electron microscopy were mounted in epoxy resin in a special nitrogen glovebox and transported within the same containers. In this case, they were processed without exposure to air. Samples for noble gas analyses were set in holes in a stainless steel base in a special flange of stainless steel, which had been baked beforehand to decrease contamination. This process was performed in nitrogen. Samples for organic analyses and instrumental neutron activation analysis were set in holes in a diamond plate with a diamond cover. Each investigation required special sample handling and encapsulation procedures. NASA's Hayabusa Curation Laboratory Ultimately, 10% of the captured Itokawa grains will be transferred to NASA, and curated in the Hayabusa Curation Laboratory at JSC, although at the time of this writing less than 100 grains have been transferred to NASA's care. Sample containers for the NASA samples distribution consists of a pair of vacuum flanges of stainless steel as an outer container and a pair of synthetic quartz glass plates as a case to enclose the samples. All parts of the containers are separately cleaned by JAXA. The samples are placed into the quartz dimple slides using an electrostatically-controlled micromanipulator in clean chamber 2 at JAXA. The flanges are sealed with six screw bolts and oxygen-free copper gasket coated by gold. The sealing was also performed in the clean chamber, so the inside of the container was filled with atmospheric pressure nitrogen in the clean chamber. Only three materials are used for these sample containers: synthetic quartz, gold, and stainless steel. The NASA Hayabusa Cleanroom is a single room, containing at its core a stainless steel and glass cabinet for sample storage. 
No special sample handling is performed in this lab as of now. Samples are merely stored and allocated as needed.

OSIRIS-REx Mission

Contamination knowledge for the OSIRIS-REx mission was covered in detail by Dworkin et al. (2018). This section will be a broad overview of the approaches to and implementation of CK for OSIRIS-REx. Because curation scientists were involved in mission planning from the start, level 1, 2, and 3 mission requirements address aspects of CK and are integrated into the Contamination Control and Contamination Knowledge Plans. Because the mission is focused on amino acids and, more generally, on a carbonaceous asteroid, contamination efforts included particular attention to organics. An amino acid baseline of \(180~\mbox{ng}/\mbox{cm}^{2}\) for OSIRIS-REx was based partly on analysis of Stardust foils, on which most of the amino acid signal came from a known contaminant called epsilon-aminocaproic acid (EACA), which is derived from the hydrolysis of nylon (Elsila et al. 2009). The planning for CC/CK involved identification of restricted materials, assessment of hydrazine contamination (from the monopropellant thrusters), ATLO cleanroom and payload fairing monitoring, coupon and material archiving, flight system witness plates, and the sample container air filter system. All of these activities and categories have led to a detailed understanding of potential contaminants for the collected sample.

Materials Restrictions

The OSIRIS-REx team was already aware of several specific compounds and classes of compounds that would need to be avoided or restricted due to contamination concerns, such as nylon and organic polymers (e.g., silicones, lubricants, adhesives). However, through open communication channels with engineers, additional materials or components were identified in advance, allowing ample time for identification of substitutes. For example, one process required diamond abrasives while another used a coating that included amorphous silica; both nanodiamonds and amorphous silica may also be present in primitive asteroid materials. The diamond-abraded surface was cleaned and verified diamond-free at JSC via FTIR, and the silica-containing material was removed. Galling (i.e., wear resulting from adhesion between sliding surfaces) is a frequent problem in spacecraft assembly and can be mitigated using various lubricants. This unavoidable use of lubricants is an example where material archiving can be helpful, and indeed the Braycote lubricant was archived for every use on the spacecraft during ATLO. Open communication between the subdiscipline engineers also led to the chemical investigation of products whose chemical makeups were unclear and/or proprietary. Analyses at GSFC and JSC allowed materials of concern to be tested in more detail, including couplants and adhesives, in several cases helping to identify replacement products (Dworkin et al. 2018). Detailed reports of materials testing were shared with the mission contamination knowledge scientists and placed on the internal science team website for review. The OSIRIS-REx thruster propellant, hydrazine, is known to react with organics via a Wolff-Kishner reduction (Dworkin et al. 2018). The mission team conducted tests of the reactivity of various organic compounds with anhydrous hydrazine and decided that the spacecraft thrusters should be canted away from the sampling site, which would result in \({<}180~\mbox{ng}/\mbox{cm}^{2}\) of hydrazine being deposited on the surfaces of the Touch-and-Go Sample Acquisition Mechanism (TAGSAM).
Even this hydrazine will rapidly evaporate from bare metal at sampling temperatures, but traces might be adsorbed by minerals or react with free carbonyls. In addition, drawing on experience from the Mars Phoenix lander mission and carrying out new calculations specific to OSIRIS-REx, mission engineers were able to estimate that the amount of unreacted hydrazine from a thruster plume reaching the sample would be \({<}120~\mbox{ng}/\mbox{cm}^{2}\) for a single collection event. The only times when the spacecraft thrusters could deposit hydrazine onto the TAGSAM head are when the head is in the sampling configuration. This occurs during initial deployment and checkout, baseline sample-mass measurements, the TAG rehearsals, and the TAG event(s). All of these considerations led to a much better understanding of the potential for hydrazine interaction with the samples and alleviated concerns about contamination.

Materials and Coupon Archiving

Archiving of materials identified to be of potential concern included those associated with the construction of the spacecraft, launch vehicle, SRC, TAGSAM (and associated hardware), and science instruments, as well as materials used for packaging, containment, and processing of samples. During spacecraft assembly, the science and curation teams worked with the OSIRIS-REx mission engineers and ATLO personnel to archive materials from the spacecraft. Archiving began in February 2014, with work on the sample acquisition and retention assembly (SARA) composite panel in Denver, peaked near launch, and was completed by January 2017. Additionally, as the instruments were assembled and readied for integration, members of the instrument teams packaged materials to send to JSC. As instruments and sub-assemblies of the spacecraft were tested and integrated, material coupons and items were sent to JSC through integration at KSC, with the last items having arrived at JSC in early 2017. A total of 406 items were received for the non-flight contamination knowledge collection. Data archived for each item include photos, its location on the spacecraft, a physical description, the company that made the item along with its webpage or other contact information, the archiving location, the archiver, and the date. The materials fall into general categories including metals (stainless steel, aluminum, titanium alloys), epoxies, paints, polymers, lubricants, non-volatile-residue (NVR) samples, sapphire, and various miscellaneous materials (a detailed list of items is found in Dworkin et al. 2018). The collection of CK materials, including witness plates, is archived and stored at NASA JSC in an ISO 7 cleanroom in dedicated stainless steel desiccator cabinets with separately supplied dry nitrogen lines.

Monitoring ATLO Cleanrooms: LM, KSC, and Payload Fairing

Witness plates were deployed to monitor cleanliness levels in the spacecraft assembly cleanrooms, high bay, environmental test facilities, spacecraft transport containers, the Kennedy Space Center (KSC) Payload Hazardous Servicing Facility (PHSF), and the interior of the Atlas V launch vehicle fairing up until one day prior to launch of OSIRIS-REx. All through the ATLO process (from March 2015 until late August 2016; Fig. 13), Si wafer and Al foil witness plates were deployed in these areas at Lockheed Martin (LM) and KSC to provide a record of particle counts and volatiles for current and future scientific studies.
These plates were deployed in roughly monthly increments for 16 months, with each unit containing 4 Si wafers and 4 Al foils, for a total of 128 individual witness plates (64 Si wafers and 64 Al foils). One of each witness plate type (Si and Al) was analyzed immediately, while the remaining three are archived at JSC with the materials archive (described in the previous section) for future analysis. In addition to the witness plates, gas samples were collected in selected ATLO locations such as cleanrooms and testing facilities, with the goal of identifying any unexpected or problematic species, although no unexpected species were identified.

Fig. 13 Simplified schedule for SARA development, fabrication and assembly, and OSIRIS-REx Assembly, Testing and Launch Operations (ATLO)

Flight Witness Plates and Air Filter CK

To witness the environment experienced by the sample collection system on the spacecraft, a series of witness plates were designed and implemented in three different areas in the sample canister—the top of the TAGSAM head, the TAGSAM wrist joint assembly, and the inside of the sample canister. The aluminum and sapphire witness plates are designed to be deployed on TAGSAM and in the canister, recording three different exposure timeframes: always, pre-stow, and post-stow. These plates will be removed in the cleanroom immediately upon return to JSC and then stored in dedicated cabinets for contamination assessment studies. In addition to the witness plates, the sample canister has a two-way air filter in its lid that protects the sample from external contaminants but can also trap any volatile or particulate material, preventing it from leaving the canister after TAGSAM is stowed. The filter performance was tested for moisture, particulate, and organic trapping efficiency during pre-launch activities, and the design also drew heavily on similar filters used for the Stardust mission. Additional contamination knowledge activities will continue during Phase E of the mission (September 2016 to September 2023) with archiving of materials associated with the cleanroom construction at JSC and materials from the UTTR recovery site (which might include soil, air, and other environmental background materials that could pose a contamination risk to OSIRIS-REx samples).

Mars 2020 Rover Mission

Although the Mars 2020 rover mission is not a sample return mission itself, it will be collecting and caching samples from the martian surface that could be picked up by a future mission, and hence Mars 2020 may represent the first mission in an overall Mars sample return (MSR) campaign. Apart from in situ surface science, the goal of this first mission is to assemble a collection of rigorously documented and returnable cached samples. Contamination knowledge samples from Mars 2020 are in the process of being collected and curated to be part of the overall Mars sample collection if the cached samples are eventually returned. While there are many aspects of the MSR contamination knowledge samples that will be similar to those of previous sample return missions, this sample return campaign has presented a number of unique opportunities and challenges. Perhaps the most notable is that, due to the possibility of extinct or extant life on the surface of Mars, this sample return mission is designated as restricted by NASA's Planetary Protection Office. This designation imposes more stringent organic contamination control requirements as well as the addition of biological contamination control and CK.
Furthermore, this designation requires that the returned martian samples be curated in a containment facility in order to protect Earth from possible martian biohazards. The second notable difference is that the Mars 2020 mission would be part of an MSR campaign. Therefore, not only will the CK collection need to be coordinated between multiple missions, but significant point sources of possible contamination will remain on the martian surface. This mission architecture could make it more challenging to track a contaminant to its source. To minimize this knowledge gap, the CK samples collected will have to be more extensive than those of traditional sample return mission architectures. Finally, the Mars 2020 mission was not officially considered a sample return mission from its inception, so interaction with curation personnel and/or appropriate curation expertise was delayed and did not occur during the early stages of mission design, which has added an extra complication given the additional costs and time constraints of assembling a comprehensive CK collection.

Contamination Knowledge Samples

As with other sample return missions, an array of CK samples will be collected throughout ATLO. These samples will range from high-fidelity flight reference materials to airfall witness samples. The flight and non-flight hardware and spacecraft components considered for CK are those items that are a potential contamination risk to the sample-intimate hardware during launch, cruise, Mars entry/descent, landing, and rover surface operations. These "line-of-sight" items are determined based on the Master Equipment List (MEL) and the Master Material List (MML). The MML will also be utilized to determine materials that have any potential to shed particulates and/or outgas molecular organics, directly or indirectly impacting the sample and caching subsystems. These samples can range from paint samples to flight spares or flight-spare equivalents. Given the expanded scope of CK samples, an array of non-flight reference materials could help in tracing contamination. Some of these non-flight reference materials include facility tools, equipment, and environmental components used during fabrication, part processing, precision cleaning, and assembly that are considered a potential contamination risk. Witness items (e.g., plates and wipe samples) will be deployed within cleanrooms and on flow-benches during the assembly of the sample and caching subsystem. These witness items are duplicates of the witness items deployed and taken for contamination control (CC) and planetary protection (PP) verification. All data collected during CC and PP verification will be tied to the respective CK sample, integrated into the CK database, and made available to the scientific community. Finally, unlike all other CK collections, which are stored in nitrogen at ambient laboratory temperatures, the introduction of biological CK samples adds the requirement to curate frozen samples (\({<}{-}80\ ^{\circ}\mbox{C}\)).

Innovation in Sample Storage

Due to the stringent organic, inorganic, and biological CC and PP requirements, new ways to secure and store CK samples were developed for Mars 2020. Because of organic contamination concerns, the storage bags utilized for organic CK cannot be heat-sealed, so we developed customized bag clips. These two-piece bag clips are constructed from 300 series stainless steel and Teflon. These clips will provide a strong seal that will ensure sample safety during shipment and long-term curation, and they will be used for future missions.
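To make the CK database mentioned above more concrete, the sketch below shows one possible way a single archived CK item could be recorded; the field names and example values are purely illustrative and do not represent the actual JSC or Mars 2020 database schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CKItemRecord:
    """Illustrative record for one archived contamination knowledge item,
    modeled on the kinds of metadata described in the text (photos, location
    on the spacecraft, vendor information, archiver, date, storage location)."""
    item_id: str                         # unique identifier assigned at archiving
    description: str                     # physical description of the item
    category: str                        # e.g., "metal", "epoxy", "lubricant", "witness plate"
    spacecraft_location: Optional[str]   # where on the flight system it was used
    vendor: Optional[str]                # manufacturer and contact information
    archived_by: str                     # person or team who archived the item
    archive_date: str                    # date of archiving (ISO 8601)
    storage_location: str                # cabinet or shelf in the curation lab
    photo_files: List[str] = field(default_factory=list)
    verification_data: List[str] = field(default_factory=list)  # links to CC/PP results

example = CKItemRecord(
    item_id="CK-0001",
    description="Lubricant batch sample, as applied during assembly",
    category="lubricant",
    spacecraft_location="sampling mechanism wrist joint",
    vendor="(vendor name and contact URL)",
    archived_by="curation staff",
    archive_date="2016-05-01",
    storage_location="ISO 7 cleanroom, desiccator cabinet 3",
)
```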
For the long-term storage of CK samples, a few different storage containers needed to be upgraded or designed from scratch. Due to size constraints, the highest-fidelity sample-intimate hardware will be stored in a custom 316 stainless steel bolt-top canister with a Teflon seal, plumbed to accommodate an inert atmosphere to ensure sample safety in case of a breach in the primary containment. However, the bulk of the inorganic and organic reference and witness items will be stored within highly customized desiccators. As with the bolt-top containers, this highly customized desiccator is a modified version of the desiccators utilized for the other Astromaterials collections. However, due to stringent contamination control, the construction materials are highly limited. For example, whereas other desiccators can use traditional flexible and highly compressible multi-use gasket materials, the organic contamination requirements for Mars 2020 preclude their use, so a new door gasket design was required. Due to its low outgassing properties, Teflon was the preferred gasket material. However, the trade-off for low outgassing is low flexibility and low compressibility, and the new design had to work around these limitations. The new design leverages the slight compressibility of the Teflon gasket material while also utilizing it as a barrier material between a possible contaminant and the samples. The collection of Mars 2020 CK is an ongoing activity that will continue until the spacecraft is launched from Kennedy Space Center in July 2020. Full details of the contamination knowledge collection and the laboratories that support these samples will be described in subsequent publications through joint efforts between the Mars 2020 science team and JSC curation personnel after a decision is made regarding the overall MSR campaign, which is expected sometime in 2020.

Preliminary Examination of Samples

The preliminary examination (PE) of returned samples, be they collected from an extraterrestrial body and returned by spacecraft or recovered from a frozen lake or someone's backyard on Earth, is arguably the first step in their curation. The importance of this step cannot be overstated with respect to either the early identification of the sample type or the careful preservation of planetary materials for the future. Preliminary examination affects both of these, and many activities in between. Simply defined, preliminary examination is the process by which returned samples are documented and characterized to the point that the appropriate scientific research community is provided with enough information to select and request the samples for their individual, PI-led scientific studies. The results of preliminary investigations are typically presented in catalogs or online databases that are publicly available, and preliminary examination is considered to be a science-enabling activity.

Steps Involved in Preliminary Examination

The very first steps of preliminary examination of samples may take place before they are even collected. For all returned samples, the astronauts and/or instruments onboard the spacecraft (orbiter, lander, or rover) will gather as much data as possible about the surface of the body from which the samples are gathered. These data will likely include photographs, spectroscopic data, and other possible measurements given the available instruments/crew.
For samples collected on the ground, such as meteorites, the documentation should ideally include a photograph with a scale bar and an indication of compass orientation, a general description of the sample (e.g., percentage of fusion crust, possible rock type, notable physical characteristics), and a description of the site where the sample was found, along with any other noteworthy features. Care should be taken during these steps to minimize exposure to contamination sources, and any potential contamination should be documented in the collection notes; a list of recommended procedures and materials for collection of freshly fallen meteorites is provided in Herd et al. (2016). Photographs of sample containment vessels may be taken during and after collection. At the very least, the type of sample containment/transport vessel should be documented, to inform future sampling and curation documentation as well as subsequent scientific analysis. In all cases, the utmost care should be taken to ensure curation best practices are implemented to minimize forward contamination. Once samples are received in a curatorial facility, their current state should be documented. These details may include information about the type of materials that the astromaterials samples were transported in and the state of those materials after landing (e.g., are seals intact? are they covered in dust?), and the documentation should include photographs. Any required sampling of head gas in sample containment vessels should be completed and documented before the sample containers are opened. Once the sample vessels are breached, information that may be of interest to researchers will be lost, along with the opportunity to ever gather it from those particular samples again. In addition, any tomographic scanning (e.g., X-ray or neutron) that is required to occur before samples are opened should be completed, and all processes involved in those analyses should be carefully documented. Once samples are opened and, if applicable, removed from their sample containment vessels, the initial, curatorial steps involved in basic characterization of these materials take place. The main purpose of these efforts is to document exactly how the sample existed when it was opened. The sample mass and its appearance at the time of opening should be documented. There should be a written description of the state of the material (intact rock, crumbled rock or sediments, powder, microscopic grains in a gel, etc.), its general attributes (color, grain size, physical appearance), and any notable features that are present (veins, fractures, fusion crust, metal content, etc.). Of utmost importance is photographic documentation (possibly with video, depending on the type of samples). Sample numbers should be assigned during this phase if they were not designated during collection. In short, the initial characterization of samples includes any process that can take place without making changes to the state of the samples other than opening their sample containment device, which is obviously unavoidable. Ideally, these steps should be performed without touching the samples with anything other than curatorial tools composed of materials determined to make contact with the samples without compromising their pristine nature.
Determination of the appropriate materials to use is based on several factors and typically represents a compromise between functionality and contamination risk, but foremost those materials must not compromise the ability of the samples to be used to answer the primary science requirements for a mission. After basic characterization is completed, the curatorial phase of preliminary examination of the samples begins. Preliminary examination is distinct from science activities, and its goal is to produce a sample catalog with a level of detail about each sample that is sufficient for members of the scientific community to make informed requests of materials to conduct their PI-led scientific investigations. Preliminary examination of materials can occur for each representative portion of a sample, if needed to produce a meaningful and informative sample catalog. For samples that also have a mission science team, preliminary examination can happen in parallel with science activities, but the goals of these two activities remain distinct. The methods and analytical techniques used during preliminary examination of a sample will be tailored to the primary science requirements for that sample. Furthermore, these processes will be determined based on sample size, sample form, sample vessel, and the need to prevent either forward or backward contamination. Large samples such as meteorites need to be touched with tools/gloves to be weighed, photographed, and described, and they need to be broken with tools to provide material for their classification. The next steps of preliminary examination of meteorites require the classification chip to be processed even further. As part of the U.S. Antarctic Meteorite Program (AMP), the smaller meteorite chip is weighed, placed into a sample container, and sent from NASA JSC to the Smithsonian National Museum of Natural History for further visual examination via binocular microscope, chipping, insertion into a sample holder grid, and polishing for energy dispersive spectroscopic (EDS) analyses, and/or made into a petrographic thin or thick section (both of which involve exposure to epoxy and polishing grit). These samples are coated with carbon and analyzed with a scanning electron microscope with EDS and/or an electron microprobe to determine their mineral compositions (namely olivine, pyroxene, and, in the case of iron meteorites, FeNi metal). Other collections contain much smaller specimens and require much more careful micromanipulation of materials, as discussed previously in the small particle handling section. Preliminary examination of gas and volatile-rich samples presents unique challenges to the curatorial preliminary examination process. In the gas phase, samples cannot be photo-documented or weighed. Condensed volatiles can be weighed, but only as a supplement to total or partial-pressure measurements of the quantity of sample in the gas phase. Whether the sample is condensed or not, total and partial pressures of major species (e.g., H2O, CO2, etc., depending on the sample origin) should be monitored using techniques that consume little to no sample. Spectroscopy-based techniques (e.g., FTIR, Raman, cavity ring-down spectroscopy) provide possible non-destructive means by which an initial characterization of the compounds in a sample can be ascertained; however, care must be taken to ensure that the techniques (and wavelengths) used do not pose a risk of altering the sample composition or isotopic distribution.
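To give a feel for how such pressure measurements translate into an estimate of how much gas is present, the short sketch below applies the ideal gas law to a set of measured partial pressures. The container volume, temperature, and pressures are invented values for illustration only, and real volatile-rich samples may require non-ideal corrections.

```python
# Estimate moles of each major species in a sealed gas sample from its partial
# pressure using the ideal gas law, n = pV / (RT). All values are illustrative.
R = 8.314  # gas constant, J mol^-1 K^-1

def moles_from_pressure(partial_pa: float, volume_m3: float, temp_k: float) -> float:
    return partial_pa * volume_m3 / (R * temp_k)

volume_m3 = 50e-6      # hypothetical 50 mL sample container
temp_k = 293.0         # hypothetical measurement temperature (~20 degrees C)
partial_pressures_pa = {"H2O": 120.0, "CO2": 850.0, "N2": 2100.0}  # made-up values

for species, p in partial_pressures_pa.items():
    print(f"{species}: {moles_from_pressure(p, volume_m3, temp_k):.2e} mol")
```

Repeating such a calculation over time is one non-consumptive way to watch for leaks or loss of volatiles from a sealed container.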
High-sensitivity gas chromatography-mass spectrometry (GC-MS) or other high-sensitivity gas analysis techniques can be used to supplement spectroscopy using small quantities (<1 g) of sample. The concept of a "representative sample" for allocation purposes requires further development for gas and volatile-rich samples. Depending on the temperature at which preliminary examination takes place, some species may be condensed while others remain in the gas phase. Additionally, thorough mixing of a gas-phase sample may be impossible to guarantee, especially while the sample remains sealed in the flight sample container. Therefore, when producing aliquots of gas samples for distribution to the scientific community, homogeneity between aliquots should not be expected. One possible solution is to separate gas-phase compounds by their condensation temperatures, freezing out compounds in a sequence that allows them to be separated by composition. This would limit (and possibly prohibit) bulk compositional analyses, but it would separate the sample into known compositions from which aliquots could be obtained. Regardless of how representative samples are defined, significant development is still required to address this capability gap for future volatile-rich sample return missions. An alternative is to conduct scientific analysis of the gases immediately, prior to any preliminary examination. This option would be desirable in any instance where the primary science goals for a sample could be compromised through any preliminary examination processes. Once the data are collected during preliminary examination, they are compiled and released to the scientific community in various online/digital formats or sample catalogs in order to allow researchers to request them for study. For U.S. meteorites, this is the Antarctic Meteorite Newsletter published in February and September each year (for example, https://curator.jsc.nasa.gov/antmet/amn/amn.cfm#n412). For other collections, different mechanisms for reporting available materials are in place. All NASA-curated samples are detailed on the ARES website (https://curator.jsc.nasa.gov/), and new samples are announced biennially in the Astromaterials Newsletter (https://curator.jsc.nasa.gov/newsletter/#n0101). Meteorites from around the world are detailed in the Meteoritical Bulletin (https://www.lpi.usra.edu/meteor/metbull.php). Preliminary examination does not end with the process of reporting data in a newsletter. The curation process regularly involves the subsampling of materials. Each time this occurs, the subsampling process is ideally well documented with photographs, diagrams of subsampling, the weights of both removed and remaining materials, and descriptions of those materials if noteworthy. The process of subsampling larger samples exposes new material at the sample surface to the curatorial environment. These newly exposed materials may require description and, in extreme cases, if they reveal something extraordinary, may necessitate announcement to the scientific community of new sample opportunities. Preliminary examination, unlike initial characterization, will go on as long as there is sample remaining in our collections, and hence sample catalogs are living documents.

Who Does PE?

Who should be involved in the preliminary examination of samples depends on the mechanism by which the samples were recovered.
For spacecraft missions to planetary bodies within our Solar System, there are numerous stakeholders. These stakeholders include the mission science team that orchestrated and successfully executed the sample return mission, the scientific community at large that will also want to study the returned samples, and future generations of scientists that will want to study these samples with technology that has not yet been invented. It is the responsibility of a collection curator to serve the interests of all of these stakeholders and to find the right balance between sample consumption and sample conservation that maximizes science returns on the samples over multi-decade timescales. Given the important role of a collection curator for the safety and long-term viability of returned samples, it only makes sense for astromaterials returned from spacecraft missions to be received, opened, processed, and characterized within a sample curation facility by specially trained personnel under the management of a curatorial authority. That said, the people involved in the preliminary examination of returned materials will not be limited to curation personnel. The people involved in the preliminary examination should include some combination of curatorial processors trained to process and document minuscule samples, collection curators, and members of the sample science team. A healthy tension may develop between the curator and members of a sample science team (i.e., conservation vs. consumption, respectively) because the science team has scientific mission requirements to achieve within a fixed period of time, and the curator must think beyond that time frame to the long-term viability and availability of the samples. To minimize such tensions, it is important to have policies in place prior to the samples being returned that outline how much sample can be consumed by the mission science team to achieve the primary science goals. With less controlled sample collection, such as meteorite falls recovered anywhere in the world, the very first steps of preliminary examination (i.e., basic characterization) may be carried out in the field by trained meteorite hunting/recovery programs such as those run by the US (ANSMET), Japan, Belgium, China, South Korea, and the UK. The ANSMET program, for example, documents the field location of each meteorite recovered with a GPS position, a photograph next to a field number, the percentage of fusion crust visible on the sample, an educated guess at the meteorite type, a general description of any other notable features (e.g., it was found in liquid water, it was found half buried in ice, it was broken in half), and anything else that may require comment (e.g., it was accidentally touched with a glove, a bare hand, or a snowmobile) and could possibly affect future analyses. Once the field documentation is completed and the samples are sent to a curation facility, preliminary examination is largely carried out by curation personnel, and the samples are made available for request without first being analyzed by a mission science team.

Where do We Draw the Line Between Preliminary Examination for Curation and Science?

Basic characterization and preliminary examination are, as stated above, the processes by which returned samples are initially documented and sufficiently characterized to provide the appropriate scientific research community with enough information to select and request them for scientific study.
A long-standing discussion among curators worldwide concerns what constitutes "too much characterization" and where the line is drawn between performing that characterization and conducting research that should be PI-led. The line is particularly difficult to delineate for small particle collections where the entire particle may be <10 μm and almost any observation with an electron beam or laser could alter the sample (e.g., interplanetary dust particles in NASA's Cosmic Dust collection). However, there are also some examples of disagreement among curators as to which instrumentation and analyses are appropriate for preliminary examination on large samples where subsampling does not negatively impact the availability of the material (e.g., oxygen isotopic measurements, X-ray computed tomography, etc.). Nonetheless, the line between characterization and science is going to be different for each sample type, and that line should be optimally placed such that wasted sample consumption resulting from insufficient information about the samples in a catalog (i.e., the consumed sample did not have the phase of interest) is minimized whilst serendipitous discoveries within the samples during scientific investigations can still occur. One of the questions meteorite curators commonly ask, for example, is whether or not oxygen isotopic compositions should be determined for meteorite samples returned from Antarctica or elsewhere. These measurements are very much in the realm of PI-led research, but with meteorites coming from a variety of different Solar System bodies, providing the \(\Delta ^{17}\)O composition gives the curator and requesting scientist the background information needed to identify potentially unique samples or those thought to be from Mars or other potentially unknown sources. Generally, isotope labs in the U.S. have provided the U.S. Antarctic Meteorite Program with these data as needed, but O-isotope analysis should only be used as a characterization tool when other, less destructive, methods cannot be used to uniquely classify the material. Another example is the use of X-ray computed tomography (XCT) as a characterization tool. Having three-dimensional context for samples provides curation personnel with invaluable information about the contents of samples, particularly highly heterogeneous samples like regolith breccias. This information can be used to identify clasts that are not exposed at the surface, and it can be used to make informed decisions about cuts or sample splits during sample processing. However, XCT exposes samples to radiation doses that can have lasting effects on the samples. Thermoluminescence in particular is negatively impacted by XCT analysis of samples (Sears et al. 2016, 2018), and studies are underway to better characterize its effect on organic compounds in samples (Hanna and Ketcham 2017; Friedrich et al. 2019). Most curated astromaterials samples today are room-temperature solids; however, as we advance to collecting materials that require cryogenic storage or gas-phase samples, further questions arise as to how to conduct a preliminary examination, especially if the "shelf-life" of the samples prohibits waiting for the technology of tomorrow to analyze the samples.
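As a brief aside on the oxygen isotope screening mentioned above, the snippet below shows the common linearized calculation of \(\Delta ^{17}\)O from measured δ17O and δ18O values. The slope of the terrestrial fractionation line used here (~0.52) and the example δ values are illustrative; laboratories differ in the exact slope and in whether a logarithmic formulation is used.

```python
# Linearized mass-dependent relation: Delta17O = delta17O - slope * delta18O.
# A sample with Delta17O clearly different from zero falls off the terrestrial
# fractionation line and may flag an unusual parent body. Values are examples.
TFL_SLOPE = 0.52  # commonly quoted linear slope; some labs use ~0.528 or a log form

def cap_delta_17o(delta17o: float, delta18o: float, slope: float = TFL_SLOPE) -> float:
    return delta17o - slope * delta18o

# Hypothetical analyses (values in per mil):
print(round(cap_delta_17o(2.9, 4.4), 2))  # ~0.61, well off the terrestrial line
print(round(cap_delta_17o(3.6, 6.9), 2))  # ~0.01, indistinguishable from terrestrial
```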
An alternative approach to a preliminary examination for materials that may have a short shelf-life is to organize a large-scale consortium study where all stakeholders in the scientific community are given the opportunity to join, either through an "all are welcome" approach like Stardust or competitively through a proposal process if there is a need to limit the number of investigators. Conservation for the sake of conservation is not a meaningful philosophical approach to astromaterials curation. It is important that the goals of maximizing science returns on samples over their viable lifetimes are an integral part of a long-term sample conservation plan. Advanced curation is critical to the success of sample return missions and Earth-based sample acquisition and plays an integral part in enabling the high-precision measurements that are often done on astromaterials samples. Looking forward, advanced curation must prepare for sample return missions from any celestial body within the solar system, including planets, moons, asteroids, and/or comets. The direction and scope of advanced curation research are driven by (1) existing strategic knowledge gaps identified through lessons learned from previous sample return missions and Earth-based programs that collect astromaterials; (2) the emerging needs of the scientific community that studies astromaterials samples; and (3) the selection of new targets for sample return missions and the associated curation and sample handling requirements of those missions (e.g., Beaty et al. 2019; Haltigin et al. 2018; McLennan et al. 2011; Vander Kaaden et al. 2019). The primary goal of advanced curation is to both reduce and quantify contamination of astromaterials and to preserve the scientific integrity of all samples from mission inception through ATLO, sample collection, curation/preliminary examination on Earth, curation/storage, and secure delivery of the samples to Earth-based laboratories for in-depth scientific analysis. Advanced curation is an interdisciplinary field of research and development and also serves as an important science-enabling activity. The collective lessons learned from previous spacecraft missions and the results of advanced curation research will work in tandem to feed forward into better spacecraft designs and enable more stringent requirements for future sample return missions.

References

P.I. Abell, C.H. Draffan, G. Eglinton, J.M. Hayes, J.R. Maxwell, C.T. Pillinger, Organic analysis of the returned Apollo 11 lunar sample, in Apollo 11 Lunar Science Conference, Proc., vol. 2 (Pergamon, New York, 1970), pp. 1757–1773 C.M.O.D. Alexander, G.D. Cody, Y. Kebukawa, R. Bowden, M.L. Fogel, A.L.D. Kilcoyne, L.R. Nittler, C.D.K. Herd, Elemental, isotopic, and structural changes in Tagish Lake insoluble organic matter produced by parent body processes. Meteorit. Planet. Sci. 49, 503–525 (2014) C.C. Allen, F.G. Albert, J. Combie, A. Banin, Y. Yablekovitch, I. Kan, R.J. Bodnar, V.E. Hamilton, B.L. Jolliff, K. Kuebler, A. Wang, D.J. Lindstrom, P.A. Morris, R.V. Morris, R.W. Murray, L.E. Nyquist, P.D. Simpson, A. Steele, S.J. Symes, Effects of sterilizing doses of gamma radiation on Mars analog rocks and minerals. J. Geophys. Res., Planets 104, 27043–27066 (1999) C. Allen, J. Allton, G.E. Lofgren, K. Righter, M. Zolensky, Curating NASA's extraterrestrial samples—Past, present, and future. Chem. Erde 71, 1–20 (2011) J.H. Allton, J.D. Hittle, E.T. Mickelson, E.K.
Stansbery, Cleaning Genesis Sample Return Canister for Flight: Lessons for Planetary Sample Return (NASA Johnson Space Center, Houston, 2016), p. 41 T.J. Anchordoquy, M.C. Molina, Preservation of DNA. Cell Preserv. Technol. 5, 180–188 (2007) L.D. Andrade, R. Awasthi, K. Dua, T.D.A. Pinto, Matrix-assisted laser desorption ionization-time of flight mass spectrometry for identification of bacteria isolated from pharmaceutical cleanrooms. Interv. Med. Appl. Sci. 10, 45–53 (2018) J.O. Annexstad, W.A. Cassidy, Collecting and processing Victoria Land meteorites, in Memoirs of National Institute of Polar Research, Special Issue (1980), pp. 14–20 L. Aronowitz, C. Baulknight, J. Berkowitz-Mattuck, A. Buchler, P. Glaser, F. Koch, T. Luzzi, N. Milford, S. Penn, F. Pomilla, Investigation of lunar surface chemical contamination by LEM descent engine and associated equipment Final report. Grumman Aircraft Engineering Corp.; Research Dept.; Bethpage, NY, United States (1966a) p. 219 L. Aronowitz, C. Baulknight, A. Buchler, F. Koch, T. Luzzi, S. Penn, A. Wechsler, D. Weiss, Investigation of Lunar Surface Chemical Contamination by LEM Descent Engine and Associated Equipment. Grumman Aircraft Engineering Corp.; Research Dept.; Bethpage, NY, United States (1966b), p. 55 M. Bashir, M. Ahmed, T. Weinmaier, D. Ciobanu, N. Ivanova, T.R. Pieber, P.A. Vaishampayan, Functional metagenomics of spacecraft assembly cleanrooms: Presence of virulence factors associated with human pathogens. Front. Microbiol. 7, 12 (2016) D. Beaty, C. Allen, D. Bass, K. Buxbaum, J. Cambell, D. Lindstrom, S. Miller, D. Papanastassiou, Planning considerations for a Mars sample receiving facility: Summary and interpretation of three design studies. Astrobiology 9 (2009). https://doi.org/10.1089/ast.2009.0339 D.W. Beaty, M.M. Grady, H.Y. McSween, E. Sefton-Nash, B.L. Carrier, F. Altieri, Y. Amelin, E. Ammannito, M. Anand, L.G. Benning, J.L. Bishop, L.E. Borg, D. Boucher, J.R. Brucato, H. Busemann, K.A. Campbell, A.D. Czaja, V. Debaille, D.J. Des Marais, M. Dixon, B.L. Ehlmann, J.D. Farmer, D.C. Fernandez-Remolar, J. Filiberto, J. Fogarty, D.P. Glavin, Y.S. Goreva, L.J. Hallis, A.D. Harrington, E.M. Hausrath, C.D.K. Herd, B. Horgan, M. Humayun, T. Kleine, J. Kleinhenz, R. Mackelprang, N. Mangold, L.E. Mayhew, J.T. McCoy, F.M. McCubbin, S.M. McLennan, D.E. Moser, F. Moynier, J.F. Mustard, P.B. Niles, G.G. Ori, F. Raulin, P. Rettberg, M.A. Rucker, N. Schmitz, S.P. Schwenzer, M.A. Sephton, R. Shaheen, Z.D. Sharp, D.L. Shuster, S. Siljeström, C.L. Smith, J.A. Spry, A. Steele, T.D. Swindle, I.L. ten Kate, N.J. Tosca, T. Usui, M.J. Van Kranendonk, M. Wadhwa, B.P. Weiss, S.C. Werner, F. Westall, R.M. Wheeler, J. Zipfel, M.P. Zorzano, The potential science and engineering value of samples delivered to Earth by Mars sample return. Meteorit. Planet. Sci. 54, S3–S152 (2019) H.A. Bechtel, C. Allen, S. Bajt, J. Borg, F. Brenker, J. Bridges, D.E. Brownlee, M. Burchell, M. Burghammer, A.L. Butterworth, P. Cloetens, A.M. Davis, C. Floss, G.J. Flynn, D. Frank, Z. Gainsforth, E. Grun, P.R. Heck, J.K. Hillier, P. Hoppe, L. Howard, G.R. Huss, J. Huth, A. Kearsley, A.J. King, B. Lai, J. Leitner, L. Lemelle, H. Leroux, L.R. Nittler, R.C. Ogliore, F. Postberg, M.C. Price, S.A. Sandford, J.A. Sans Tresseras, S. Schmitz, T. Schoonjans, G. Silversmit, A. Simionovici, R. Srama, F.J. Stadermann, T. Stephan, J. Stodolna, R.M. Stroud, S.R. Sutton, R. Toucoulou, M. Trieloff, P. Tsou, A. Tsuchiyama, T. Tyliczszak, B. Vekemans, L. Vincze, A.J. Westphal, M.E. 
Zolensky et al., FTIR analysis of aerogel keystones from the stardust interstellar dust collector: Assessment of terrestrial organic contamination and X-ray microprobe beam damage, in 42nd Lunar and Planetary Science Conference, Abstract #1971 (2011) J.N. Benardini, K. Venkateswaran, Application of the ATP assay to rapidly assess cleanliness of spacecraft surfaces: A path to set a standard for future missions. AMB Express 6, 8 (2016) A. Bieler, K. Altwegg, H. Balsiger, A. Bar-Nun, J.J. Berthelier, P. Bochsler, C. Briois, U. Calmonte, M. Combi, J. De Keyser, E.F. van Dishoeck, B. Fiethe, S.A. Fuselier, S. Gasc, T.I. Gombosi, K.C. Hansen, M. Hässig, A. Jäckel, E. Kopp, A. Korth, L. Le Roy, U. Mall, R. Maggiolo, B. Marty, O. Mousis, T. Owen, H. Rème, M. Rubin, T. Sémon, C.Y. Tzou, J.H. Waite, C. Walsh, P. Wurz, Abundant molecular oxygen in the coma of comet 67P/Churyumov–Gerasimenko. Nature 526, 678 (2015) P.A. Bland, The Desert Fireball Network. Astron. Geophys. 45, 5.20–5.23 (2004) P.A. Bland, P. Spurny, A.W.R. Bevan, K.T. Howard, M.C. Towner, G.K. Benedix, R.C. Greenwood, L. Shrbeny, I.A. Franchi, G. Deacon, J. Borovicka, Z. Ceplecha, D. Vaughan, R.M. Hough, The Australian Desert Fireball Network: A new era for planetary science. Aust. J. Earth Sci. 59, 177–187 (2012) A.I. Blinova, C.D.K. Herd, M.J.M. Duke, Testing variations within the Tagish Lake meteorite—II: Whole-rock geochemistry of pristine samples. Meteorit. Planet. Sci. 49, 1100–1118 (2014) J. Borovička, Z. Charvát, Meteosat observation of the atmospheric entry of 2008 TC over Sudan and the associated dust cloud. Astron. Astrophys. 507, 1015–1022 (2009) F. Brandstätter, History of the meteorite collection of the Natural History Museum of Vienna. Geol. Soc. (Lond.) Spec. Publ. 256, 123–133 (2006) C.E. Bresky, M. Fries, The Aquarius project: The first student-driven attempt to retrieve meteorites from underwater, in 49th Lunar and Planetary Science Conference, Abstract #3004 (2018) D. Brownlee, P. Tsou, J. Aleon, C.M.O. Alexander, T. Araki, S. Bajt, G.A. Baratta, R. Bastien, P. Bland, P. Bleuet, J. Borg, J.P. Bradley, A. Brearley, F. Brenker, S. Brennan, J.C. Bridges, N.D. Browning, J.R. Brucato, E. Bullock, M.J. Burchell, H. Busemann, A. Butterworth, M. Chaussidon, A. Cheuvront, M.F. Chi, M.J. Cintala, B.C. Clark, S.J. Clemett, G. Cody, L. Colangeli, G. Cooper, P. Cordier, C. Daghlian, Z.R. Dai, L. D'Hendecourt, Z. Djouadi, G. Dominguez, T. Duxbury, J.P. Dworkin, D.S. Ebel, T.E. Economou, S. Fakra, S.A.J. Fairey, S. Fallon, G. Ferrini, T. Ferroir, H. Fleckenstein, C. Floss, G. Flynn, I.A. Franchi, M. Fries, Z. Gainsforth, J.P. Gallien, M. Genge, M.K. Gilles, P. Gillet, J. Gilmour, D.P. Glavin, M. Gounelle, M.M. Grady, G.A. Graham, P.G. Grant, S.F. Green, F. Grossemy, L. Grossman, J.N. Grossman, Y. Guan, K. Hagiya, R. Harvey, P. Heck, G.F. Herzog, P. Hoppe, F. Horz, J. Huth, I.D. Hutcheon, K. Ignatyev, H. Ishii, M. Ito, D. Jacob, C. Jacobsen, S. Jacobsen, S. Jones, D. Joswiak, A. Jurewicz, A.T. Kearsley, L.P. Keller, H. Khodja, A.L.D. Kilcoyne, J. Kissel, A. Krot, F. Langenhorst, A. Lanzirotti, L. Le, L.A. Leshin, J. Leitner, L. Lemelle, H. Leroux, M.C. Liu, K. Luening, I. Lyon, G. MacPherson, M.A. Marcus, K. Marhas, B. Marty, G. Matrajt, K. McKeegan, A. Meibom, V. Mennella, K. Messenger, S. Messenger, T. Mikouchi, S. Mostefaoui, T. Nakamura, T. Nakano, M. Newville, L.R. Nittler, I. Ohnishi, K. Ohsumi, K. Okudaira, D.A. Papanastassiou, R. Palma, M.E. Palumbo, R.O. Pepin, D. Perkins, M. Perronnet, P. Pianetta, W. Rao, F.J.M. 
Rietmeijer, F. Robert, D. Rost, A. Rotundi, R. Ryan, S.A. Sandford, C.S. Schwandt, T.H. See, D. Schlutter, J. Sheffield-Parker, A. Simionovici, S. Simon, I. Sitnitsky, C.J. Snead, M.K. Spencer, F.J. Stadermann, A. Steele, T. Stephan, R. Stroud, J. Susini, S.R. Sutton, Y. Suzuki, M. Taheri, S. Taylor, N. Teslich, K. Tomeoka, N. Tomioka, A. Toppani, J.M. Trigo-Rodriguez, D. Troadec, A. Tsuchiyama, A.J. Tuzzolino, T. Tyliszczak, K. Uesugi, M. Velbel, J. Vellenga, E. Vicenzi, L. Vincze, J. Warren, I. Weber, M. Weisberg, A.J. Westphal, S. Wirick, D. Wooden, B. Wopenka, P. Wozniakiewicz, I. Wright, H. Yabuta, H. Yano, E.D. Young, R.N. Zare, T. Zega, K. Ziegler, L. Zimmerman, E. Zinner, M. Zolensky, Comet 81P/Wild 2 under a microscope. Science 314, 1711–1716 (2006) A.L. Burlingame, M. Calvin, J. Han, W. Henderson, W. Reed, B.R. Simoneit, Study of carbon compounds in Apollo 11 lunar samples, in Apollo 11 Lunar Science Conference, Proc., vol. 2 (Pergamon, New York, 1970), pp. 1779–1791 A.L. Burlingame, P.T. Holland, W.H. McFadden, B.R. Simoneit, J.T. Wilder, P.C. Wszolek UCB Space Sciences Laboratory Organic Cleanroom and Lunar Material Transfer Facilities. Space Sciences Laboratory Report, University of California, Berkeley (1971) D.S. Burnett, The Genesis solar wind sample return mission: Past, present, and future. Meteorit. Planet. Sci. 48, 2351–2370 (2013) D.S. Burnett, B.L. Barraclough, R. Bennett, M. Neugebauer, L.P. Oldham, C.N. Sasaki, D. Sevilla, N. Smith, E. Stansbery, D. Sweetnam, R.C. Wiens, The Genesis Discovery mission: Return of solar matter to Earth. Space Sci. Rev. 105, 509–534 (2003) A.S. Burton, D.P. Glavin, J.E. Elsila, J.P. Dworkin, P. Jenniskens, Q.Z. Yin, The amino acid composition of the Sutter's Mill CM2 carbonaceous chondrite. Meteorit. Planet. Sci. 49, 2074–2086 (2014) CABRI, Laboratory procedures for microorganisms: Survey of examples of detailed protocols for the different types of preservation methods, in Common Access to Biotechnological Resources and Information Consortium (1998) http://www.cabri.org/guidelines/micro-organisms/M300Ap500.html. Accessed April 24, 2019 M.J. Calaway, Lunar processing cabinet 2.0: Retrofitting gloveboxes into the 21st century, in Lunar and Planetary Science Conference XLVI, March 16–20, The Woodlands, TX, Abstract #1492 (2015) M.J. Calaway, C.C. Allen, Cryogenic curation: Isolated technology and mission operational requirements for sample return, in 76th Annual Meteoritical Society Meeting, Edmonton Alberta, Canada (2013) M.J. Calaway, D.S. Burnett, M.C. Rodriguez, S. Sestak, J.H. Allton, E.K. Stansbery, Decontamination of genesis array materials by UV ozone cleaning, in Lunar and Planetary Science Conference XXXVIII, Houston, TX, Abstract #1627 (2007) M.J. Calaway, M.C. Rodriguez, J.H. Allton, E.K. Stansbery, Decontaminating solar wind samples with the genesis ultra-pure water megasonic cleaner, in Lunar and Planetary Science Conference XL, Woodlands, TX, Abstract #1183 (2009) M.J. Calaway, C.C. Allen, J.H. Allton, Organic Contamination Baseline Study in NASA Johnson Space Center Astromaterials Curation Laboratories, NASA TP-2014-217393, Lyndon B. Johnson Space Center, Houston (2014), p. 108 M.J. Calaway, J.H. Allton, R.A. Zeigler, F.M. McCubbin, 50th anniversary of the world's first extraterrestrial sample receiving laboratory: The Apollo program's lunar receiving laboratory, in 48th Lunar and Planetary Science Conference, Abstract #1224 (2017) Z. Ceplecha, J. Borovicka, W.G. Elford, D.O. Revelle, R.L. Hawkes, V. Porubcan, M. 
Simek, Meteor phenomena and bodies. Space Sci. Rev. 84, 327–471 (1998) B.C. Clark, Temperature–time issues in bioburden control for planetary protection. Adv. Space Res. 34, 2314–2319 (2004) A. Colaprete, P. Schultz, J. Heldmann, D. Wooden, M. Shirley, K. Ennico, B. Hermalyn, W. Marshall, A. Ricco, R.C. Elphic, D. Goldstein, D. Summy, G.D. Bart, E. Asphaug, D. Korycansky, D. Landis, L. Sollitt, Detection of water in the LCROSS ejecta plume. Science 330, 463–468 (2010) F. Colas, B. Zanda, J. Vaubaillon, S. Bouley, C. Marmo, Y. Audureau, M.K. Kwon, J.-L. Rault, S. Caminade, P. Vernazza, French fireball network FRIPON, in Proceedings of the International Meteor Conference, Mistelbach, Austria (2015), pp. 27–30 W. Cooke, D. Moser, The status of the NASA all sky fireball network, NASA report, in Proceedings of the International Meteor Conference, 30th IMC, Sibiu, Romania (2011) M. Cooper, M.T. La Duc, A. Probst, P. Vaishampayan, C. Stam, J.N. Benardini, Y.M. Piceno, G.L. Andersen, K. Venkateswaran, Comparison of innovative molecular approaches and standard spore assays for assessment of surface cleanliness. Appl. Environ. Microbiol. 77, 5438–5444 (2011) COSPAR, COSPAR planetary protection policy (20 October 2002, as amended to 24 March 2011), in COSPAR/IAU Workshop on Planetary Protection, COSPAR (2011) L.A. Dauphin, B.D. Moser, M.D. Bowen, Evaluation of five commercial nucleic acid extraction kits for their ability to inactivate Bacillus anthracis spores and comparison of DNA yields from spores and spiked environmental samples. J. Microbiol. Methods 76, 30–37 (2009) B.H. Day, P.A. Bland, R. Sayers, Fireballs in the sky: Citizen science with the desert fireball network, in 49th Lunar and Planetary Science Conference, Abstract #2229 (2018a) J.M.D. Day, J. Maria-Benavides, F.M. McCubbin, R.A. Zeigler, The potential for metal contamination during Apollo lunar sample curation. Meteorit. Planet. Sci. 53, 1283–1291 (2018b) S.F. Dermott, J.C. Liou, Detection of asteroidal dust particles from known families in near-Earth orbits, in AIP Conference Proceedings, vol. 3, no. 1 (AIP, New York, 1994) J.P. Dworkin, L.A. Adelman, T. Ajluni, A.V. Andronikov, J.C. Aponte, A.E. Bartels, E. Beshore, E.B. Bierhaus, J.R. Brucato, B.H. Bryan, A.S. Burton, M.P. Callahan, S.L. Castro-Wallace, B.C. Clark, S.J. Clemett, H.C. Connolly, W.E. Cutlip, S.M. Daly, V.E. Elliott, J.E. Elsila, H.L. Enos, D.F. Everett, I.A. Franchi, D.P. Glavin, H.V. Graham, J.E. Hendershot, J.W. Harris, S.L. Hill, A.R. Hildebrand, G.O. Jayne, R.W. Jenkens, K.S. Johnson, J.S. Kirsch, D.S. Lauretta, A.S. Lewis, J.J. Loiacono, C.C. Lorentson, J.R. Marshall, M.G. Martin, L.L. Matthias, H.L. McLain, S.R. Messenger, R.G. Mink, J.L. Moore, K. Nakamura-Messenger, J.A. Nuth, C.V. Owens, C.L. Parish, B.D. Perkins, M.S. Pryzby, C.A. Reigle, K. Righter, B. Rizk, J.F. Russell, S.A. Sandford, J.P. Schepis, J. Songer, M.F. Sovinski, S.E. Stahl, K. Thomas-Keprta, J.M. Vellinga, M.S. Walker, OSIRIS-REx contamination control strategy and implementation. Space Sci. Rev. 214, 53 (2018) ECSS, Space Product Assurance: Microbial Examination of Flight Hardware and Cleanrooms. European Cooperation for Space Standardization, ECSS-Q-ST-70-55C, Noordwijk, The Netherlands (2008) W.N. Edwards, D.W. Eaton, P.G. Brown, Seismic observations of meteors: Coupling theory and observations. Rev. Geophys. 46, 21 (2008) J.E. Elsila, D.P. Glavin, J.P. Dworkin, Cometary glycine detected in samples returned by Stardust. Meteorit. Planet. Sci. 44, 1323–1330 (2009) D.A. Flory, B.R. 
Simoneit, Terrestrial contamination in Apollo lunar samples. Space Life Sci. 3, 457–468 (1972) G.J. Flynn, L.P. Keller, C. Jacobsen, S. Wirick, An assessment of the amount and types of organic matter contributed to the Earth by interplanetary dust, in Space Life Sciences: Steps Toward Origin(S) of Life, ed. by M.P. Bernstein, M. Kress, R. NavarroGonzalez (Pergamon, Kidlington, 2004), pp. 57–66 J.M. Friedrich, H.L. McLain, J.P. Dworkin, D.P. Glavin, W.H. Towbin, M. Hill, D.S. Ebel, Effect of polychromatic X-ray microtomography imaging on the amino acid content of the Murchison CM chondrite. Meteorit. Planet. Sci. 54, 220–228 (2019) M. Fries, J. Fries, Doppler weather radar as a meteorite recovery tool. Meteorit. Planet. Sci. 45, 1476–1487 (2010) M. Fries, C. Laird, M. Hankey, J. Fries, R. Matson, V. Reddy, Estimation of meteorite fall mass and other properties from weather radar data, in 80th Annual Meeting of the Meteoritical Society, Abstract #6251 (2017) C.W. Gehrke, R.W. Zumwalt, W.A. Aue, D.L. Stalling, A. Duffield, K.A. Kvenvolden, C. Ponnamperuma, Carbon compounds in lunar fines from Mare Tranquillitatis; III, Organosiloxanes in hydrochloric acid hydrolysates, in Apollo 11 Lunar Science Conference, Proc., vol. 2 (Pergamon, New York, 1970), pp. 1845–1856 C.W. Gehrke, R.W. Zumwalt, K. Kuo, W.A. Aue, D.L. Stalling, K.A. Kvenvolden, C. Ponnamperuma, Amino acid analyses of Apollo 14 samples, in Lunar Science Conference, 3rd, Proc., vol. 3 (Pergamon, Oxford, 1972), pp. 2119–2129 G.R. Gladstone, D.M. Hurley, K.D. Retherford, P.D. Feldman, W.R. Pryor, J.-Y. Chaufray, M. Versteeg, T.K. Greathouse, A.J. Steffl, H. Throop, J.W. Parker, D.E. Kaufmann, A.F. Egan, M.W. Davis, D.C. Slater, J. Mukherjee, P.F. Miles, A.R. Hendrix, A. Colaprete, S.A. Stern, LRO-LAMP observations of the LCROSS impact plume. Science 330, 472–476 (2010) M.M. Grady, Meteorites from cold and hot deserts: How many, how big, and what sort, in Workshop on Extraterrestrial Materials from Cold and Hot Deserts. Lunar and Planetary Institute, Houston, Workshop on Extraterrestrial Materials from Cold and Hot Deserts, ed. by L.F.I.A. Schultz, A.M. Reid, M.E. Zolensky (2000), pp. 36–40 M.M. Grady, A.B. Verchovsky, I.A. Franchi, I.P. Wright, C.T. Pillinger, Light element geochemistry of the Tagish Lake CI2 chondrite; comparison with CI1 and CM2 meteorites. Meteorit. Planet. Sci. 37, 713–735 (2002) M. Grimaldo, Decontamination with cold plasma activated ionized hydrogen peroxide: Does it behave like a gas? in 60th Annual Biological Safety Conference, ABSA 1717B (2017) M. Gritsevich, E. Lyytinen, J. Moilanen, T. Kohout, V. Dmitriev, V. Lupovka, V. Midtskogen, N. Kruglikov, A. Ischenko, G. Yakovlev, V. Grokhovsky, J. Haloda, P. Halodova, J. Peltoniemi, A. Aikkila, A. Taavitsainen, J. Lauanne, M. Pekkola, P. Kokko, P. Lahtinen, M. Larionov, First meteorite recovery based on observations by the Finnish Fireball Network, in Proceedings of the International Meteor Conference, Giron, France, 18–21 September 2014 (2014), pp. 162–169 E. Grosjean, G.A. Logan, Incorporation of organic contaminants into geochemical samples and an assessment of potential sources: Examples from Geoscience Australia marine survey S282. Org. Geochem. 38, 853–869 (2007) J. Guardiola, V. Rojo, G. Ramos, Influence of particle size, fluidization velocity and relative humidity on fluidized bed electrostatics. J. Electrost. 37, 1–20 (1995) C.W. Haberle, L.A.J. 
Garvie, Extraterrestrial formation of oldhamite and portlandite through thermal metamorphism of calcite in the Sutter's Mill carbonaceous chondrite. Am. Mineral. 102, 2415–2421 (2017) I. Halliday, A.T. Blackwell, A.A. Griffin, The Innisfree meteorite and the Canadian camera network. J. R. Astron. Soc. Can. 72, 15–39 (1978) M. Hallworth, Rapid microbiological monitoring in pharmaceutical environments, in Environmental Monitoring: A Comprehensive Handbook, vol. 6, ed. by J. Moldenhaer (PDA, Bethesda, 2012), p. 93 T. Haltigin, C. Lange, R. Mugnolo, C. Smith (co-chairs), H. Amundsen, P. Bousquet, C. Conley, A. Debus, J. Dias, P. Falkner, V. Gass, A-M. Harri, E. Hauber, A.B. Ivanov, A.O. Ivanov, G. Kminek, O. Korablev, D. Koschny, J. Larranaga, B. Marty, S. McLennan, M. Meyer, E. Nilsen, P. Orleanski, R. Orosei, D. Rebuffat, F. Safa, N. Schmitz, S. Siljeström, N. Thomas, J. Vago, A-C. Vandaele, T. Voirin, C. Whetsel, Astrobiology (2018). https://doi.org/10.1089/ast.2018.29027.mars R.D. Hanna, R.A. Ketcham, X-ray computed tomography of planetary materials: A primer and review of recent studies. Chem. Erde 77, 547–572 (2017) P.E. Hare, K. Harada, S.W. Fox, Analyses for amino acids in lunar fines, in Apollo 11 Lunar Science Conference, Proc., vol. 2 (Pergamon, New York, 1970), pp. 1799–1803 R. Harvey, The origin and significance of Antarctic meteorites. Chem. Erde 63, 93–147 (2003) W. Henderson, W.C. Kray, W.A. Newman, W.E. Reed, B.R. Simoneit, M. Calvin, Study of carbon compounds in Apollo 11 and Apollo 12 returned lunar samples, in Lunar Science Conference, 2nd, Proc., vol. 2 (Pergamon, Oxford, 1971), pp. 1901–1912 C.D.K. Herd, A. Blinova, D.N. Simkus, Y. Huang, R. Tarozo, C.M.O. Alexander, F. Gyngard, L.R. Nittler, G.D. Cody, M.L. Fogel, Y. Kebukawa, A.L.D. Kilcoyne, R.W. Hilts, G.F. Slater, D.P. Glavin, J.P. Dworkin, M.P. Callahan, J.E. Elsila, B. De Gregorio, R.M. Stroud, Origin and evolution of prebiotic organic matter as inferred from the Tagish Lake meteorite. Science 332, 1304–1307 (2011) C.D.K. Herd, R.W. Hilts, A.W. Skelhorne, D.N. Simkus, Cold curation of pristine astromaterials: Insights from the Tagish Lake meteorite. Meteorit. Planet. Sci. 51, 499–519 (2016) R.W. Hilts, C.D.K. Herd, D.N. Simkus, G.F. Slater, Soluble organic compounds in the Tagish Lake meteorite. Meteorit. Planet. Sci. 49, 526–549 (2014) K.B. Hindley, M.A. Houlden, The British Fireball Network. Meteoritics 12, 257 (1977) G.W. Hodgson, E. Bunnenberg, B. Halpern, E. Peterson, K.A. Kvenvolden, C. Ponnamperuma, Carbon compounds in lunar fines from Mare Tranquillitatis; II, Search for porphyrins, in Apollo 11 Lunar Science Conference, Proc., vol. 2 (Pergamon, New York, 1970), pp. 1829–1844 G.W. Hodgson, E. Bunnenberg, B. Halpern, E. Peterson, K.A. Kvenvolden, C. Ponnamperuma, Lunar pigments; porphyrin-like compounds from an Apollo 12 sample, in Lunar Science Conference, 2nd, Proc., vol. 2 (Pergamon, Oxford, 1971), pp. 1865–1874 J.M.C. Holt, J.C. Bridges, J. Vrublevskis, F. Gaubert, Double walled isolator technology for Mars sample return facilities, in Proceedings of the 50th Lunar and Planetary Science Conference, Woodlands, TX, #2408 (2019) B. Hubad, A. Lapanje, The efficient method for simultaneous monitoring of the culturable as well as nonculturable airborne microorganisms. PLoS ONE 8, 9 (2013) L.A. Hug, B.J. Baker, K. Anantharaman, C.T. Brown, A.J. Probst, C.J. Castelle, C.N. Butterfield, A.W. Hernsdorf, Y. Amano, K. Ise, Y. Suzuki, N. Dudek, D.A. Relman, K.M. Finstad, R. Amundson, B.C. Thomas, J.F. 
Banfield, A new view of the tree of life. Nat. Microbiol. 1, 6 (2016) IEST-STD-CC1246E, Product cleanliness levels—Applications, requirements, and determination. Institute of Environmental Sciences and Technology, Schaumburg, IL, USA (2013) ISO14698, Cleanrooms and associated controlled environments—Biocontamination control—Part 1: General Principles and Methods, ISO 14698-1:2003 (2003) A. Jambon, Bronze Age iron: Meteoritic or not? A chemical strategy. J. Archaeol. Sci. 88, 47–53 (2017) P. Jenniskens, M.D. Fries, Q.Z. Yin, M. Zolensky, A.N. Krot, S.A. Sandford, D. Sears, R. Beauford, D.S. Ebel, J.M. Friedrich, K. Nagashima, J. Wimpenny, A. Yamakawa, K. Nishiizumi, Y. Hamajima, M.W. Caffee, K.C. Welten, M. Laubenstein, A.M. Davis, S.B. Simon, P.R. Heck, E.D. Young, I.E. Kohl, M.H. Thiemens, M.H. Nunn, T. Mikouchi, K. Hagiya, K. Ohsumi, T.A. Cahill, J.A. Lawton, D. Barnes, A. Steele, P. Rochette, K.L. Verosub, J. Gattacceca, G. Cooper, D.P. Glavin, A.S. Burton, J.P. Dworkin, J.E. Elsila, S. Pizzarello, R. Ogliore, P. Schmitt-Kopplin, M. Harir, N. Hertkorn, A. Verchovsky, M. Grady, K. Nagao, R. Okazaki, H. Takechi, T. Hiroi, K. Smith, E.A. Silber, P.G. Brown, J. Albers, D. Klotz, M. Hankey, R. Matson, J.A. Fries, R.J. Walker, I. Puchtel, C.T.A. Lee, M.E. Erdman, G.R. Eppich, S. Roeske, Z. Gabelica, M. Lerche, M. Nuevo, B. Girten, S.P. Worden (Sutter's Mill Meteorite C.), Radar-enabled recovery of the Sutter's Mill meteorite, a carbonaceous chondrite regolith breccia. Science 338, 1583–1587 (2012) P. Jenniskens, J. Albers, C.E. Tillier, S.F. Edgington, R.S. Longenbaugh, S.J. Goodman, S.D. Rudlosky, A.R. Hildebrand, L. Hanton, F. Ciceri, R. Nowell, E. Lyytinen, D. Hladiuk, D. Free, N. Moskovitz, L. Bright, C.O. Johnston, E. Stern, Detection of meteoroid impacts by the Geostationary Lightning Mapper on the GOES-16 satellite. Meteorit. Planet. Sci. 53, 2445–2469 (2018) L. Jimenez, Molecular applications to pharmaceutical processes and cleanroom environments. PDA J. Pharm. Sci. Technol. 65, 242–253 (2011) A.J.G. Jurewicz, D.S. Burnett, R.C. Wiens, T.A. Friedmann, C.C. Hays, R.J. Hohlfelder, K. Nishiizumi, J.A. Stone, D.S. Woolum, R. Becker, A.L. Butterworth, A.J. Campbell, M. Ebihara, I.A. Franchi, V. Heber, C.M. Hohenberg, M. Humayun, K.D. McKeegan, K. McNamara, A. Meshik, R.O. Pepin, D. Schlutter, R. Wieler, The Genesis solar-wind collector materials. Space Sci. Rev. 105, 535–560 (2003) Y. Kebukawa, S. Nakashima, T. Otsuka, K. Nakamura-Messenger, M.E. Zolensky, Rapid contamination during storage of carbonaceous chondrites prepared for micro FTIR measurements. Meteorit. Planet. Sci. 44, 545–557 (2009) C.G. Keller, R.T. Howe, Hexsil tweezers for teleoperated micro-assembly, in Proceedings IEEE the Tenth Annual International Workshop on Micro Electro Mechanical Systems. An Investigation of Micro Structures, Sensors, Actuators, Machines and Robots (IEEE, Nagoya, 1997), pp. 72–77 W. Kinard, R. O'Neal, Long Duration Exposure Facility (LDEF) results, in 29th Aerospace Sciences Meeting, Abstract #1971 (American Institute of Aeronautics and Astronautics, Washington, 1991) B.V. King, I.V. Veryovkin, A.V. Zinovev, C.E. Tripa, M.J. Pellin, N. Toyoda, M. Schmeling, Ion beam removal of surface contamination in Genesis samples, in Lunar and Planetary Science Conference XXXXI, Abstract #1975 (2010) G.I. Kokhirova, J. Borovička, Observations of the 2009 Leonid activity by the Tajikistan fireball network. Astron. Astrophys. 533, 6 (2011) K.R. Kuhlman, M.C. Rodriquez, C.P. Gonzalez, J.H. Allton, D.S. 
Burnett, Cleaning study of genesis sample 60487, in Lunar and Planetary Science Conference XXXXIV, Abstract #2930 (2013) K.A. Kvenvolden, S. Chang, J.W. Smith, J. Flores, K. Pering, C. Saxinger, F. Woeller, K. Keil, I. Breger, C. Ponnamperuma, Carbon compounds in lunar fines from Mare Tranquillitatis; I, Search for molecules of biological significance, in Apollo 11 Lunar Science Conference, Proc., vol. 2 (Pergamon, New York, 1970), pp. 1813–1828 K. Kwan, M. Cooper, M.T. La Duc, P. Vaishampayan, C. Stam, J.N. Benardini, G. Scalzi, C. Moissl-Eichinger, K. Venkateswaran, Evaluation of procedures for the collection, processing, and analysis of biomolecules from low-biomass surfaces. Appl. Environ. Microbiol. 77, 2943–2953 (2011) M.T. La Duc, A. Dekas, S. Osman, C. Moissl, D. Newcombe, K. Venkateswaran, Isolation and characterization of bacteria capable of tolerating the extreme conditions of cleanroom environments. Appl. Environ. Microbiol. 73, 2600–2611 (2007) M.T. La Duc, S. Osman, P. Vaishampayan, Y. Piceno, G. Andersen, J.A. Spry, K. Venkateswaran, Comprehensive census of bacteria in cleanrooms by using DNA microarray and cloning methods. Appl. Environ. Microbiol. 75, 6559–6567 (2009) M.T. La Duc, K. Venkateswaran, C.A. Conley, A genetic inventory of spacecraft and associated surfaces. Astrobiology 14, 15–23 (2014) D.S. Lauretta, S.S. Balram-Knutson, E. Beshore, W.V. Boynton, C. Drouet d'Aubigny, D.N. DellaGiustina, H.L. Enos, D.R. Golish, C.W. Hergenrother, E.S. Howell, C.A. Bennett, E.T. Morton, M.C. Nolan, B. Rizk, H.L. Roper, A.E. Bartels, B.J. Bos, J.P. Dworkin, D.E. Highsmith, D.A. Lorenz, L.F. Lim, R. Mink, M.C. Moreau, J.A. Nuth, D.C. Reuter, A.A. Simon, E.B. Bierhaus, B.H. Bryan, R. Ballouz, O.S. Barnouin, R.P. Binzel, W.F. Bottke, V.E. Hamilton, K.J. Walsh, S.R. Chesley, P.R. Christensen, B.E. Clark, H.C. Connolly, M.K. Crombie, M.G. Daly, J.P. Emery, T.J. McCoy, J.W. McMahon, D.J. Scheeres, S. Messenger, K. Nakamura-Messenger, K. Righter, S.A. Sandford, OSIRIS-REx: Sample return from Asteroid (101955) Bennu. Space Sci. Rev. 212, 925–984 (2017) L. Le Roy, K. Altwegg, H. Balsiger, J.-J. Berthelier, A. Bieler, C. Briois, U. Calmonte, M.R. Combi, J. De Keyser, F. Dhooghe, B. Fiethe, S.A. Fuselier, S. Gasc, T.I. Gombosi, M. Hässig, A. Jäckel, M. Rubin, C.-Y. Tzou, Inventory of the volatiles on comet 67P/Churyumov-Gerasimenko from Rosetta/ROSINA. Astron. Astrophys. 583, A1 (2015) S.R. Lipsky, R.J. Cushley, C.G. Horvath, W.J. McMurray, Analysis of lunar material for organic compounds, in Apollo 11 Lunar Science Conference, Proc., vol. 2 (Pergamon, New York, 1970), pp. 1871–1873 J.T. Lonsdale, The Pena Blanca Spring meteorite, Brewster County, Texas. Am. Mineral. 32, 354–364 (1947) M.D.J. Lynch, J.D. Neufeld, Ecology and exploration of the rare biosphere. Nat. Rev. Microbiol. 13, 217–229 (2015) A. Mahnert, P. Vaishampayan, A.J. Probst, A. Auerbach, C. Moissl-Eichinger, K. Venkateswaran, G. Berg, Cleanroom maintenance significantly reduces abundance but not diversity of indoor microbiomes. PLoS ONE 10, 20 (2015) P.M. Martin, A.A. Mills, Size and shape of chondrules in the Bjurböle and Chainpur meteorites. Earth Planet. Sci. Lett. 33, 239–248 (1976) U.B. Marvin, Meteorites in history: An overview from the Renaissance to the 20th century. Geol. Soc. (Lond.) Spec. Publ. 256, 15–71 (2006) S. Matsusaka, H. Maruyama, T. Matsuyama, M. Ghadiri, Triboelectric charging of powders: A review. Chem. Eng. Sci. 65, 5781–5807 (2010) G.J.H. McCall, A.J. Bowden, R.J. 
Howarth, The history of meteoritics—Overview. Geol. Soc. (Lond.) Spec. Publ. 256, 1–13 (2006) T.J. McCoy, Hopewell meteoritic metal beads: Clues to trade 2,000 years ago. Elements 14, 360–361 (2018) T.J. McCoy, A.E. Marquardt, J.T. Wasson, R.D. Ash, E.P. Vicenzi, The Anoka, Minnesota iron meteorite as parent to Hopewell meteoritic metal beads from Havana, Illinois. J. Archaeol. Sci. 81, 13–22 (2017) J.C. McLane, E.A. King, D.A. Flory, K.A. Richardson, J.P. Dawson, W.W. Kemmerer, Lunar receiving laboratory. Science 155, 525–529 (1967) S.M. McLennan, M.A. Sephton, C. Allen, A.C. Allwood, R. Barbieri, D.W. Beaty, P. Boston, M. Carr, M. Grady, J. Grant, V.S. Heber, C.D.K. Herd, B. Hofmann, P. King, N. Mangold, G.G. Ori, A.P. Rossi, F. Raulin, S.W. Ruff, B. Sherwood Lollar, S. Symes, M.G. Wilson, Planning for Mars returned sample science: Final report of the MSR End-to-End International Science Analysis Group (E2E-iSAG). Astrobiology 12, 175–230 (2011) K.M. McNamara, E.K. Stansbery, Analysis of molecular contamination on genesis collectors through spectroscopic ellipsometry, in 36th Lunar and Planetary Science Conference, Abstract #2402 (2005) R.M. Mecikalski, L.D. Carey, Radar reflectivity and altitude distributions of lightning flashes as a function of three main storm types. J. Geophys. Res., Atmos. 123, 12814–12828 (2018) W.G. Meinschein, T.J. Jackson, J.M. Mitchell, E. Cordes, V.J. Shiner Jr., Search for alkanes of 15–30 carbon atom length in lunar fines, in Apollo 11 Lunar Science Conference, Proc., vol. 2 (Pergamon, New York, 1970), pp. 1875–1877 S. Messenger, Opportunities for the stratospheric collection of dust from short-period comets. Meteorit. Planet. Sci. 37, 1491–1505 (2002) S. Messenger, K. Nakamura-Messenger, L.P. Keller, S.J. Clemett, Pristine stratospheric collection of interplanetary dust on an oil-free polyurethane foam substrate. Meteorit. Planet. Sci. 50, 1468–1485 (2015) E.T. Mickelson, Cleaning and Cleanliness Verification Techniques for Mars Returned Sample Handling. An MRSH Document, NASA JSC 29738 (2002a) E.T. Mickelson, Molecular Contamination Control: A Unified Cleaning and Verification Strategy for a Sample Receiving Facility. An MRSH Document, NASA JSC 29689 (2002b) MIL-STD-1246C, Military Standard: Product Cleanliness Levels and Contamination Control Program. Department of Defense (1994) S.D. Miller, W.C. Straka, A.S. Bachmeier, T.J. Schmit, P.T. Partain, Y.J. Noh, Earth-viewing satellite perspectives on the Chelyabinsk meteor event. Proc. Natl. Acad. Sci. 110, 18092–18097 (2013) J.J. Minich, Q.Y. Zhu, S. Janssen, R. Hendrickson, A. Amir, R. Vetter, J. Hyde, M.M. Doty, K. Stillwell, J. Benardini, J.H. Kim, E.E. Allen, K. Venkateswaran, R. Knight, KatharoSeq enables high-throughput microbiome analysis from low-biomass samples. mSystems 3, 16 (2018) K.R. Mitchell, C.D. Takacs-Vesbach, A comparison of methods for total community DNA preservation and extraction from various thermal environments. J. Ind. Microbiol. Biotech. 35, 1139–1147 (2008) C. Moissl, J.C. Bruckner, K. Venkateswaran, Archaeal diversity analysis of spacecraft assembly cleanrooms. ISME J. 2, 115–119 (2008) C. Moissl-Eichinger, Archaea in artificial environments: Their presence in global spacecraft cleanrooms and impact on planetary protection. ISME J. 5, 209–219 (2011) C. Moissl-Eichinger, A.K. Auerbach, A.J. Probst, A. Mahnert, L. Tom, Y. Piceno, G.L. Andersen, K. Venkateswaran, P. Rettberg, S. Barczyk, R. Pukall, G. Berg, Quo vadis? 
This work was supported, in part, by NASA's Science Mission Directorate. We are grateful for the editorial handling of this manuscript by Sara Russell, and this work was improved on the basis of input from the editor as well as two anonymous reviewers. This work is dedicated to the women and men that have worked tirelessly to make sample return missions possible, to the countless astromaterials curation personnel that have processed and cared for astromaterials collections, and to the members of the scientific community that study these carefully curated astromaterials.
McCubbin, F.M., Herd, C.D.K., Yada, T. et al. Advanced Curation of Astromaterials for Planetary Science. Space Sci Rev 215, 48 (2019). https://doi.org/10.1007/s11214-019-0615-9
Equilibrium states of the pressure function for products of matrices
De-Jun Feng (Department of Mathematics, The Chinese University of Hong Kong, Shatin, Hong Kong) and Antti Käenmäki (Department of Mathematics and Statistics, P.O. Box 35 (MaD), FI-40014, University of Jyväskylä, Finland)
Discrete & Continuous Dynamical Systems - A, August 2011, 30(3): 699-708. doi: 10.3934/dcds.2011.30.699. Received April 2010; revised October 2010; published March 2011.

Let $\{M_i\}_{i=1}^l$ be a non-trivial family of $d\times d$ complex matrices, in the sense that for any $n\in \mathbb{N}$ there exists $i_1\cdots i_n\in \{1,\ldots, l\}^n$ such that $M_{i_1}\cdots M_{i_n}\ne 0$. Let $P : (0,\infty)\to \mathbb{R}$ be the pressure function of $\{M_i\}_{i=1}^l$. We show that for each $q>0$ there are at most $d$ ergodic $q$-equilibrium states of $P$, and each of them satisfies a certain Gibbs property.

Keywords: Thermodynamical formalism, products of matrices, equilibrium states. Mathematics Subject Classification: Primary: 37D35; Secondary: 34D2.

Citation: De-Jun Feng, Antti Käenmäki. Equilibrium states of the pressure function for products of matrices. Discrete & Continuous Dynamical Systems - A, 2011, 30 (3): 699-708. doi: 10.3934/dcds.2011.30.699
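For readers coming to this abstract from outside the thermodynamic formalism literature, it may help to recall how the pressure function of a matrix family is usually defined. The following is the standard sub-additive formulation used in this line of work (with the operator norm), stated here as background and not as a quotation of this particular paper:

$$P(q) = \lim_{n\to\infty} \frac{1}{n}\log \sum_{i_1\cdots i_n\in\{1,\ldots,l\}^n} \left\| M_{i_1}M_{i_2}\cdots M_{i_n}\right\|^{q}, \qquad q>0,$$

where the limit exists by sub-additivity and the non-triviality assumption guarantees that the sum is strictly positive for every $n$. A $q$-equilibrium state is then a shift-invariant measure $\mu$ on $\{1,\ldots,l\}^{\mathbb{N}}$ attaining the supremum in the associated sub-additive variational principle,

$$P(q) = \sup_{\mu}\left( h_{\mu}(\sigma) + q\,\lim_{n\to\infty}\frac{1}{n}\int \log\left\| M_{x_1}\cdots M_{x_n}\right\| \, d\mu(x)\right),$$

where $\sigma$ is the left shift and $h_{\mu}(\sigma)$ its measure-theoretic entropy; the abstract's result bounds the number of ergodic maximizers by the matrix dimension $d$.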
Fueling of a marine-terrestrial ecosystem by a major seabird colony
J. Hentati-Sundberg1, C. Raymond2, M. Sköld1, O. Svensson2, B. Gustafsson3,4 & S. Bonaglia2,5
Scientific Reports volume 10, Article number: 15455 (2020)

Seabirds redistribute nutrients between different ecosystem compartments and over vast geographical areas. This nutrient transfer may impact both local ecosystems on seabird breeding islands and regional biogeochemical cycling, but these processes are seldom considered in local conservation plans or biogeochemical models. The island of Stora Karlsö in the Baltic Sea hosts the largest concentration of piscivorous seabirds in the region, and also hosts a large colony of insectivorous House martins Delichon urbicum adjacent to the breeding seabirds. We show that a previously reported unusually high insectivore abundance was explained by large amounts of chironomids, highly enriched in δ15N, that feed on seabird residues as larvae along rocky shores and eventually emerge as flying adults. Benthic ammonium and phosphate fluxes were up to 163% and 153% higher close to the colony (1,300 m distance) than further away (2,700 m), and the estimated nutrient release from the seabirds was of the same order of magnitude as the loads from the largest waste-water treatment plants in the region. The trophic cascade impacting insectivorous passerines and the substantial redistribution of nutrients suggest that seabird nutrient transfer should be increasingly considered in local conservation plans and regional nutrient cycling models.

Animals can act as powerful biological pumps and transfer nutrients, trace elements and environmental contaminants between ecosystems1,2,3,4,5. Seabirds are arguably the most pertinent present-day vectors of compounds from marine to terrestrial ecosystems6,7 and have been demonstrated to enhance production and alter dynamics of local ecosystems adjacent to breeding colonies8,9,10. This general marine-terrestrial ecosystem coupling can have a number of cascade effects on biological communities. For example, seabird derived nutrients have been shown to result in complex changes in spiders and insects, and general productivity increases in terrestrial plants10,11,12,13. Other trophic cascades involving terrestrial-marine linkages are insectivorous passerines benefitting from salmon runs in Canadian rivers14, and the introduction of foxes, which has diminished seabird populations and thereby changed plant communities on the Aleutian islands15. The diversity of pathways and often surprising effects in the above-mentioned studies calls for further empirical research, with a future goal of a general understanding of the role of seabirds in local ecological cascade effects and regional nutrient cycling.

The ecological significance of seabird driven nutrient fluxes has often been studied in nutrient poor terrestrial ecosystems, where the addition of marine nitrogen (N) and phosphorus (P) has led to increases in productivity and diversity in plant communities10,11,16. The majority of the global population of seabirds breed in areas with low terrestrial productivity, and nutrient driven productivity increases have hence been interpreted to have positive effects on nutrient-limited terrestrial ecosystems10,11. Less focus has been given to areas where nutrients may leak to surrounding coastal ecosystems17,18, and especially to systems where the point sources of seabirds may aggravate already existing problems with eutrophication.
Eutrophication is a widespread global problem in lakes and semi-enclosed seas19, and many areas affected by coastal eutrophication problems also have important seabird colonies, such as Japan and the countries around the Baltic Sea and North Sea7. Combatting eutrophication is an issue high on the political agenda in many areas of the world, and effective measures require a solid scientific background on bio-geochemical dynamics, including the role of seabirds as nutrient vectors19,20.

Feather samples from juvenile piscivorous seabirds (Common murres, Uria aalge) and insectivorous passerines (House martins, Delichon urbica) collected in the largest seabird colony in the Baltic Sea, the island of Stora Karlsö, have previously been shown to have a striking similarity in their δ13C isotopic ratio, suggesting a common (marine) provenance of nutrients21. The density of House martins on the island is also one of the highest recorded throughout the species' distribution, and the suggested pathway for the high density has been hyperabundance of chironomids21. Chironomids, or non-biting midges, are insects whose larvae live in the aquatic environment and feed on microalgae, detritus and organic matter (such as seabird excrements), eventually emerging as flying adults and thus becoming available as food to insectivorous birds22. In this study, we clarify the ecological and bio-geochemical pathways contributing to the trophic cascade from seabirds to insectivores, by asking the following questions:

- What is the pathway by which seabird excrements feed the terrestrial foodweb?
- How do seabird excrements affect nutrient fluxes in soft bottom sediments?
- How does seabird N and P loading to the pelagic ecosystem compare to other point sources in the region?

We hypothesized that seabird nutrients would be traceable in both near-shore rocky and deep-water soft-bottom habitats, that chironomid larvae grow up in both habitats, and that the nutrients redistributed by seabirds are comparable to anthropogenic nutrient point sources in the region. The study contributes to the knowledge on ecosystem effects of seabird colonies by: (1) exploring biological and biogeochemical pathways in which nutrient release can lead to cascade effects on the terrestrial ecosystem, and (2) describing and quantifying effects of seabird colony nutrient release in the context of an ecosystem suffering from eutrophication.

The study was performed on the island of Stora Karlsö (57°17′N, 17°58′E) (Fig. 1), which is the largest seabird colony in the Baltic Sea23. The two most abundant seabird species on Stora Karlsö are Common Murre Uria aalge (15,700 breeding pairs) and Razorbill Alca torda (12,000 breeding pairs)23, of which approximately 9,000 pairs breed along a 300 m long narrow cliff edge indicated as V1 in Fig. 1c. The two species forage on sprat Sprattus sprattus and herring Clupea harengus and have a foraging range during the breeding period of approximately 2,000 km224. The breeding period is from mid-April to early August and takes place on limestone cliffs 5–40 m above sea level, 0–15 m from the shoreline. House martins breed on the lighthouse building on Stora Karlsö, 20 m from the cliff edge under which the seabird ledges are distributed. The number of breeding pairs in recent years has been around 150, which is one of the largest colonies in Europe of this declining species21.

Figure 1. Study area.
(a) Baltic Sea, (b) Island of Gotland, with Stora Karlsö indicated as a green rectangle, and sites for reference samples indicated as blue asterisks, (c) Island of Stora Karlsö with sediment sampling stations S1–S4 and rocky shore sampling stations V1 and V2. Guano samples were collected on the cliffs just above the V1 station.

Water and sediment samples
Water and sediment samples were taken from onboard R/V Electra, a 24 m research vessel from Stockholm University, in April 2017. To locate suitable areas for sediment samples, two scientific echo-sounders were used: a multibeam Kongsberg EM2040, 0.4° × 0.7°, 200–400 kHz, and a Kongsberg Topas PS40, 24 channels, parametric (35–45 kHz/1–10 kHz). The bottom substrate around the colony mainly consisted of hard bottom (sedimentary rocky and gravelly shores) with smaller hollows where soft sediment could accumulate. Such soft sediment areas were in the size range of 5,000 to 25,000 m2, located at 1,300–2,700 m distance from the main seabird colony and at depths of 65–69 m, and the sediment sampling stations (S1–S4, Fig. 1) were located within these areas. A CTD cast was performed at S1–S4 to record oxygen concentrations, temperature and salinity in the whole water column, including the bottom water in situ.

Collection and preparation of biological samples
Guano samples were taken from active breeding ledges of Common Murres in May 2017 in the Karlsö Auk lab (a man-made breeding facility for Murres and Razorbills)25, by scraping off material using a spatula, and were then frozen to −18 °C until preparation for isotopic analysis. Efforts to sample macrofauna in the soft bottom sediments were performed in April 2017 at stations S1–S4. One van Veen grab (0.1 m2) was taken at each station and sieved through two fractions of 1 mm sieves following the European standard (ISO 16665:2014). No macrofauna was found in the samples. Samples from the rocky shore habitat (stations V1 and V2, Fig. 1c) were collected in April 2017, and samples at three reference sites (Fig. 1b) were collected in July and September 2018. Samples were collected by a snorkeler who scraped macroalgae into a bag (mesh size < 0.5 mm), taking 6–14 subsamples from each station and preserving the contents in 90% ethanol. The major macroalgae taxa were Fucus vesiculosus and Cladophora glomerata. Chironomid larvae were sorted out from the macroalgae subsamples in the laboratory for isotopic analyses, and the isotopic signal of each subsample was analysed separately to obtain averages by station. Stations V1 and V2 were at a distance of 10 and 500 m from the main seabird colony, respectively, whereas the distances to the reference stations all exceeded 15,000 m. All samples were collected at a distance from shore of 2–10 m. The Baltic Sea has no tide.

Sediment coring procedures
Sediments at S1–S4 were sampled by means of a Kajak corer (tube length: 50 cm, internal diameter: 8 cm), which provided a nearly undisturbed sediment surface. At each station, one bottom water sample was taken from the Kajak core and the sediment was then immediately sliced at the following resolution: 1 cm slices for the first 4 cm, and then 2 cm slices from 4 to 10 cm. Each sediment slice was transferred into a 50 mL Falcon tube and centrifuged at 670×g (2,500 rpm) for 15 min to extract porewater. The bottom water and porewater samples were collected with a clean plastic syringe and filtered (0.45 μm polyethersulfone filter) into a 10 mL polypropylene tube.
The tubes were stored at −20 °C until later analyses of dissolved ammonium (NH4+), phosphate (PO43−) and nitrate plus nitrite (NO3− + NO2−). At each station, a second Kajak core was sliced at a resolution of 1 cm for the first 4 cm, and samples were stored at −20 °C for later determination of organic geochemistry parameters (org C, N, δ13C, δ15N signatures and sediment porosity).

Isotopic ratio (δ13C and δ15N) analyses of biological and sediment samples
Sediment and biological (bird faeces and chironomid larvae) samples were freeze dried, ground, homogenized and weighed in tin capsules. Analyses of biological samples were performed with a PDZ Europa ANCA-GSL elemental analyzer (1,000 °C combustion) connected to a PDZ Europa 20-20 isotope ratio mass spectrometer (IRMS, Sercon Ltd.), while sediment samples were analyzed on an Elementar Vario EL Cube elemental analyzer (Elementar Analysensysteme GmbH) (1,080 °C combustion) connected to the same IRMS system. Isotopic compositions were reported using the conventional δ notation26, which reports the isotopic composition of a sample as the ‰ deviation of the sample relative to Vienna Peedee belemnite (VPDB) for δ13C and to atmospheric N2 for δ15N. Samples were regularly interspersed with two different laboratory standards, which were previously calibrated against NIST Standard Reference Materials (IAEA-600, USGS-40, USGS-41, USGS-42, USGS-43, USGS-61, USGS-64 and USGS-65). Based on the analyses of these standards, the analytical precision was ±0.2‰ for δ13C and ±0.3‰ for δ15N.

Porewater analyses and diffusive flux calculations
Porewater samples were thawed, diluted 1:10, and NH4+, NOx (= NO3− + NO2−) and PO43− were analyzed on a segmented flow autoanalyzer system (ALPKEM, Flow Solution IV) following the standard methods for seawater analyses27. Precision was ±0.036 µmol L−1 for NH4+, ±0.014 µmol L−1 for NOx and ±0.016 µmol L−1 for PO43−. As there was no macrofauna in the sediment, porewater profile shapes could be easily modeled to calculate diffusive fluxes of NH4+ and PO43−, while NOx profiles did not show any generalizable trend and were not modeled. Profiles were modeled with the numerical interpretation by Berg et al.28, which provides the best fit to a measured concentration profile assuming steady state conditions and returns diffusive fluxes across the sediment–water interface (SWI) as a function of depth. We assumed that biological diffusivity (movement of solutes due to bioturbation) was zero as macrofauna was absent, so that diffusive sediment–water fluxes (J) could be calculated according to Fick's First Law of diffusion:

$$J = -\varphi D_{s} \frac{\delta C}{\delta x}$$

where φ is sediment porosity, Ds is molecular diffusivity in sediment, C is the solute concentration determined analytically and x is sediment depth. Porosity was estimated from sediment water content, which was calculated by measuring the wet and dry weight of 5 mL sediment aliquots after drying them at 105 °C. Ds was calculated according to the equations reported by Iversen and Jørgensen29.
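To make the flux calculation above concrete, the following is a minimal sketch, not the authors' actual code, of how a diffusive efflux at the sediment–water interface can be estimated from a measured porewater profile with Fick's First Law. The concentration values, porosity and free-solution diffusion coefficient are illustrative placeholders, the simple two-point gradient is a simplification of the Berg et al.28 profile-fitting procedure actually used in the study, and the porosity correction Ds = D0/(1 + 3(1 − φ)) is an assumption about which of the Iversen and Jørgensen relations would apply to these muddy sediments.

```python
import numpy as np

def sediment_diffusivity(d0, porosity, n=3):
    """Porosity-corrected diffusivity in sediment.

    Uses the empirical relation Ds = D0 / (1 + n * (1 - porosity)), one of the
    corrections attributed to Iversen and Jorgensen (1993); n = 3 is commonly
    assumed for fine-grained (muddy) sediments.
    """
    return d0 / (1.0 + n * (1.0 - porosity))

def diffusive_efflux(depth_cm, conc_umol_L, porosity, d0_cm2_s):
    """Diffusive sediment-water flux from Fick's First Law, J = -phi * Ds * dC/dx.

    depth_cm     : depths of the two uppermost samples (cm), increasing downward
    conc_umol_L  : porewater concentrations (umol L-1) at those depths
    porosity     : sediment porosity near the interface (dimensionless)
    d0_cm2_s     : free-solution diffusion coefficient (cm2 s-1), placeholder value

    Returns the flux in umol m-2 d-1; positive values mean release from the
    sediment to the overlying water.
    """
    ds = sediment_diffusivity(d0_cm2_s, porosity)
    # Concentration gradient near the interface, converted from
    # umol L-1 per cm to umol cm-3 per cm (1 L = 1000 cm3).
    dc_dx = (conc_umol_L[1] - conc_umol_L[0]) / (depth_cm[1] - depth_cm[0]) / 1000.0
    j_umol_cm2_s = -porosity * ds * dc_dx  # Fick's First Law, x positive downward
    # Convert to umol m-2 d-1 (1 m2 = 1e4 cm2; 1 d = 86400 s) and flip the sign
    # so that transport out of the sediment is reported as a positive efflux.
    return -j_umol_cm2_s * 1e4 * 86400.0

# Hypothetical NH4+ values resembling a near-colony station: bottom water at
# -0.5 cm, first sediment slice centred at 0.5 cm (illustrative numbers only).
depths = np.array([-0.5, 0.5])        # cm
nh4 = np.array([5.0, 40.0])           # umol L-1
flux = diffusive_efflux(depths, nh4, porosity=0.90, d0_cm2_s=1.0e-5)
print(f"Estimated NH4+ efflux: {flux:.0f} umol m-2 d-1")
```

The same two-point estimate applies unchanged to the PO43− profiles; the published analysis instead fits the whole profile with the Berg et al.28 procedure, which additionally resolves production and consumption zones at depth.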
Quantification of total nutrient emissions
We compiled literature values on guano production as well as nitrogen and phosphorus concentrations in guano to calculate the nitrogen and phosphorus emissions from the colony:

$${\text{Em}}_{i} = {\text{Pop}} \times {\text{DG}} \times {\text{PC}} \times {\text{Conc}}_{i}$$

where Em is the total emission of nutrient i (nitrogen or phosphorus) [g], Pop is the population size of the two seabird species, DG is the daily guano output per individual and day [g day−1 dry mass], PC is the time each seabird individual spent in the colony, and Conc is the concentration of nutrient i in bird faeces. Population sizes were taken from a recent census23 and time spent in the colony was estimated from observation studies of adult birds feeding chicks30. Daily guano production was taken from previously reported data on Thick-billed murre, a closely related species of the same breeding ecology and size as Common murre31. Nutrient content in guano was taken from a number of published studies on seabirds (Table S1), where we used the median, 5th and 95th percentiles to calculate confidence intervals for the daily seabird nutrient emissions (a computational sketch of this calculation is given below, after the Figure 3 caption).

Seabird impacts on chironomid abundance
The samples of macroalgae near the seabird colony included large numbers of Chironomidae larvae which were highly enriched in δ15N (Fig. 2a). Also the δ13C signal was seen as a gradient, with increasing values at increasing distance from the colony (Fig. 2b). The deep-water soft bottom sediments did not host any living macrofauna at all. We observed no effect of distance to the bird colony on the δ15N signal in the sediments (Fig. 2c), whereas the δ13C signal was seen as a gradient with decreasing values at increasing distance from the colony (Fig. 2d). Although the oxygen concentrations measured would not prevent the presence of typical Baltic Sea macrofauna in the sediments, monthly data on dissolved oxygen from a nearby oceanographic sampling station (57.116 N, 17.667 E) revealed anoxic events in early 2016 and possibly also in mid 2016, i.e. 1–1.5 years prior to the benthic sampling in this study (Fig. S1).

Figure 2. Stable isotopes as a function of distance to the seabird colony. (a) δ15N in chironomids (rocky shore habitat), (b) δ13C in chironomids (rocky shore habitat), (c) δ15N in sediments, and (d) δ13C in sediments. "Source" refers to seabird guano sampled inside the colony. "Ref." refers to reference samples (island of Gotland, Fig. 1b). Black triangles indicate raw data at different stations and red circles denote means by station. For station descriptions see Fig. 1c. All δ values are given as ‰.

Porewater nutrient profiles and fluxes in soft-bottom sediments
Porewater NH4+ and PO43− concentrations increased with depth in the sediment and were highly elevated at the sediment station closest to the seabird colony (S1), reaching ca. 300 and 80 µmol L−1 at 9 cm depth, respectively (Fig. 3). Concentrations of the two solutes were at a lower level at the three other stations (S2‒S4), where they did not exceed 170 and 40 µmol L−1 at 9 cm depth, respectively (Fig. 3). Porewater NOx− concentrations were negligible, and even in the top oxidized layer (0.5 cm layer) they were < 1.5 µmol L−1 (data not shown). Bottom waters at all sediment stations were low in oxygen (1.7–2.4 ml L−1) but not anoxic. The four stations had similar salinity (8.1–8.2‰) and water temperature (5.0–5.1 °C).

Figure 3. Nutrient concentrations in sediments at different sediment depths. (a) NH4+ and (b) PO43−. −0.5 cm (top data value) refers to bottom water.
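As flagged in the Methods above, the following is a minimal sketch of the colony-emission calculation Em_i = Pop × DG × PC × Conc_i, not the authors' actual script. All parameter values below are illustrative placeholders rather than the values compiled in Table S1, and PC is treated here as the fraction of the day an individual spends in the colony, which is an assumption about how that term enters the calculation.

```python
import numpy as np

def daily_emission_kg(population, guano_g_dry_per_day, colony_fraction, conc_percentiles):
    """Daily colony emission of one nutrient, Em = Pop * DG * PC * Conc.

    population           : number of individuals releasing guano at the colony
    guano_g_dry_per_day  : dry-mass guano output per individual and day (g)
    colony_fraction      : fraction of the day spent at the colony (assumed meaning of PC)
    conc_percentiles     : (5th, median, 95th) nutrient content of guano as a
                           fraction of dry mass (e.g. 0.10 for 10% N)
    Returns (low, median, high) emissions in kg per day.
    """
    conc = np.asarray(conc_percentiles)
    em_g = population * guano_g_dry_per_day * colony_fraction * conc
    return em_g / 1000.0  # g -> kg

# Illustrative inputs only (not the study's compiled values):
adults = 2 * (15_700 + 12_000)  # two adults per breeding pair of Murres and Razorbills
nitrogen = daily_emission_kg(adults, guano_g_dry_per_day=50.0, colony_fraction=0.6,
                             conc_percentiles=(0.05, 0.10, 0.13))
phosphorus = daily_emission_kg(adults, guano_g_dry_per_day=50.0, colony_fraction=0.6,
                               conc_percentiles=(0.005, 0.010, 0.030))
print("N emission (kg/day), 5th / median / 95th:", np.round(nitrogen, 1))
print("P emission (kg/day), 5th / median / 95th:", np.round(phosphorus, 1))
```

The percentile bookkeeping mirrors how the reported 95% confidence intervals propagate the spread in published guano nutrient concentrations; with the values actually compiled in Table S1 (not shown here), the same arithmetic underlies the 393 kg N day−1 and 37 kg P day−1 estimates reported in the Results below.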
The diffusive fluxes of both NH4+ and PO43− were higher at S1 compared to the other three stations (S2‒S4) (Fig. 4), suggesting a stronger release of nutrients in proximity to the colony compared to the more offshore stations.

Figure 4. Modelled fluxes in soft-bottom sediments. (a) NH4+, and (b) PO43−.

Nutrient release from seabirds
Daily release from seabirds in the colony was estimated at 393 kg N day−1, 95% C.I. [166–483], and 37 kg P day−1, 95% C.I. [8–69] (Fig. 5). The wide confidence intervals are caused by the large variation in the estimates of N and P concentrations of seabird guano (Table S1). Nevertheless, the numbers indicate that the seabirds' P emissions are of a similar order of magnitude to those of the largest waste water treatment plants, and that the N emissions are of the same magnitude as a number of mid-size treatment plants in the Baltic Sea (Fig. 5).

Figure 5. Estimated daily nutrient release from the Stora Karlsö seabird colony compared to other major Baltic Sea point sources (waste-water treatment plants, WWTP). (a) nitrogen, and (b) phosphorus. WWTP data were obtained from32. Abbreviations in x-labels: SE = Sweden, PL = Poland, LT = Lithuania, DE = Germany.

Seabirds can act as powerful biological pumps by moving chemical compounds across wide distances and are thereby significant drivers of biogeochemical cycling7 and biovectors of environmental contaminants6. We show that the N and P release from a seabird colony in the eutrophic Baltic Sea is of the same order of magnitude as that of waste water treatment plants32, and thereby a driver of regional nutrient cycling. These significant releases have contrasting effects in two habitats. Along rocky shores, the release led to high production of chironomid larvae, and thereby an enhanced food supply to insectivorous House martins breeding on the island that feed on adult (flying) chironomids21. In deep-water sediments surrounding the colony, nutrient release did not support macrofaunal production, probably because of seasonal anoxia, and these sediments acted as sources of dissolved N and P to the water column. These contrasting effects on different ecosystem compartments arise from dynamics at different spatial and temporal scales that interact in a complex manner (Fig. 6).

Figure 6. Conceptual visualization of the main processes investigated in this paper. The Stora Karlsö lighthouse with house martin nests is shown at the top right, with the seabird colony located at the cliff edges at the shoreside. Illustration by Fredrik Saarkoppel / Kobolt Media AB.

How do seabirds support increased insect production?
This study was originally conceived based on an observation of the unusually large colony of House martins (i.e., insectivorous passerines) on the lighthouse adjacent to the seabird colony (Fig. 6), something that was later linked to the seabirds via isotopic analysis of feather samples21. Here we expand on these findings by investigating the pathways by which the seabird derived nutrients affect the surrounding terrestrial and marine ecosystems. We found that the nutrients can be traced both along rocky shores and in deep-water sediments, but it was only along the rocky shores that the nutrients supported chironomid production. The isotopic signature of N (δ15N) in chironomids on the rocky shores shows a decreasing trend with distance from the colony, which indicates seabird derived nutrients in the chironomid production.
Our visual observations from the island suggest extremely high concentrations of flying chironomids during the spring months, which probably support not only House martins but insectivorous bird species on the island in general. The seabirds are thus increasing the availability of nutrients in support of local biological production in the near-shore marine habitats, which cascades back to the terrestrial (island) ecosystem. Our results reinforce earlier studies that have shown strong and sometimes unexpected effects of seabirds on local flora and fauna10,11,12,13. We believe that such seabird mediated cross-scale ecosystem interactions are often overlooked, and should be considered more generally, e.g., in constructing management plans for protected areas33,34,35.

Seabird effects on sediment fluxes
The deep-water sediments near the seabird colony were strongly enriched in nutrients but did not support any macrofaunal production. Although our measured oxygen concentrations would not prevent the presence of typical Baltic Sea macrofauna, regularly returning anoxic events preceding the sampling were the probable reason behind the lack of macrofauna in this habitat. Since we sieved the sediments with 1.0 mm sieves, we cannot exclude that smaller invertebrates (e.g., meiofauna, larval stages of macrofaunal organisms, etc.) were actually living in these sediments, but this is not the focus of this study as they cannot explain the link between the marine habitat and terrestrial birds. Another potential explanation for the lack of chironomids in the sediments would be that larval growth occurs later in the season; however, chironomids are known to have extended growing seasons with overlapping generations and are thus expected to be detected throughout the year36, and the emergence of adults in the Baltic Sea occurs in May and June37, which means that larvae, if present in the habitat, would have been detected in April.

The nutrient data alone cannot say whether there was a direct fertilization effect by the birds or if the enrichment was due to some indirect effects. However, the δ13C signals in the guano were very similar to those in the sediment samples from the station closest to the colony, which strongly indicates a direct C fertilization effect. The observed gradient of decreasing δ13C with increasing distance from the colony could theoretically be an effect of decreasing input of terrestrial dissolved organic matter (DOM)38,39. However, the type of benthic ecosystem at our offshore study site is thought to be much more affected by benthic-pelagic coupling than by terrestrial DOM input40, by which we can be relatively certain that the δ13C is a function of colony distance rather than general DOM input variability. The δ15N signals in the sediments (ca. +3‰) were lower than those from Baltic coastal settings affected by human sewage discharge (+7‒8‰) as reported by Bonaglia et al.41. The low values of the present study are likely supported by a high contribution of surface N2-fixing cyanobacterial blooms and subsequent deposition of this biomass. There is a strong link between hypoxia and cyanobacterial blooms in the Baltic Sea42. Altogether, this suggests that N fertilization from bird guano was not reflected in N burial, but was rather supporting N recycling, which exacerbates hypoxia and cyanobacteria blooms. Our nutrient data indicate that the near-colony sediments were acting as stronger sources of both nitrogen and phosphorus to the water mass than sediments further away.
This was due to the combination of high ammonium and phosphate concentrations in the porewater environment and low oxygen. With more oxygen, essential nutrients such as phosphate would remain in the sediments and possibly lead to biological production including chironomid larvae, i.e. a strengthened link between the seabirds and the surrounding benthic and terrestrial ecosystem compartments43. With even less oxygen than under present conditions, there would be more phosphate and ammonium present both in the sediment porewater and in the water column, as these compounds would be prevented from precipitating (PO43−), oxidizing (NH4+) or binding to sediment particles (NH4+), and they would tend to escape from the sediment to the water phase to an even greater extent than under present conditions44. In Baltic Sea sediments affected by hypoxia, NH4+ is generally produced at high rates in the sediment and efficiently exchanged to the water column because of high rates of dissimilatory nitrate reduction to ammonium (DNRA), anaerobic mineralization of organic matter and ammonification41. In these conditions, additionally, NH4+ adsorption to sediment particles is generally limited since the water content is extremely high and the sites of NH4+ exchange are likely saturated45. In hypoxic Baltic sediments, phosphate desorption from iron oxyhydroxides leads to high sediment–water fluxes of PO43− and generally high PO43− concentrations in the bottom water42.

Seabird fertilization in relation to regional nutrient cycling
The majority of studies on seabird colony nutrient enrichment have been performed in nutrient poor ecosystems10,11,16. We study a strongly eutrophic system, an offshore area in the Baltic Sea, where under normal circumstances a high proportion of the biological production in the pelagic ecosystem sinks to the bottom40 and contributes to oxygen consumption at depths of 60‒70 m and below46. Seabirds' foraging movements thus relieve the effect of eutrophication in their foraging areas by removing pelagic fish biomass, while at the same time contributing to eutrophication around the colony (Fig. 6). The estimated daily release from the seabird colony of 393 kg N and 37 kg P day−1 assumes that all the N and P excreted as guano by the birds ends up in the Baltic Sea. However, a part of these nutrients, especially the very reactive N, will be lost via volatilization of ammonia and especially by denitrification in the anoxic, deposited faeces on the island7. In case of significant denitrification happening in the guano, the N content should be < 10% according to a study of cave guano47. However, the N% of our guano samples, and especially the average value (11.2%), was above that threshold. We thus conclude that overall decomposition and N loss were negligible compared to the quantity of N leaking into the Baltic Sea. The total emission from the colony is of the same order of magnitude as the major anthropogenic point sources in the Baltic Sea, keeping in mind that the birds do not add nutrients to the system but concentrate them to small and distinct areas. Nevertheless, internal nutrient fluxes in the Baltic Sea are of similar magnitude or even larger than external inputs48. Thus, our findings suggest the need to consider previously overlooked drivers of nutrient cycling, such as seabird foraging, especially when studying processes on smaller (< 10,000 km2) spatial scales.
Colonial seabirds forage over vast geographic areas but release the majority of their excrements at their colonies, leading in our case to a local nutrient release of the same order of magnitude as major point sources considered in management plans to mitigate eutrophication. We can track the effect of this nutrient release in two completely different habitats adjacent to the colony (macroalgae growing along rocky shores and muddy deep-water sediments), where it leads to contrasting effects on biological production. The magnitude of the seabird nutrient transfer and local enrichment motivates increased consideration of these processes in regional biogeochemical modelling. Furthermore, our results suggest that the success of terrestrial biodiversity conservation on seabird islands may be conditioned by the supply of marine derived nutrients, which calls for a better integration between marine and terrestrial management and conservation plans. Modelling studies suggest that seabirds' nutrient transfers are globally significant, but the ramifications of these cross-scale ecosystem linkages for terrestrial ecosystem management and conservation are yet to be described and quantified.

The datasets generated from this study are available in the Mendeley data repository, https://doi.org/10.17632/vzt7cj9th5.2 and https://doi.org/10.17632/dj9pnpdv8d.1

References
1. Blais, J. M. et al. Arctic seabirds transport marine-derived contaminants. Science 309, 445 (2005).
2. Qin, X. et al. From sea to land: assessment of the bio-transport of phosphorus by penguins in Antarctica. Chin. J. Oceanol. Limnol. 32, 148–154 (2014).
3. Nicol, S. et al. Southern Ocean iron fertilization by baleen whales and Antarctic krill. Fish Fish. 11, 203–209 (2010).
4. Doughty, C. E. et al. Global nutrient transport in a world of giants. Proc. Natl. Acad. Sci. 113, 868–873 (2016).
5. Macavoy, S. E., Garman, G. C. & Macko, S. A. Anadromous fish as marine nutrient vectors. Fish. Bull. 107, 165–174 (2009).
6. Michelutti, N. et al. Seabird-driven shifts in Arctic pond ecosystems. Proc. R. Soc. B Biol. Sci. 276, 591–596 (2009).
7. Otero, X. L., De La Peña-Lastra, S., Pérez-Alberti, A., Ferreira, T. O. & Huerta-Diaz, M. A. Seabird colonies as important global drivers in the nitrogen and phosphorus cycles. Nat. Commun. 9, 246 (2018).
8. Ellis, J. R., Fariña, J. M. & Witman, J. D. Nutrient transfer from sea to land: the case of gulls and cormorants in the Gulf of Maine. J. Anim. Ecol. 75, 565–574 (2006).
9. Kolb, G. S., Ekholm, J. & Hambäck, P. A. Effects of seabird nesting colonies on algae and aquatic invertebrates in coastal waters. Mar. Ecol. Prog. Ser. 417, 287–300 (2010).
10. Anderson, W. & Polis, G. Nutrient fluxes from water to land: seabirds affect plant nutrient status on Gulf of California islands. Oecologia 118, 324–332 (1999).
11. Zwolicki, A., Zmudczyńska-Skarbek, K. M., Iliszko, L. & Stempniewicz, L. Guano deposition and nutrient enrichment in the vicinity of planktivorous and piscivorous seabird colonies in Spitsbergen. Polar Biol. 36, 363–372 (2013).
12. Duda, M. P. et al. Long-term changes in terrestrial vegetation linked to shifts in a colonial seabird population. Ecosystems https://doi.org/10.1007/s10021-020-00494-8 (2020).
13. Kolb, G. S., Jerling, L. & Hambäck, P. A. The impact of cormorants on plant-arthropod food webs on their nesting islands. Ecosystems 13, 353–366 (2010).
14. Christie, K. S., Hocking, M. D. & Reimchen, T. E. Tracing salmon nutrients in riparian food webs: isotopic evidence in a ground-foraging passerine. Can. J. Zool. 86, 1317–1323 (2008).
Tracing salmon nutrients in riparian food webs: isotopic evidence in a ground-foraging passerine. Can. J. Zool. 86, 1317–1323 (2008). Maron, J. L. et al. An introduced predator alters Aleutian Island plant communities by thwarting nutrient subsidies. Ecol. Monogr. 76, 3–24 (2006). Wainright, S. C., Haney, J. C., Kerr, C., Golovkin, A. N. & Flint, M. V. Utilization of nitrogen derived from seabird guano by terrestrial and marine plants at St. Paul, Pribilof Islands, Bering Sea, Alaska. Mar. Biol. 131, 63–71 (1998). Lorrain, A. et al. Seabirds supply nitrogen to reef-building corals on remote Pacific islets. Sci. Rep. 7, 1–11 (2017). Gagnon, K., Rothäusler, E., Syrjänen, A., Yli-Renko, M. & Jormalainen, V. Seabird guano fertilizes Baltic Sea littoral food webs. PLoS ONE 8, e61284 (2013). Diaz, R. J. & Rosenberg, R. Spreading dead zones and consequences for marine ecosystems. Science 321, 926–929 (2008). Gustafsson, B. et al. Reconstructing the development of Baltic sea eutrophication 1850–2006. Ambio 41, 534–548 (2012). Cross, A. D. P., Hentati-Sundberg, J., Österblom, H., McGill, R. A. R. & Furness, R. W. Isotopic analysis of island House Martins Delichon urbica indicates marine provenance of nutrients. Ibis 156, 676–681 (2014). Armitage, P. D., Cranston, P. S. & Pinder, L. C. V. The Chironomidae. Biology and Ecology of Non-biting Midges (Springer, New York, 1995). Olsson, O. & Hentati-Sundberg, J. Population trends and status of four seabird species (Uria aalge, Alca torda, Larus fuscus, Larus argentatus) at Stora Karlsö in the Baltic Sea. Ornis Svecica 27, 64–93 (2017). Hentati-Sundberg, J. et al. Fish and seabird spatial distribution and abundance around the largest seabird colony in the baltic sea. Mar. Ornithol. 46, 61–68 (2018). Hentati-Sundberg, J., Österblom, H., Kadin, M., Jansson, Å & Olsson, O. The Karlsö Murre lab methodology can stimulate innovative seabird research. Mar. Ornithol. 40, 11–16 (2012). Bond, A. L. & Hobson, K. A. Reporting stable-isotope ratios in ecology: recommended terminology, guidelines and best practices. Waterbirds 35, 324–331 (2012). Grasshoff, K., Kremling, K. & Ehrhardt, M. Methods of Seawater Analysis. Wiley, Hoboken. https://doi.org/10.1016/0043-1354(85)90057-0 (2009). Berg, P., Risgaard-Petersen, N. & Rysgaard, S. Interpretation of measured concentration profiles in sediment pore water. Limnol. Oceanogr. 43, 1500–1510 (1998). Iversen, N. & Jørgensen, B. B. Diffusion coefficients of sulfate and methane in marine sediments: Influence of porosity. Geochim. Cosmochim. Acta 57, 571–578 (1993). Berglund, P. A. Evaluating ten years of ecological seabird research in the Baltic Sea. (MSc thesis, Stockholm University, 2016). Brekke, B. & Gabrielsen, G. W. Assimilation efficiency of adult Kittiwakes and Brünnich's Guillemots fed Capelin and Arctic Cod. Polar Biol. 14, 279–284 (1994). HELCOM. Sources and pathways of nutrients to the Baltic Sea. Balt. Sea Environ. Proc. 153, 48 (2018). Lescroël, A. et al. Seeing the ocean through the eyes of seabirds: a new path for marine conservation?. Mar. Policy 68, 212–220 (2016). Yorio, P. Marine protected areas, spatial scales, and governance: implications for the conservation of breeding seabirds. Conserv. Lett. 2, 171–178 (2009). Länsstyrelsen Gotlands Län. Bevarandeplan för Natura 2000-området SE0340023 Stora Karlsö. (2018). Pinder, L. C. V. Biology of freshwater chironomidae. Annu. Rev. Entomol. 31, 1–23 (1986). Hirvenoja, M., Palmén, E. & Hirvenoja, E.
The emergence of Halocladius variabilis (Staeger) (Diptera: Chironomidae) in the surroundings of the Tvärminne Biological Station in the northern Baltic Sea. Entomol. Fenn. 17, 87–89 (2006). Voss, M., Larsen, B., Leivuori, M. & Vallius, H. Stable isotope signals of eutrophication in Baltic Sea sediments. J. Mar. Syst. 25, 287–298 (2000). Deutsch, B., Alling, V., Humborg, C., Korth, F. & Mörth, C. M. Tracing inputs of terrestrial high molecular weight dissolved organic matter within the Baltic Sea ecosystem. Biogeosciences 9, 4465–4475 (2012). Griffiths, J. R. et al. The importance of benthic-pelagic coupling for marine ecosystem functioning in a changing world. Glob. Chang. Biol. 23, 2179–2196 (2017). Bonaglia, S., Deutsch, B., Bartoli, M., Marchant, H. K. & Brüchert, V. Seasonal oxygen, nitrogen and phosphorus benthic cycling along an impacted Baltic Sea estuary: regulation and spatial patterns. Biogeochemistry 119, 139–160 (2014). Bianchi, T. S. et al. Cyanobacterial blooms in the Baltic Sea: Natural or human-induced?. Limnol. Oceanogr. 45, 716–726 (2000). Gunnars, A. & Blomqvist, S. Phosphate exchange across the sediment-water interface when shifting from anoxic to oxic conditions: an experimental comparison of freshwater and brackish-marine systems. Biogeochemistry 37, 203–226 (1997). Brook, B. W., Ellis, E. C., Perring, M. P., Mackay, A. W. & Blomqvist, L. Does the terrestrial biosphere have planetary tipping points?. Trends Ecol. Evol. 28, 396–401 (2013). Mackin, J. E. & Aller, R. C. Ammonium adsorption in marine sediments. Limnol. Oceanogr. 29, 250–257 (1984). Carstensen, J., Andersen, J. H., Gustafsson, B. & Conley, D. J. Deoxygenation of the Baltic Sea during the last century. Proc. Natl. Acad. Sci. 111, 5628–5633 (2014). Cleary, D. M., Onac, B. P., Forray, F. L. & Wynn, J. G. Effect of diet, anthropogenic activity, and climate on δ15N values of cave bat guano. Palaeogeogr. Palaeoclimatol. Palaeoecol. 461, 87–97 (2016). Vahtera, E. et al. Internal ecosystem feedbacks enhance nitrogen-fixing cyanobacteria blooms and complicate management in the Baltic Sea. Ambio 36, 186–194 (2007).
The authors would like to thank the crew onboard R/V Electra for support during field work. Reference biological samples were collected by Susanne Qvarfordt at DEEP, Stockholm University for the Swedish National Monitoring Program funded by the Swedish Agency for Marine and Water Management, and by Aron Hejdström. Karlsö Jagt- och Djurskyddsförenings AB supported field work. Field sampling within the Nature Reserve Stora Karlsö was regulated through a decision by the Gotland County Administrative Board, diary number 521-3763-2017. The abiotic data in Fig. S1 originate from the publicly available Swedish Oceanographic Archive (SHARK) database hosted by the Swedish Agency for Marine and Water Management and the Swedish Meteorological and Hydrological Institute (https://sharkweb.smhi.se/), data accessed 2020-03-25. Funding was provided by the Stockholm University Baltic Sea Science Centre through the project "Baltic Ecosystem Adaptive Management" (BEAM). S.B. was supported by funding from the Swedish Research Council Formas (Grant No. 2017-01513). Open access funding provided by Swedish University of Agricultural Sciences.
Department of Aquatic Resources, Institute of Marine Research, Swedish University of Agricultural Sciences, Turistgatan 5, 45330, Lysekil, Sweden J. Hentati-Sundberg & M. Sköld Department of Ecology, Environment and Plant Sciences, Stockholm University, Stockholm, Sweden C. Raymond, O. Svensson & S. Bonaglia Baltic Nest Institute, Baltic Sea Centre, Stockholm University, Stockholm, Sweden B. Gustafsson Tvärminne Zoological Station, University of Helsinki, Hanko, Finland Department of Marine Sciences, University of Gothenburg, Gothenburg, Sweden S. Bonaglia J. Hentati-Sundberg C. Raymond M. Sköld O. Svensson Conceived of or designed study (J.H.S., C.R., S.B.); Performed research (J.H.S., C.R., M.S., O.S., S.B.); Analyzed data (C.R., O.S., S.B.); Wrote the paper (J.H.S., C.R., M.S., O.S., B.G., S.B.). Correspondence to J. Hentati-Sundberg. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Hentati-Sundberg, J., Raymond, C., Sköld, M. et al. Fueling of a marine-terrestrial ecosystem by a major seabird colony. Sci Rep 10, 15455 (2020). https://doi.org/10.1038/s41598-020-72238-6 Accepted: 25 August 2020
Kurt Gödel
Kurt Friedrich Gödel (/ˈɡɜːrdəl/ GUR-dəl,[2] German: [kʊʁt ˈɡøːdl̩]; April 28, 1906 – January 14, 1978) was a logician, mathematician, and philosopher. Considered along with Aristotle and Gottlob Frege to be one of the most significant logicians in history, Gödel had an immense effect upon scientific and philosophical thinking in the 20th century, a time when others such as Bertrand Russell,[3] Alfred North Whitehead,[3] and David Hilbert were using logic and set theory to investigate the foundations of mathematics, building on earlier work by the likes of Richard Dedekind, Georg Cantor and Frege.
Kurt Gödel (Gödel c. 1926) Born Kurt Friedrich Gödel, April 28, 1906, Brünn, Austria-Hungary (now Brno, Czech Republic) Died January 14, 1978 (aged 71), Princeton, New Jersey, U.S. Citizenship • Austria • Czechoslovakia • Germany • United States Alma mater University of Vienna (PhD, 1930) Known for • Gödel's incompleteness theorems • Gödel's completeness theorem • Gödel's constructible universe • Gödel metric (closed timelike curve) • Gödel logic • Gödel–Dummett logic • Gödel's β function • Gödel numbering • Gödel operation • Gödel's speed-up theorem • Gödel's ontological proof • Gödel–Gentzen translation • Gödel–McKinsey–Tarski translation • Von Neumann–Bernays–Gödel set theory • ω-consistent theory • The consistency of the continuum hypothesis with ZFC • Axiom of constructibility • Compactness theorem • Condensation lemma • Diagonal lemma • Dialectica interpretation • Ordinal definable set • Slingshot argument Spouse Adele Nimbursky (m. 1938) Awards • Albert Einstein Award (1951) • ForMemRS (1968)[1] • National Medal of Science (1974) Scientific career Fields Mathematics, mathematical logic, analytic philosophy, physics Institutions Institute for Advanced Study Thesis Über die Vollständigkeit des Logikkalküls (1929) Doctoral advisor Hans Hahn Influences • Leibniz • Kant • Husserl • Plato
Gödel's discoveries in the foundations of mathematics led to the proof of his completeness theorem in 1929 as part of his dissertation to earn a doctorate at the University of Vienna, and the publication of Gödel's incompleteness theorems two years later, in 1931. The first incompleteness theorem states that for any ω-consistent recursive axiomatic system powerful enough to describe the arithmetic of the natural numbers (for example, Peano arithmetic), there are true propositions about the natural numbers that can be neither proved nor disproved from the axioms.[4] To prove this, Gödel developed a technique now known as Gödel numbering, which codes formal expressions as natural numbers. The second incompleteness theorem, which follows from the first, states that the system cannot prove its own consistency.[5] Gödel also showed that neither the axiom of choice nor the continuum hypothesis can be disproved from the accepted Zermelo–Fraenkel set theory, assuming that its axioms are consistent. The former result opened the door for mathematicians to assume the axiom of choice in their proofs. He also made important contributions to proof theory by clarifying the connections between classical logic, intuitionistic logic, and modal logic.
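The coding idea mentioned above can be made concrete with a toy example. The sketch below uses a common textbook-style prime-power coding, not Gödel's original 1931 scheme, and the particular symbol codes are arbitrary placeholders.

# Toy Gödel numbering: a finite sequence of symbol codes (positive integers)
# is packed into one natural number as 2**c1 * 3**c2 * 5**c3 * ...
# Unique prime factorization makes the encoding reversible.

def primes(n):
    """Return the first n prime numbers by trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(symbol_codes):
    """Encode the sequence (c1, ..., ck) as the product of the i-th prime to the power c_i."""
    number = 1
    for p, code in zip(primes(len(symbol_codes)), symbol_codes):
        number *= p ** code
    return number

def decode(number):
    """Recover the exponent sequence by repeated division by successive primes."""
    codes = []
    for p in primes(64):  # more primes than any short example needs
        if number == 1:
            break
        exponent = 0
        while number % p == 0:
            number //= p
            exponent += 1
        codes.append(exponent)
    return codes

# A formula written with the (hypothetical) symbol codes 1, 3, 2:
g = godel_number([1, 3, 2])   # 2**1 * 3**3 * 5**2 = 1350
assert decode(g) == [1, 3, 2]

Gödel's actual construction goes much further: it arithmetizes not only formulas but also proofs and the provability relation itself, which is what makes the self-referential sentence of the first incompleteness theorem expressible inside the system.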
Early life and education Childhood Gödel was born April 28, 1906, in Brünn (now Brno), Austria-Hungary (now the Czech Republic), into the German-speaking family of Rudolf Gödel (1874–1929), the managing director and part owner of a major textile firm, and Marianne Gödel (née Handschuh, 1879–1966).[6] At the time of his birth the city had a German-speaking majority which included his parents.[7] His father was Catholic and his mother was Protestant and the children were raised Protestant. The ancestors of Kurt Gödel were often active in Brünn's cultural life. For example, his grandfather Joseph Gödel was a famous singer in his time and for some years a member of the Brünner Männergesangverein (Men's Choral Union of Brünn).[8] Gödel automatically became a citizen of Czechoslovakia at age 12 when the Austro-Hungarian Empire collapsed following its defeat in the First World War. According to his classmate Klepetař, like many residents of the predominantly German Sudetenländer, "Gödel considered himself always Austrian and an exile in Czechoslovakia".[9] In February 1929, he was granted release from his Czechoslovakian citizenship and then, in April, granted Austrian citizenship.[10] When Germany annexed Austria in 1938, Gödel automatically became a German citizen at age 32. In 1948, after World War II, at the age of 42, he became an American citizen.[11] In his family, the young Gödel was nicknamed Herr Warum ("Mr. Why") because of his insatiable curiosity. According to his brother Rudolf, at the age of six or seven, Kurt suffered from rheumatic fever; he completely recovered, but for the rest of his life he remained convinced that his heart had suffered permanent damage. Beginning at age four, Gödel suffered from "frequent episodes of poor health", which would continue for his entire life.[12] Gödel attended the Evangelische Volksschule, a Lutheran school in Brünn from 1912 to 1916, and was enrolled in the Deutsches Staats-Realgymnasium from 1916 to 1924, excelling with honors in all his subjects, particularly in mathematics, languages and religion. Although Gödel had first excelled in languages, he later became more interested in history and mathematics. His interest in mathematics increased when in 1920 his older brother Rudolf (born 1902) left for Vienna, where he attended medical school at the University of Vienna. During his teens, Gödel studied Gabelsberger shorthand, Goethe's Theory of Colours and criticisms of Isaac Newton, and the writings of Immanuel Kant. Studies in Vienna At the age of 18, Gödel joined his brother at the University of Vienna. By that time, he had already mastered university-level mathematics.[13] Although initially intending to study theoretical physics, he also attended courses on mathematics and philosophy.[14] During this time, he adopted ideas of mathematical realism. He read Kant's Metaphysische Anfangsgründe der Naturwissenschaft, and participated in the Vienna Circle with Moritz Schlick, Hans Hahn, and Rudolf Carnap. Gödel then studied number theory, but when he took part in a seminar run by Moritz Schlick which studied Bertrand Russell's book Introduction to Mathematical Philosophy, he became interested in mathematical logic. According to Gödel, mathematical logic was "a science prior to all others, which contains the ideas and principles underlying all sciences."[15] Attending a lecture by David Hilbert in Bologna on completeness and consistency in mathematical systems may have set Gödel's life course. 
In 1928, Hilbert and Wilhelm Ackermann published Grundzüge der theoretischen Logik (Principles of Mathematical Logic), an introduction to first-order logic in which the problem of completeness was posed: "Are the axioms of a formal system sufficient to derive every statement that is true in all models of the system?" This problem became the topic that Gödel chose for his doctoral work. In 1929, at the age of 23, he completed his doctoral dissertation under Hans Hahn's supervision. In it, he established his eponymous completeness theorem regarding the first-order predicate calculus. He was awarded his doctorate in 1930, and his thesis (accompanied by some additional work) was published by the Vienna Academy of Science. Career Incompleteness theorems Kurt Gödel's achievement in modern logic is singular and monumental—indeed it is more than a monument, it is a landmark which will remain visible far in space and time. ... The subject of logic has certainly completely changed its nature and possibilities with Gödel's achievement. — John von Neumann[16] In 1930 Gödel attended the Second Conference on the Epistemology of the Exact Sciences, held in Königsberg, 5–7 September. Here he delivered his incompleteness theorems.[17] Gödel published his incompleteness theorems in Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme (called in English "On Formally Undecidable Propositions of Principia Mathematica and Related Systems"). In that article, he proved for any computable axiomatic system that is powerful enough to describe the arithmetic of the natural numbers (e.g., the Peano axioms or Zermelo–Fraenkel set theory with the axiom of choice), that: 1. If a (logical or axiomatic formal) system is omega-consistent, it cannot be syntactically complete. 2. The consistency of axioms cannot be proved within their own system. These theorems ended a half-century of attempts, beginning with the work of Gottlob Frege and culminating in Principia Mathematica and Hilbert's Program, to find a non-relatively consistent axiomatization sufficient for number theory (that was to serve as the foundation for other fields of mathematics). In hindsight, the basic idea at the heart of the incompleteness theorem is rather simple. Gödel essentially constructed a formula that claims that it is unprovable in a given formal system. If it were provable, it would be false. Thus there will always be at least one true but unprovable statement. That is, for any computably enumerable set of axioms for arithmetic (that is, a set that can in principle be printed out by an idealized computer with unlimited resources), there is a formula that is true of arithmetic, but which is not provable in that system. To make this precise, however, Gödel needed to produce a method to encode (as natural numbers) statements, proofs, and the concept of provability; he did this using a process known as Gödel numbering. In his two-page paper Zum intuitionistischen Aussagenkalkül (1932) Gödel refuted the finite-valuedness of intuitionistic logic. In the proof, he implicitly used what has later become known as Gödel–Dummett intermediate logic (or Gödel fuzzy logic). Mid-1930s: further work and U.S. visits Gödel earned his habilitation at Vienna in 1932, and in 1933 he became a Privatdozent (unpaid lecturer) there. In 1933 Adolf Hitler came to power in Germany, and over the following years the Nazis rose in influence in Austria, and among Vienna's mathematicians. 
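In modern notation, the self-referential formula described above is usually obtained from the diagonal (fixed-point) lemma; the following is a standard textbook rendering rather than Gödel's original wording. For a sufficiently strong, effectively axiomatized theory F with provability predicate Prov_F and coding ⌜·⌝, written in LaTeX:

F \vdash\; G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F\!\left(\ulcorner G_F \urcorner\right)

If F is consistent, G_F is not provable in F; under the stronger assumption of ω-consistency (or, with Rosser's later refinement, mere consistency), \neg G_F is not provable either. The second theorem then takes the form: for consistent F satisfying the usual derivability conditions, F \nvdash \mathrm{Con}(F).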
In June 1936, Moritz Schlick, whose seminar had aroused Gödel's interest in logic, was assassinated by one of his former students, Johann Nelböck. This triggered "a severe nervous crisis" in Gödel.[18] He developed paranoid symptoms, including a fear of being poisoned, and spent several months in a sanitarium for nervous diseases.[19] In 1933, Gödel first traveled to the U.S., where he met Albert Einstein, who became a good friend.[20] He delivered an address to the annual meeting of the American Mathematical Society. During this year, Gödel also developed the ideas of computability and recursive functions to the point where he was able to present a lecture on general recursive functions and the concept of truth. This work was developed in number theory, using Gödel numbering. In 1934, Gödel gave a series of lectures at the Institute for Advanced Study (IAS) in Princeton, New Jersey, titled On undecidable propositions of formal mathematical systems. Stephen Kleene, who had just completed his PhD at Princeton, took notes of these lectures that have been subsequently published. Gödel visited the IAS again in the autumn of 1935. The travelling and the hard work had exhausted him and the next year he took a break to recover from a depressive episode. He returned to teaching in 1937. During this time, he worked on the proof of consistency of the axiom of choice and of the continuum hypothesis; he went on to show that these hypotheses cannot be disproved from the common system of axioms of set theory. He married Adele Nimbursky (née Porkert, 1899–1981), whom he had known for over 10 years, on September 20, 1938. Gödel's parents had opposed their relationship because she was a divorced dancer, six years older than he was. Subsequently, he left for another visit to the United States, spending the autumn of 1938 at the IAS and publishing Consistency of the axiom of choice and of the generalized continuum-hypothesis with the axioms of set theory,[21] a classic of modern mathematics. In that work he introduced the constructible universe, a model of set theory in which the only sets that exist are those that can be constructed from simpler sets. Gödel showed that both the axiom of choice (AC) and the generalized continuum hypothesis (GCH) are true in the constructible universe, and therefore must be consistent with the Zermelo–Fraenkel axioms for set theory (ZF). This result has had considerable consequences for working mathematicians, as it means they can assume the axiom of choice when proving the Hahn–Banach theorem. Paul Cohen later constructed a model of ZF in which AC and GCH are false; together these proofs mean that AC and GCH are independent of the ZF axioms for set theory. Gödel spent the spring of 1939 at the University of Notre Dame.[22] Princeton, Einstein, U.S. citizenship After the Anschluss on 12 March 1938, Austria had become a part of Nazi Germany. Germany abolished the title Privatdozent, so Gödel had to apply for a different position under the new order. His former association with Jewish members of the Vienna Circle, especially with Hahn, weighed against him. The University of Vienna turned his application down. His predicament intensified when the German army found him fit for conscription. World War II started in September 1939. Before the year was up, Gödel and his wife left Vienna for Princeton. 
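Stated compactly, the constructible-universe results described a few paragraphs above amount to relative consistency statements (a standard modern formulation, not a quotation from Gödel's 1940 monograph):

\mathrm{Con}(\mathrm{ZF}) \;\Rightarrow\; \mathrm{Con}(\mathrm{ZF} + V{=}L) \;\Rightarrow\; \mathrm{Con}(\mathrm{ZFC} + \mathrm{GCH})

Cohen's later forcing construction supplies the complementary direction, \mathrm{Con}(\mathrm{ZF}) \Rightarrow \mathrm{Con}(\mathrm{ZFC} + \neg\mathrm{CH}) and \mathrm{Con}(\mathrm{ZF}) \Rightarrow \mathrm{Con}(\mathrm{ZF} + \neg\mathrm{AC}), so AC and (G)CH can be neither proved nor refuted from ZF(C), assuming ZF is consistent.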
To avoid the difficulty of an Atlantic crossing, the Gödels took the Trans-Siberian Railway to the Pacific, sailed from Japan to San Francisco (which they reached on March 4, 1940), then crossed the US by train to Princeton. There Gödel accepted a position at the Institute for Advanced Study (IAS), which he had previously visited during 1933–34.[23] Albert Einstein was also living at Princeton during this time. Gödel and Einstein developed a strong friendship, and were known to take long walks together to and from the Institute for Advanced Study. The nature of their conversations was a mystery to the other Institute members. Economist Oskar Morgenstern recounts that toward the end of his life Einstein confided that his "own work no longer meant much, that he came to the Institute merely ... to have the privilege of walking home with Gödel".[24] Gödel and his wife, Adele, spent the summer of 1942 in Blue Hill, Maine, at the Blue Hill Inn at the top of the bay. Gödel was not merely vacationing but had a very productive summer of work. Using Heft 15 [volume 15] of Gödel's still-unpublished Arbeitshefte [working notebooks], John W. Dawson Jr. conjectures that Gödel discovered a proof for the independence of the axiom of choice from finite type theory, a weakened form of set theory, while in Blue Hill in 1942. Gödel's close friend Hao Wang supports this conjecture, noting that Gödel's Blue Hill notebooks contain his most extensive treatment of the problem. On December 5, 1947, Einstein and Morgenstern accompanied Gödel to his U.S. citizenship exam, where they acted as witnesses. Gödel had confided in them that he had discovered an inconsistency in the U.S. Constitution that could allow the U.S. to become a dictatorship; this has since been dubbed Gödel's Loophole. Einstein and Morgenstern were concerned that their friend's unpredictable behavior might jeopardize his application. The judge turned out to be Phillip Forman, who knew Einstein and had administered the oath at Einstein's own citizenship hearing. Everything went smoothly until Forman happened to ask Gödel if he thought a dictatorship like the Nazi regime could happen in the U.S. Gödel then started to explain his discovery to Forman. Forman understood what was going on, cut Gödel off, and moved the hearing on to other questions and a routine conclusion.[25][26] Gödel became a permanent member of the Institute for Advanced Study at Princeton in 1946. Around this time he stopped publishing, though he continued to work. He became a full professor at the Institute in 1953 and an emeritus professor in 1976.[27] During his time at the institute, Gödel's interests turned to philosophy and physics. In 1949, he demonstrated the existence of solutions involving closed timelike curves, to Einstein's field equations in general relativity.[28] He is said to have given this elaboration to Einstein as a present for his 70th birthday.[29] His "rotating universes" would allow time travel to the past and caused Einstein to have doubts about his own theory. His solutions are known as the Gödel metric (an exact solution of the Einstein field equation). He studied and admired the works of Gottfried Leibniz, but came to believe that a hostile conspiracy had caused some of Leibniz's works to be suppressed.[30] To a lesser extent he studied Immanuel Kant and Edmund Husserl. In the early 1970s, Gödel circulated among his friends an elaboration of Leibniz's version of Anselm of Canterbury's ontological proof of God's existence. 
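For reference, the rotating-dust solution mentioned above is commonly written in the following coordinate form (equivalent to Gödel's 1949 line element up to notational conventions; a is a constant setting the length scale), given here in LaTeX:

ds^2 \;=\; a^2\!\left( dt^2 - dx^2 + \tfrac{1}{2}\,e^{2x}\,dy^2 - dz^2 + 2\,e^{x}\,dt\,dy \right)

The matter source is a pressureless dust together with a negative cosmological constant, and the spacetime is homogeneous with closed timelike curves through every event, which is what permits the travel into the past described above.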
This is now known as Gödel's ontological proof. Awards and honours Gödel was awarded (with Julian Schwinger) the first Albert Einstein Award in 1951, and was also awarded the National Medal of Science in 1974.[31] Gödel was elected a resident member of the American Philosophical Society in 1961 and a Foreign Member of the Royal Society (ForMemRS) in 1968.[32][1] He was a Plenary Speaker of the ICM in 1950 in Cambridge, Massachusetts.[33] Later life and death Later in his life, Gödel suffered periods of mental instability and illness. Following the assassination of his close friend Moritz Schlick,[34] Gödel developed an obsessive fear of being poisoned, and would eat only food prepared by his wife Adele. Adele was hospitalized beginning in late 1977, and in her absence Gödel refused to eat;[35] he weighed 29 kilograms (65 lb) when he died of "malnutrition and inanition caused by personality disturbance" in Princeton Hospital on January 14, 1978.[36] He was buried in Princeton Cemetery. Adele died in 1981.[37] Religious views Gödel believed that God[38] was personal, and called his philosophy "rationalistic, idealistic, optimistic, and theological".[39] Gödel believed in an afterlife, saying, "Of course this supposes that there are many relationships which today's science and received wisdom haven't any inkling of. But I am convinced of this [the afterlife], independently of any theology." It is "possible today to perceive, by pure reasoning" that it "is entirely consistent with known facts." "If the world is rationally constructed and has meaning, then there must be such a thing [as an afterlife]."[40] In an unmailed answer to a questionnaire, Gödel described his religion as "baptized Lutheran (but not member of any religious congregation). My belief is theistic, not pantheistic, following Leibniz rather than Spinoza."[41] Of religion(s) in general, he said: "Religions are, for the most part, bad—but religion is not".[42] According to his wife Adele, "Gödel, although he did not go to church, was religious and read the Bible in bed every Sunday morning",[43] while of Islam, he said, "I like Islam: it is a consistent [or consequential] idea of religion and open-minded."[44] Legacy Douglas Hofstadter wrote the 1979 book Gödel, Escher, Bach to celebrate the work and ideas of Gödel, M. C. Escher and Johann Sebastian Bach. It partly explores the ramifications of the fact that Gödel's incompleteness theorem can be applied to any Turing-complete computational system, which may include the human brain. The Kurt Gödel Society, founded in 1987, was named in his honor. It is an international organization for the promotion of research in logic, philosophy, and the history of mathematics. The University of Vienna hosts the Kurt Gödel Research Center for Mathematical Logic. The Association for Symbolic Logic has held an annual Gödel Lecture each year since 1990. Gödel's Philosophical Notebooks are edited at the Kurt Gödel Research Centre, which is situated at the Berlin-Brandenburg Academy of Sciences and Humanities in Germany. Lou Jacobi plays Gödel in the 1994 film I.Q. Five volumes of Gödel's collected works have been published. The first two include his publications; the third includes unpublished manuscripts from his Nachlass, and the final two include correspondence. In 2005 John Dawson published a biography of Gödel, Logical Dilemmas: The Life and Work of Kurt Gödel (A. K.
Peters, Wellesley, MA, ISBN 1-56881-256-6). Stephen Budiansky's book about Gödel's life, Journey to the Edge of Reason: The Life of Kurt Gödel (W. W. Norton & Company, New York City, NY, ISBN 978-0-393-35820-9), was a New York Times Critics' Top Book of 2021.[45] Gödel was also one of four mathematicians examined in David Malone's 2008 BBC documentary Dangerous Knowledge.[46] The Gödel Prize, an annual prize for outstanding papers in the area of theoretical computer science, is named after him. Bibliography Important publications In German: • 1930, "Die Vollständigkeit der Axiome des logischen Funktionenkalküls." Monatshefte für Mathematik und Physik 37: 349–60. • 1931, "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I." Monatshefte für Mathematik und Physik 38: 173–98. • 1932, "Zum intuitionistischen Aussagenkalkül", Anzeiger Akademie der Wissenschaften Wien 69: 65–66. In English: • 1940. The Consistency of the Axiom of Choice and of the Generalized Continuum Hypothesis with the Axioms of Set Theory. Princeton University Press. • 1947. "What is Cantor's continuum problem?" The American Mathematical Monthly 54: 515–25. Revised version in Paul Benacerraf and Hilary Putnam, eds., 1984 (1964). Philosophy of Mathematics: Selected Readings. Cambridge Univ. Press: 470–85. • 1950, "Rotating Universes in General Relativity Theory." Proceedings of the international Congress of Mathematicians in Cambridge, Vol. 1, pp. 175–81. In English translation: • Kurt Gödel, 1992. On Formally Undecidable Propositions Of Principia Mathematica And Related Systems, tr. B. Meltzer, with a comprehensive introduction by Richard Braithwaite. Dover reprint of the 1962 Basic Books edition. • Kurt Gödel, 2000.[47] On Formally Undecidable Propositions Of Principia Mathematica And Related Systems, tr. Martin Hirzel • Jean van Heijenoort, 1967. A Source Book in Mathematical Logic, 1879–1931. Harvard Univ. Press. • 1930. "The completeness of the axioms of the functional calculus of logic," 582–91. • 1930. "Some metamathematical results on completeness and consistency," 595–96. Abstract to (1931). • 1931. "On formally undecidable propositions of Principia Mathematica and related systems," 596–616. • 1931a. "On completeness and consistency," 616–17. • Collected Works: Oxford University Press: New York. Editor-in-chief: Solomon Feferman. • Volume I: Publications 1929–1936 ISBN 978-0-19-503964-1 / Paperback: ISBN 978-0-19-514720-9, • Volume II: Publications 1938–1974 ISBN 978-0-19-503972-6 / Paperback: ISBN 978-0-19-514721-6, • Volume III: Unpublished Essays and Lectures ISBN 978-0-19-507255-6 / Paperback: ISBN 978-0-19-514722-3, • Volume IV: Correspondence, A–G ISBN 978-0-19-850073-5, • Volume V: Correspondence, H–Z ISBN 978-0-19-850075-9. • Philosophische Notizbücher / Philosophical Notebooks: De Gruyter: Berlin/München/Boston. Editor: Eva-Maria Engelen. • Volume 1: Philosophie I Maximen 0 / Philosophy I Maxims 0 ISBN 978-3-11-058374-8. • Volume 2: Zeiteinteilung (Maximen) I und II / Time Management (Maxims) I and II ISBN 978-3-11-067409-5. • Volume 3: Maximen III / Maxims III ISBN 978-3-11-075325-7 See also • Gödel machine • Gödel fuzzy logic • Gödel–Löb logic • Gödel Prize • Gödel's ontological proof • Infinite-valued logic • List of Austrian scientists • List of pioneers in computer science • Mathematical Platonism • Original proof of Gödel's completeness theorem • Primitive recursive functional • Strange loop • Tarski's undefinability theorem • World Logic Day Notes 1. Kreisel, G. 
(1980). "Kurt Godel. 28 April 1906–14 January 1978". Biographical Memoirs of Fellows of the Royal Society. 26: 148–224. doi:10.1098/rsbm.1980.0005. 2. "Gödel". Merriam-Webster Dictionary. 3. For instance, in their "Principia Mathematica " (Stanford Encyclopedia of Philosophy edition). 4. Smullyan, R. M. (1992). Gödel's Incompleteness Theorems. New York, Oxford: Oxford University Press, ch. V. 5. Smullyan, R. M. (1992). Gödel's Incompleteness Theorems. New York, Oxford: Oxford University Press, ch. IX. 6. Dawson 1997, pp. 3–4. 7. Dawson 1997, p. 12 8. Procházka 2008, pp. 30–34. 9. Dawson 1997, p. 15. 10. Gödel, Kurt (1986). Collected works. Feferman, Solomon. Oxford. p. 37. ISBN 0-19-503964-5. OCLC 12371326.{{cite book}}: CS1 maint: location missing publisher (link) 11. Balaguer, Mark. "Kurt Godel". Britannica School High. Encyclopædia Britannica, Inc. Retrieved June 3, 2019. 12. Kim, Alan (January 1, 2015). Zalta, Edward N. (ed.). Johann Friedrich Herbart (Winter 2015 ed.). Metaphysics Research Lab, Stanford University. 13. Dawson 1997, p. 24. 14. At the University of Vienna, Kurt Gödel attended several mathematics and philosophy courses side by side with Hermann Broch, who was then in his early forties. See: Sigmund, Karl; Dawson Jr., John W.; Mühlberger, Kurt (2007). Kurt Gödel: Das Album - The Album. Springer-Verlag. p. 27. ISBN 978-3-8348-0173-9. 15. Gleick, J. (2011) The Information: A History, a Theory, a Flood, London, Fourth Estate, p. 181. 16. Halmos, P.R. (April 1973). "The Legend of von Neumann". The American Mathematical Monthly. 80 (4): 382–94. doi:10.1080/00029890.1973.11993293. 17. Stadler, Friedrich (2015). The Vienna Circle: Studies in the Origins, Development, and Influence of Logical Empiricism. Springer. ISBN 978-3-319-16561-5. 18. Casti, John L.; Depauli, Werner; Koppe, Matthias; Weismantel, Robert (2001). Gödel : a life of logic. p. 147. arXiv:math/0410111. doi:10.1287/moor.1050.0169. ISBN 978-0-7382-0518-2. S2CID 9054486. {{cite book}}: |journal= ignored (help). From p. 80, which quotes Rudolf Gödel, Kurt's brother and a medical doctor. The words "a severe nervous crisis", and the judgement that the Schlick assassination was its trigger, are from the Rudolf Gödel quote. Rudolf knew Kurt well in those years. 19. Dawson 1997, pp. 110–12 20. Hutchinson Encyclopedia (1988), p. 518 21. Gödel, Kurt (November 9, 1938). "The Consistency of the Axiom of Choice and of the Generalized Continuum-Hypothesis". Proceedings of the National Academy of Sciences of the United States of America. 24 (12): 556–57. Bibcode:1938PNAS...24..556G. doi:10.1073/pnas.24.12.556. ISSN 0027-8424. PMC 1077160. PMID 16577857. 22. Dawson, John W. Jr. "Kurt Gödel at Notre Dame" (PDF). p. 4. the Mathematics department at the University of Notre Dame was host ... for a single semester in the spring of 1939 [to] Kurt Gödel 23. "Kurt Gödel". Institute for Advanced Study. December 9, 2019. 24. Goldstein 2005, p. 33 25. Dawson 1997, pp. 179–80. The story of Gödel's citizenship hearing is repeated in many versions. Dawson's account is the most carefully researched, but was written before the rediscovery of Morgenstern's written account. Most other accounts appear to be based on Dawson, hearsay or speculation. 26. Oskar Morgenstern (September 13, 1971). "History of the Naturalization of Kurt Gödel" (PDF). Retrieved April 16, 2019. 27. "Kurt Gödel – Institute for Advanced Study". Retrieved December 1, 2015. 28. Gödel, Kurt (July 1, 1949). 
"An Example of a New Type of Cosmological Solutions of Einstein's Field Equations of Gravitation". Rev. Mod. Phys. 21 (447): 447–450. Bibcode:1949RvMP...21..447G. doi:10.1103/RevModPhys.21.447. 29. "Das Genie & der Wahnsinn". Der Tagesspiegel (in German). January 13, 2008. 30. Dawson, John W. Jr. (2005). Logical Dilemmas: The Life and Work of Kurt Gödel. A K Peters. p. 166. ISBN 978-1-56881-256-4. 31. "The President's National Medal of Science: Recipient Details | NSF – National Science Foundation". www.nsf.gov. Retrieved September 17, 2016. 32. "APS Member History". search.amphilsoc.org. Retrieved January 28, 2021. 33. Gödel, Kurt (1950). "Rotating universes in general relativity theory" (PDF). In: Proceedings of the International Congress of Mathematicians, Cambridge, Massachusetts, August 30–September 6, 1950. Vol. 1. pp. 175–81. Archived from the original (PDF) on December 28, 2013. Retrieved December 4, 2017. 34. "Tragic deaths in science: Kurt Gödel - looking over the edge of reason - Paperpile". 35. Davis, Martin (May 4, 2005). "Gödel's universe". Nature. 435 (7038): 19–20. Bibcode:2005Natur.435...19D. doi:10.1038/435019a. 36. Toates, Frederick; Olga Coschug Toates (2002). Obsessive Compulsive Disorder: Practical Tried-and-Tested Strategies to Overcome OCD. Class Publishing. p. 221. ISBN 978-1-85959-069-0. 37. Dawson, John W. (June 1, 2006). "Gödel and the limits of logic". Plus. University of Cambridge. Retrieved November 1, 2020. 38. Tucker McElroy (2005). A to Z of Mathematicians. Infobase Publishing. p. 118. ISBN 978-0-8160-5338-4. Gödel had a happy childhood, and was called "Mr. Why" by his family, due to his numerous questions. He was baptized as a Lutheran, and re-mained a theist (a believer in a personal God) throughout his life. 39. Wang 1996, p. 8. 40. Wang 1996, p. 104-105. 41. Gödel's answer to a special questionnaire sent him by the sociologist Burke Grandjean. This answer is quoted directly in Wang 1987, p. 18, and indirectly in Wang 1996, p. 112. It's also quoted directly in Dawson 1997, p. 6, who cites Wang 1987. The Grandjean questionnaire is perhaps the most extended autobiographical item in Gödel's papers. Gödel filled it out in pencil and wrote a cover letter, but he never returned it. "Theistic" is italicized in both Wang 1987 and Wang 1996. It is possible that this italicization is Wang's and not Gödel's. The quote follows Wang 1987, with two corrections taken from Wang 1996. Wang 1987 reads "Baptist Lutheran" where Wang 1996 has "baptized Lutheran". Wang 1987 has "rel. cong.", which in Wang 1996 is expanded to "religious congregation". 42. Wang 1996, p. 316. 43. Wang 1996, p. 51. 44. Wang 1996, p. 148, 4.4.3. It is one of Gödel's observations, made between 16 November and 7 December 1975, which Wang found hard to classify under the main topics considered elsewhere in the book. 45. "Times Critics' Top Books of 2021". The New York Times. December 15, 2021. Retrieved July 5, 2022. 46. "Dangerous Knowledge". BBC. June 11, 2008. Retrieved October 6, 2009. 47. Kurt Godel (1931). "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I" [On formally undecidable propositions of Principia Mathematica and related systems I] (PDF). Monatshefte für Mathematik und Physik. 38: 173–98. doi:10.1007/BF01700692. S2CID 197663120. References • Dawson, John W (1997), Logical dilemmas: The life and work of Kurt Gödel, Wellesley, MA: AK Peters. • Goldstein, Rebecca (2005), Incompleteness: The Proof and Paradox of Kurt Gödel, New York: W.W. 
Norton & Co, ISBN 978-0-393-32760-1. • Wang, Hao (1987), Reflections on Kurt Gödel, Cambridge: MIT Press, ISBN 0-262-73087-1 • Wang, Hao (1996), A Logical Journey: From Gödel to Philosophy, Cambridge: MIT Press, ISBN 0-262-23189-1 Further reading • Stephen Budiansky, 2021. Journey to the Edge of Reason: The Life of Kurt Gödel. W.W. Norton & Company. • Casti, John L; DePauli, Werner (2000), Gödel: A Life of Logic, Cambridge, MA: Basic Books (Perseus Books Group), ISBN 978-0-7382-0518-2. • Dawson, John W, Jr (1996), Logical Dilemmas: The Life and Work of Kurt Gödel, AK Peters. • Dawson, John W, Jr (1999), "Gödel and the Limits of Logic", Scientific American, 280 (6): 76–81, Bibcode:1999SciAm.280f..76D, doi:10.1038/scientificamerican0699-76, PMID 10048234. • Franzén, Torkel (2005), Gödel's Theorem: An Incomplete Guide to Its Use and Abuse, Wellesley, MA: AK Peters. • Ivor Grattan-Guinness, 2000. The Search for Mathematical Roots 1870–1940. Princeton Univ. Press. • Hämeen-Anttila, Maria (2020). Gödel on Intuitionism and Constructive Foundations of Mathematics (Ph.D. thesis). Helsinki: University of Helsinki. ISBN 978-951-51-5922-9. • Jaakko Hintikka, 2000. On Gödel. Wadsworth. • Douglas Hofstadter, 1980. Gödel, Escher, Bach. Vintage. • Stephen Kleene, 1967. Mathematical Logic. Dover paperback reprint c. 2001. • Stephen Kleene, 1980. Introduction to Metamathematics. North Holland ISBN 0-7204-2103-9 (Ishi Press paperback. 2009. ISBN 978-0-923891-57-2) • J.R. Lucas, 1970. The Freedom of the Will. Clarendon Press, Oxford. • Ernest Nagel and Newman, James R., 1958. Gödel's Proof. New York Univ. Press. • Ed Regis, 1987. Who Got Einstein's Office? Addison-Wesley Publishing Company, Inc. • Raymond Smullyan, 1992. Godel's Incompleteness Theorems. Oxford University Press. • Olga Taussky-Todd, 1983. Remembrances of Kurt Gödel. Engineering & Science, Winter 1988. • Yourgrau, Palle, 1999. Gödel Meets Einstein: Time Travel in the Gödel Universe. Chicago: Open Court. • Yourgrau, Palle, 2004. A World Without Time: The Forgotten Legacy of Gödel and Einstein. Basic Books. ISBN 978-0-465-09293-2. (Reviewed by John Stachel in the Notices of the American Mathematical Society 54 (7), pp. 861–68.) External links Wikimedia Commons has media related to Kurt Gödel. Wikiquote has quotations related to Kurt Gödel. • Weisstein, Eric Wolfgang (ed.). "Gödel, Kurt (1906–1978)". ScienceWorld. • Kennedy, Juliette. "Kurt Gödel". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. • Time Bandits: an article about the relationship between Gödel and Einstein by Jim Holt • Notices of the AMS, April 2006, Volume 53, Number 4 Kurt Gödel Centenary Issue • Paul Davies and Freeman Dyson discuss Kurt Godel (transcript) • "Gödel and the Nature of Mathematical Truth" Edge: A Talk with Rebecca Goldstein on Kurt Gödel. • It's Not All In The Numbers: Gregory Chaitin Explains Gödel's Mathematical Complexities. • Gödel photo gallery. (archived) • Kurt Gödel MacTutor History of Mathematics archive page • National Academy of Sciences Biographical Memoir
Wikipedia
\begin{document} \title[Protoadditive functors, derived torsion theories and homology]{Protoadditive functors, derived torsion theories and homology} \author{ Tomas Everaert and Marino Gran} \address{Universit\'e catholique de Louvain, Institut de Recherche en Math\'ematique et Physique, Chemin du Cyclotron 2, 1348 Louvain-la-Neuve, Belgium } \address{ Vakgroep Wiskunde \\ Vrije Universiteit Brussel \\ Department of Mathematics \\ Pleinlaan 2\\ 1050 Brussel \\ Belgium. } \email{[email protected]} \email{[email protected]} \date{\today} \maketitle \begin{abstract} Protoadditive functors are designed to replace additive functors in a non-abelian setting. Their properties are studied, in particular in relationship with torsion theories, Galois theory, homology and factorisation systems. It is shown how a protoadditive torsion-free reflector induces a chain of derived torsion theories in the categories of higher extensions, similar to the Galois structures of higher central extensions previously considered in semi-abelian homological algebra. Such higher central extensions are also studied, with respect to Birkhoff subcategories whose reflector is protoadditive or, more generally, factors through a protoadditive reflector. In this way we obtain simple descriptions of the non-abelian derived functors of the reflectors via higher Hopf formulae. Various examples are considered in the categories of groups, compact groups, internal groupoids in a semi-abelian category, and other ones. \\ \noindent MSC: 18G50, 18G10, 18E40, 18A40, 20J05, 08B05 \\ \noindent \emph{Keywords}: protoadditive functor, semi-abelian category, torsion theory, Galois theory, homology, factorisation system. \end{abstract} \section*{Introduction} In recent years, the theory of \emph{semi-abelian categories} \cite{JMT} has become a central subject in categorical algebra. Semi-abelian categories allow for a conceptual and unified treatment of the theories of groups, rings, algebras, and similar non-abelian structures, just like, say, abelian categories are suitable for the study of abelian groups and modules, or toposes for investigating the category of sets and categories of sheaves. As explained in \cite{JMT}, the formulation of the notion of semi-abelian category can be seen as an appropriate solution to an old problem S.~Mac Lane mentioned in his classical article \cite{Dfg}, which, in fact, led to the introduction of the notion of abelian category a few years later \cite{Buchsbaum}. With the introduction of any mathematical structure naturally comes the question of defining a suitable notion of morphism. The meaning of ``suitable'' may of course vary, and depends on the applications one has in mind. For instance, between toposes one usually considers so-called ``geometric morphisms'', but the notion of ``logical morphism'' is of importance too. In asking for an appropriate notion of morphism between semi-abelian categories, we should therefore be more specific. As their name suggests, semi-abelian categories are a weaker notion than that of abelian category. Hence, it seems natural to ask if the classical notion of additive functor can be generalised, in a meaningful way, to the non-additive context of semi-abelian categories. We believe the answer is yes, with the sought-after notion being that of ``protoadditive functor'' we introduced in \cite{EG}, and which we intend to investigate more extensively in the present article. 
Before recalling the definition, it is useful to make some comparative remarks on abelian and semi-abelian categories. By a well-known theorem of M.~Tierney, a category is \emph{abelian} if and only if it is both \emph{exact} (in the sense of Barr \cite{Barr}) and \emph{additive}. Now, if we ignore some natural (co)completeness assumptions, \emph{semi-abelian} categories can be defined as exact categories which are also \emph{pointed} and \emph{protomodular} \cite{Bourn0}. Accordingly, a semi-abelian category can be seen as what remains of the notion of abelian category if one replaces ``additivity'' by the weaker (pointed) ``protomodularity'' condition. As observed by D.~Bourn, there is a simple way to express the ``difference'' between an additive and a pointed protomodular category. Classically, any split short exact sequence \begin{equation}\label{sses1} \xymatrix{0 \ar[r] & K \ar[r]^-{\ensuremath{\mathsf{ker\,}} (f)} & A\ar[r]<-.8 ex>_f & B \ar[l]<-.8 ex>_s \ar[r] & 0 } \end{equation} in an additive category $\ensuremath{\mathcal{A}}$ determines a canonical isomorphism $A \cong K \oplus B$, showing that any split short exact sequence is given by a biproduct. Since this property is actually equivalent to the additivity condition, we no longer have that it holds in an arbitrary pointed protomodular category: for instance, in the semi-abelian category $\mathsf{Grp}$ of groups, split short exact sequences are well known to correspond to semi-direct products, not to products. Nevertheless, it is still the case in any pointed protomodular category that $A$ is the \emph{supremum} of $\ensuremath{\mathsf{ker\,}} (f) \colon K \rightarrow A$ and $s \colon B \rightarrow A$ as subobjects of $A$: $A \cong K \vee B$. In fact, we have that the following stronger property holds in a pointed category \emph{if and only if} it is protomodular: for every split short exact sequence \eqref{sses1}, $\ensuremath{\mathsf{ker\,}} (f)$ and $s$ are jointly \emph{extremal} epic (rather than just jointly epic). Now recall that a functor between additive categories is additive if and only if it preserves (binary) biproducts. Taking into account the correspondence between biproducts and split short exact sequences in an additive category, as well as the above comparison between additive and pointed protomodular categories, it seems natural to call \emph{protoadditive} \cite{EG} any functor between pointed protomodular categories that preserves split short exact sequences. Then, of course, for a functor between additive categories, being protoadditive is the same thing as being additive, but there are many examples of interest beyond the additive context, as we shall see in this article. This choice of definition is also motivated by the following reformulation. For a finitely complete category $\ensuremath{\mathcal{A}}$, write $\mathsf{Pt}(\ensuremath{\mathcal{A}})$ for the category of ``points'' in $\ensuremath{\mathcal{A}}$: split epimorphisms with a given splitting. $\mathsf{Pt}(\ensuremath{\mathcal{A}})$ is fibred over $\ensuremath{\mathcal{A}}$ via the codomain functor $\mathsf{Pt}(\ensuremath{\mathcal{A}})\rightarrow \ensuremath{\mathcal{A}}$, the so-called ``fibration of points'' \cite{Bourn0, Bourn1996}, the cartesian morphisms being pullbacks along split epimorphisms. This fibration has been intensively studied during the past twenty years, mainly in connection to its strong classification properties in algebra (see \cite{BB}, for instance, and references therein). 
In particular, a category is protomodular if and only if the change of base functors of the fibration of points reflect isomorphisms. Now, it turns out that if a zero-preserving functor $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{B}}$ between pointed protomodular categories is protoadditive then it preserves at once \emph{arbitrary} pullbacks along split epimorphisms (and not only those of morphisms from the zero-object). In other words, we have that $F$ is protoadditive if and only if the induced functor $\mathsf{Pt}(\ensuremath{\mathcal{A}})\rightarrow\mathsf{Pt}(\ensuremath{\mathcal{B}})$ between the categories of points preserves cartesian morphisms, i.e. if it is a \emph{morphism of fibrations}. The validity of the classical homological diagram lemmas, such as the five lemma or the snake lemma, makes semi-abelian categories particularly suitable for a generalised treatment of non-abelian (co)homology theories. Given, moreover, that the main domain of application of abelian categories and additive functors is homological algebra, it is then natural to investigate the role of protoadditive functors in semi-abelian homological algebra. We started this investigation in \cite{EG} and will continue it in the present article. Recall that, for any Birkhoff subcategory $\ensuremath{\mathcal{B}}$ (= a reflective subcategory closed under subobjects and regular quotients) of a semi-abelian monadic category $\ensuremath{\mathcal{A}}$, the Barr-Beck derived functors of the reflector $I\colon \ensuremath{\mathcal{A}}\rightarrow \ensuremath{\mathcal{B}}$ can be described via generalised \emph{Hopf formulae} \cite{EGV}. These ``formulae'' were defined with respect to a certain chain of ``higher dimensional Galois structures'' naturally induced by the reflection. If $\ensuremath{\mathcal{A}}$ is abelian, then the case where $I$ is additive is of particular importance, as in this case we obtain classical abelian derived functors. Also for a semi-abelian category $\ensuremath{\mathcal{A}}$, the case of a protoadditive $I$ is of interest, since in this case the Hopf formulae take a simplified shape. In the present article, among other things, we shall be interested in extending the work of \cite{EG} in two directions: on the one hand, we shall consider reflections $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$ where $\ensuremath{\mathcal{A}}$ need not be semi-abelian, but only homological and such that every regular epimorphism is an effective descent morphism, and where $\ensuremath{\mathcal{F}}$ is a torsion-free subcategory of $\ensuremath{\mathcal{A}}$ (not necessarily Birkhoff) with protoadditive reflector $F$. We prove that reflections of this type also induce a similar chain of higher dimensional Galois structures along with what we call \emph{derived torsion theories}. This shows, in particular, that protoadditivity is of interest also for functors between homological categories (not necessarily semi-abelian). On the other hand, we shall consider Birkhoff subcategories of semi-abelian categories whose reflector may itself not be protoadditive but only factor through a protoadditive reflector. Here, once again we shall obtain a simplified description of the derived functors via the associated Hopf formulae.
To give a simple illustration of this, consider the reflection \begin{equation}\label{profgroups1} \xymatrix@=30pt{ {\mathsf{Grp(HComp)} \, } \ar@<1ex>[r]_-{^{\perp}}^-{{I}} & {\mathsf{Grp(Prof)}, } \ar@<1ex>[l]^-V} \end{equation} where ${\mathsf{Grp(HComp)}}$ is the category of compact (Hausdorff) groups, ${\mathsf{Grp(Prof)}}$ the category of profinite groups, $V$ the inclusion and ${I}$ the functor sending a compact group $G$ to the quotient $I(G)= G/\Gamma_0 (G)$ of $G$ by the connected component $\Gamma_0(G)$ of the neutral element in $G$. It is well known that ${\mathsf{Grp(HComp)}}$ is a semi-abelian category with enough projectives, and the functor $I$ can be shown to be protoadditive (see Example \ref{exproto}.\ref{exdisc}). We can then consider a \emph{double presentation} $$ \label{doublext} \xymatrix{F \ar[r]^{} \ar[d] & F/K_1 \ar[d] \\ F/K_2 \ar[r] & G } $$ of a compact group $G$, in the sense that $K_1$ and $K_2$ are closed normal subgroups of a free compact group $F$ with the property that both $F/K_1$ and $F/K_2$ are also free, and the square is a pushout. Then the third homology group of $G$ corresponding to the reflection (\ref{profgroups1}) (i.e. with coefficients in the functor $I$) is given by the formula \[ H_3 (G, {\mathsf{Grp(Prof)} } ) = \frac{K_1 \cap K_2 \cap (\Gamma_0(F))} { \Gamma_0(K_1 \cap K_2)}, \] which is therefore independent of the chosen double presentation. By choosing a different reflective subcategory of ${\mathsf{Grp(HComp)}}$, for instance the category ${\mathsf{Ab(Prof)} }$ of profinite abelian groups, we get the composite reflection $$ \xymatrix@=30pt{ {\mathsf{Grp(HComp)} \, } \ar@<1ex>[r]_-{^{\perp}}^-{\ensuremath{\mathsf{ab}}} & {\mathsf{Ab(HComp)}} \ar@<1ex>[l]^-U \ar@<1ex>[r]_-{^{\perp}}^-{\overline{I}} & {\mathsf{Ab(Prof),} } \ar@<1ex>[l]^-V} $$ where $\ensuremath{\mathsf{ab}} \colon {\mathsf{Grp(HComp)}} \rightarrow {\mathsf{Ab(HComp)}}$ is the abelianisation functor, and $\overline{I}$ the (additive) restriction of the functor $I$ in (\ref{profgroups1}). The results in the present article imply in particular that the corresponding homology group of $G$ is given by $$H_3 (G, {\mathsf{Ab(Prof)} } ) = \frac{K_1 \cap K_2 \cap (\overline{[F,F]}\cdot \Gamma_0(F))}{\overline{[K_1,K_2]}\cdot \overline{[K_1 \cap K_2, F]}\cdot \Gamma_0(K_1 \cap K_2)},$$ where the symbol $\cdot$ denotes the product of normal subgroups, and $\overline{[. ,. ] }$ is the topological closure of the commutator subgroup $[. ,. ] $. Similar formulae are obtained for the $n$-th homology group $H_n (G, {\mathsf{Ab(Prof)} })$ of $G$, for any $n \ge 2$. The same method applies to many other reflections, some of which are studied in the present article. This provides us with another motivation for studying protoadditive functors: their usefulness in ``computing'' the homology objects explicitly in a variety of situations. Let us then give a brief overview of the different sections of the article. \noindent {\bf Structure of the article.} The first section is preliminary: we recall some definitions and results---concerning torsion theories, categorical Galois theory and reflective factorisation systems---needed in the text. Section $2$ is mainly devoted to proving alternative characterisations of the protoadditivity condition in various situations. In particular, we show that the protoadditivity of a torsion-free reflector can be detected from a hereditariness condition of the corresponding torsion subcategory (Theorem \ref{protoM}).
Several examples of protoadditive reflectors are examined, and some counter-examples considered, which show the independence from other important types of reflections (such as semi-left-exact, admissible or Barr-exact reflections). In Section $3$ we study torsion theories in homological categories whose torsion-free reflector is protoadditive. We prove that an effective descent morphism is a normal extension if and only if its kernel is torsion-free (Proposition \ref{protocentral}). Next, we establish a bijection between torsion theories satisfying a normality condition $(N)$ and stable factorisation systems $(\mathbb E, \mathbb M)$ having the property that every $e \in \mathbb E$ is a normal epimorphism (Proposition \ref{inducedfactorisation}). As a consequence of this, we obtain that every effective descent morphism $f$ admits a stable ``monotone-light'' factorisation $f=m\circ e$ into a morphism $e$ inverted by the torsion-free reflector followed by a normal extension $m$. We conclude in particular that the category of normal extensions is reflective in the category of effective descent morphisms (Theorem \ref{protofactorisation}). We continue our study of protoadditive torsion-free reflectors in Section $4$. It turns out that the category of normal extensions is not only reflective in the category of effective descent morphisms, but also torsion-free, and that the reflector is again protoadditive (Proposition \ref{firstderivedtt}). We use this result to construct a chain of derived torsion theories in the categories of so-called higher extensions (Theorem \ref{higherderivedT1}), by adopting the axiomatic approach to higher extensions from \cite{Ev}. Next, in Section $5$, we study the normal extensions with respect to a Birkhoff subcategory of a semi-abelian category, in the situation where the reflector is protoadditive. Similarly to the case of torsion theories, we have that an effective descent morphism is a normal extension if and only if its kernel lies in the Birkhoff subcategory, but this time the protoadditivity of the reflector is also necessary for this characterisation of normal extensions to hold whenever the normality condition $(N)$ (see page \pageref{conditionN}) is satisfied (Proposition \ref{characterisationbyextensions}). A higher dimensional version of the same result is also proved (Theorem \ref{characterisationbyextensionshigher}). In the last section, we generalise results from the previous sections by characterising the normal extensions and higher dimensional normal extensions with respect to a composite reflection \[ \xymatrix@=30pt{ {\ensuremath{\mathcal{A}} \, } \ar@<1ex>[r]_-{^{\perp}}^-{I} & {\, \ensuremath{\mathcal{B}} \, } \ar@<1ex>[l]^H \ar@<1ex>[r]_-{^{\perp}}^-{J} & \ensuremath{\mathcal{C}} \ar@<1ex>[l]^G } \] where $\ensuremath{\mathcal{A}}$ is a semi-abelian category, $\ensuremath{\mathcal{B}}$ a Birkhoff subcategory of $\ensuremath{\mathcal{A}}$, and $\ensuremath{\mathcal{C}}$ an admissible (normal epi)-reflective subcategory of $\ensuremath{\mathcal{B}}$ with protoadditive reflector (Theorem \ref{highercomposite}). The admissibility condition on $\ensuremath{\mathcal{C}}$ is satisfied both in the case where $\ensuremath{\mathcal{C}}$ is a torsion-free subcategory and in the case where it is a Birkhoff subcategory of $\ensuremath{\mathcal{B}}$: either case is investigated separately (Proposition \ref{compositetorsion} and Theorems \ref{compositecommutator} and \ref{compositeintersection}).
Finally, we apply the results for the latter case in order to obtain simple descriptions of the non-abelian derived functors of $J\circ I$ via higher Hopf formulae (Corollaries \ref{compositehopf} and \ref{compositehopf2}). We conclude with some new examples in the categories of groups, compact semi-abelian algebras, and internal groupoids in a semi-abelian category. \tableofcontents \section{Preliminaries} \subsection*{Torsion theories} Torsion theories, although classically defined in abelian categories, have been studied in more general contexts by various authors (see for instance \cite{CHK}, and more recently \cite{BG,CDT,BelReit,JT}). Here we recall the definition from \cite{JT}, which is essentially Dickson's definition from \cite{D}, except that the category $\ensuremath{\mathcal{A}}$ is not asked to be abelian, but only pointed. Note that by a \emph{pointed} category we mean, as usual, a category $\ensuremath{\mathcal{A}}$ which admits a \emph{zero-object}, i.e.~an object $0\in\ensuremath{\mathcal{A}}$ which is both initial and terminal. For any pair of objects $A, B\in\ensuremath{\mathcal{A}}$, the unique morphism $A\rightarrow B$ factorising through the zero-object will also be denoted by $0$. If $f\colon A\rightarrow B$ is a morphism in $\ensuremath{\mathcal{A}}$, we shall write $\ensuremath{\mathsf{ker\,}}(f)\colon K[f]\rightarrow A$ for its kernel (the pullback along $f$ of the unique morphism $0\rightarrow B$) and $\ensuremath{\mathsf{coker\,}} (f)\colon B\rightarrow\ensuremath{\mathrm{Cok}} [f]$ for its cokernel (the pushout by $f$ of $A\rightarrow 0$), provided they exist. A \emph{short exact sequence} in $\ensuremath{\mathcal{A}}$ is given by a composable pair of morphisms $(k,f)$, as in the diagram \begin{equation}\label{ses} \xymatrix{ 0 \ar[r] & K \ar[r]^k & A \ar[r]^f & B\ar[r] & 0,} \end{equation} such that $k=\ensuremath{\mathsf{ker\,}} (f)$ and $f=\ensuremath{\mathsf{coker\,}} (k)$. Given such a short exact sequence, we shall sometimes denote the object $B$ by $A/K$. \begin{definition} Let $\ensuremath{\mathcal{A}}$ be a pointed category. A pair $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ of full and replete subcategories of $\ensuremath{\mathcal{A}}$ is called a \emph{torsion theory} in $\ensuremath{\mathcal{A}}$ if the following two conditions are satisfied: \begin{enumerate} \item $\ensuremath{\mathrm{Hom}}_{\ensuremath{\mathcal{A}}}(T,F)=\{0\}$ for any $T\in\ensuremath{\mathcal{T}}$ and $F\in\ensuremath{\mathcal{F}}$; \item for any object $A\in\ensuremath{\mathcal{A}}$ there exists a short exact sequence \begin{equation}\label{torsionses} 0\rightarrow T \rightarrow A \rightarrow F \rightarrow 0 \end{equation} such that $T\in\ensuremath{\mathcal{T}}$ and $F\in\ensuremath{\mathcal{F}}$. \end{enumerate} \end{definition} $\ensuremath{\mathcal{T}}$ is called the \emph{torsion part} and $\ensuremath{\mathcal{F}}$ the \emph{torsion-free part} of the torsion theory $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$. A full and replete subcategory $\ensuremath{\mathcal{F}}$ of a pointed category $\ensuremath{\mathcal{A}}$ is \emph{torsion-free} if it is the torsion-free part of some torsion theory in $\ensuremath{\mathcal{A}}$. \emph{Torsion} subcategories are defined dually.
The terminology comes from the classical example $( \ensuremath{\mathsf{Ab}}_{t.}, \ensuremath{\mathsf{Ab}}_{t.f.})$ of torsion theory in the variety $\ensuremath{\mathsf{Ab}}$ of abelian groups, where $\ensuremath{\mathsf{Ab}}_{t.f.}$ consists of all torsion-free abelian groups in the usual sense (=abelian groups satisfying, for every $n\geq 1$, the implication $nx=0 \Rightarrow x=0$), and $\ensuremath{\mathsf{Ab}}_{t.}$ consists of all torsion abelian groups. There are, of course, many more examples of interest, several of which will be considered below. A torsion-free subcategory is necessarily a reflective subcategory, while a torsion subcategory is always coreflective: the reflection and coreflection of an object $A$ are given by the short exact sequence \eqref{torsionses}, which is uniquely determined, up to isomorphism. Such reflections $\ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$, for which each unit $\eta_A \colon A \rightarrow F(A)$ is a normal epimorphism (=the cokernel of some morphism) will be called \emph{(normal epi)-reflective}. Given a (normal epi)-reflective subcategory of a pointed category, there are various ways to determine whether or not it is torsion-free. For instance, this happens when the induced radical is idempotent. In order to explain what this means, recall that a subfunctor $T \colon \ensuremath{\mathcal{A}} \rightarrow \ensuremath{\mathcal{A}}$ of the identity functor is a \emph{radical} if, for any $A\in\ensuremath{\mathcal{A}}$, the canonical subobject $t_A \colon T(A) \rightarrow A$ is a normal monomorphism (=the kernel of some morphism) and $T(A/T(A)) = 0$ (assuming, in particular, that every $t_A$ admits a cokernel). $T$ is \emph{idempotent} if $T\circ T=T$ or, more precisely, $t_{T(A)}\colon T(T(A))\rightarrow T(A)$ is an isomorphism, for every $A\in\ensuremath{\mathcal{A}}$. Any radical $T\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{A}}$ induces a (normal epi)-reflection $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$ with units $\eta_A\colon A\rightarrow A/T(A)$. Conversely, given any (normal epi)-reflection $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$ one obtains a radical by considering the kernels $t_A=\ensuremath{\mathsf{ker\,}} (\eta_A)\colon K[\eta_A]\rightarrow A$, provided they exist. This bijection restricts to a bijection between torsion theories and idempotent radicals, as we shall recall in Theorem \ref{torsiontheorem}. There are strong connections between torsion theories, admissible Galois structures in the sense of \cite{J} and reflective factorisation systems in the sense of \cite{CHK}. We briefly recall some of these connections in the present section, and refer the reader to the article \cite{CJKP} and to the book \cite{BoJ} for more details. \subsection*{Admissible Galois structures} In this subsection, we recall some definitions from Categorical Galois Theory \cite{J1, J}. We shall restrict ourselves to the special case where the basic adjunction in the Galois structure is a reflection (as in \cite{JK4}). 
\begin{definition} A \emph{Galois structure} $\Gamma = ( \ensuremath{\mathcal{A}},\ensuremath{\mathcal{F}}, F ,U,{\mathcal{E}})$ on a category $\ensuremath{\mathcal{A}}$ consists of a full replete reflective subcategory $\ensuremath{\mathcal{F}}$ of $\ensuremath{\mathcal{A}}$, with inclusion $U\colon \ensuremath{\mathcal{F}}\rightarrow\ensuremath{\mathcal{A}}$ and reflector $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$: \begin{equation} \label{GaloisSTR} \xymatrix{ {\ensuremath{\mathcal{A}}}\,\, \ar@<1ex>[r]^-{F} & {\ensuremath{\mathcal{F}}}, \ar@<1ex>[l]^-{U}_-{_{\perp}}} \end{equation} together with a class $\ensuremath{\mathcal{E}}$ of morphisms in $\ensuremath{\mathcal{A}}$, such that: \begin{enumerate} \item $\ensuremath{\mathcal{A}}$ admits pullbacks along morphisms in $\ensuremath{\mathcal{E}}$; \item $\ensuremath{\mathcal{E}}$ contains all isomorphisms, is stable under composition and under pullbacks; \item $UF(\ensuremath{\mathcal{E}}) \subset \ensuremath{\mathcal{E}}$. \end{enumerate} \end{definition} We shall usually drop the functor $U$ from the notation, since it is a full inclusion. Often, $\ensuremath{\mathcal{E}}$ is the class of \emph{all} morphisms in $\ensuremath{\mathcal{A}}$, in which case the Galois structure $\Gamma = ( \ensuremath{\mathcal{A}},\ensuremath{\mathcal{F}}, F,{\mathcal{E}})$ is called \emph{absolute}. However, in many of the examples we consider, $\ensuremath{\mathcal{E}}$ will be a class of \emph{effective descent morphisms}, whose definition we now recall. (See \cite{JST} for a beautiful introduction to descent theory.) For an object $B\in\ensuremath{\mathcal{A}}$, we write $(\ensuremath{\mathcal{A}}\downarrow_{\ensuremath{\mathcal{E}}} B)$ for the full subcategory of the comma category ($\ensuremath{\mathcal{A}} \downarrow B$) of objects over $B$, determined by the morphisms in $\ensuremath{\mathcal{E}}$ with codomain $B$. Similarly, we write $(\ensuremath{\mathcal{F}}\downarrow_{\ensuremath{\mathcal{E}}}F(B))$ for the full subcategory of ($\ensuremath{\mathcal{F}} \downarrow F(B)$) whose objects are in $\ensuremath{\mathcal{E}}$. For a morphism $p \colon E \rightarrow B$ in $\ensuremath{\mathcal{A}}$, we denote by \[ p^* \colon (\ensuremath{\mathcal{A}}\downarrow_{\ensuremath{\mathcal{E}}} B) \rightarrow (\ensuremath{\mathcal{A}}\downarrow_{\ensuremath{\mathcal{E}}} E) \] the ``change of base'' functor sending a morphism $f\colon A \rightarrow B$ in $\ensuremath{\mathcal{E}}$ to its pullback $p^* (f) \colon E\times_B A \rightarrow E$ along $p$. \begin{definition} A morphism $p\colon E \rightarrow B\in \ensuremath{\mathcal{E}}$ is a \emph{monadic extension} when the functor $p^*$ is monadic. When $\ensuremath{\mathcal{E}}$ is the class of all morphisms, a monadic extension will be called an \emph{effective descent morphism}. \end{definition} In a variety of universal algebras, an effective descent morphism is the same as a surjective homomorphism. More generally, in an exact \cite{Barr} category, the effective descent morphisms are precisely the regular epimorphisms. However, this need no longer be the case in an arbitrary regular \cite{Barr} category. Now, let $B$ be an object of $\ensuremath{\mathcal{A}}$.
The reflection \eqref{GaloisSTR} induces an adjunction \begin{equation} \label{GaloisInduced} \xymatrix{ {(\ensuremath{\mathcal{A}}\downarrow_{\ensuremath{\mathcal{E}}} B)}\,\, \ar@<1ex>[r]^-{F^B} & {(\ensuremath{\mathcal{F}}\downarrow_{\ensuremath{\mathcal{E}}}F(B))}, \ar@<1ex>[l]^-{U^B}_-{_{\perp}}} \end{equation} where $F^B$ is defined by $F^B (f)= F(f)$ for any $f \in (\ensuremath{\mathcal{A}}\downarrow_{\ensuremath{\mathcal{E}}}B)$, and $U^B (\phi)= \eta_B^* (U(\phi))$ on any $\phi \in (\ensuremath{\mathcal{F}}\downarrow_{\ensuremath{\mathcal{E}}} F(B))$. This adjunction need not, in general, be a full reflection, but those Galois structures for which this \emph{is} the case for every $B\in\ensuremath{\mathcal{A}}$, play a fundamental role: \begin{definition} A Galois structure $\Gamma = ( \ensuremath{\mathcal{A}},\ensuremath{\mathcal{F}},F ,U,\ensuremath{\mathcal{E}})$ is \emph{admissible} when the functor $U^B\colon {(\ensuremath{\mathcal{F}}\downarrow_{\ensuremath{\mathcal{E}}} F(B))} \rightarrow {(\ensuremath{\mathcal{A}}\downarrow_{\ensuremath{\mathcal{E}}} B)}$ is fully faithful for every $B \in \ensuremath{\mathcal{A}}$. \end{definition} We shall sometimes say that the reflection $F\colon \ensuremath{\mathcal{A}}\rightarrow \ensuremath{\mathcal{F}}$ is ``admissible with respect to $\ensuremath{\mathcal{E}}$'', when we mean that $\Gamma$ is admissible. With respect to a given admissible Galois structure, one studies the following types of morphisms: \begin{definition} Let $\Gamma = ( \ensuremath{\mathcal{A}},\ensuremath{\mathcal{F}},F ,U,\ensuremath{\mathcal{E}})$ be an admissible Galois structure. A morphism $f\colon A\rightarrow B$ in $\ensuremath{\mathcal{E}}$ is called \begin{enumerate} \item a \emph{trivial extension} (or \emph{trivial covering}) when the canonical commutative square \[ \xymatrix{A \ar[r]^-{\eta_A} \ar[d]_{f} & F(A) \ar[d]^{F(f)} \\ B \ar[r]_-{\eta_B} & F(B)} \] is a pullback; \item a \emph{central extension} (or \emph{covering}) when it is ``locally trivial'': there exists a monadic extension $p\colon E \rightarrow B$ with the property that $p^* (f)$ is a trivial extension; \item a \emph{normal extension} if it is a monadic extension and $f^* (f)$ is a trivial extension. \end{enumerate} \end{definition} Note that, by admissibility, $f$ is a trivial extension if and only if it lies in the (essential) image of the functor $U^B$. If we choose $\ensuremath{\mathcal{A}}=\ensuremath{\mathsf{Gp}}$ the variety of groups, $\ensuremath{\mathcal{F}}=\ensuremath{\mathsf{Ab}}$ the subvariety of abelian groups and $F=\ensuremath{\mathsf{ab}}$ the abelianisation functor, then $(\ensuremath{\mathsf{Gp}},\ensuremath{\mathsf{Ab}},\ensuremath{\mathsf{ab}},\ensuremath{\mathcal{E}})$ is an admissible Galois structure for $\ensuremath{\mathcal{E}}$ the class of surjective homomorphisms \cite{J}. Here, the trivial extensions are precisely the surjective homomorphisms $f\colon A\rightarrow B$ whose restriction $[A,A]\rightarrow [B,B]$ to the commutator subgroups is an isomorphism. Central and normal extensions coincide and are precisely the central extensions in the usual sense: surjective homomorphisms $f\colon A\rightarrow B$ whose kernel $K[f]$ lies in the centre of $A$. (See \cite{BoJ}, for instance, for more details.) 
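To make the difference between these notions concrete, let us add a standard group-theoretic illustration (not needed in what follows): for the abelianisation reflection just considered, the product projection $G \times A \rightarrow G$, with $A$ an abelian group, is a trivial extension, since its restriction to commutator subgroups is the canonical isomorphism $[G\times A, G\times A]=[G,G]\times 0 \cong [G,G]$; on the other hand, the quotient $q\colon Q_8 \rightarrow Q_8/Z(Q_8)\cong \mathbb{Z}/2\times\mathbb{Z}/2$ of the quaternion group of order $8$ by its centre is a central (hence also normal) extension which is not trivial, because its restriction to commutator subgroups is the morphism $[Q_8,Q_8]\cong\mathbb{Z}/2 \rightarrow 0$, which is not an isomorphism.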
Note that the admissibility can also be expressed as an exactness property of the reflector: $\Gamma = ( \ensuremath{\mathcal{A}},\ensuremath{\mathcal{F}},F ,U,\ensuremath{\mathcal{E}})$ is admissible if and only if the reflector $F\colon \ensuremath{\mathcal{A}}\rightarrow \ensuremath{\mathcal{F}}$ preserves pullbacks of the form \begin{equation}\label{sle}\vcenter{ \xymatrix{ P \ar[d] \ar[r] \ar@{}[rd]|<<{\copy\pullbackbox} & X \ar[d]^{x}\\ A \ar[r]_-{\eta_A} & F(A)}} \end{equation} where $X \in \ensuremath{\mathcal{F}}$, $x \colon X\rightarrow F(A) $ lies in $\ensuremath{\mathcal{E}}$ and $\eta_A$ is the reflection unit. In particular, in the absolute case (where $\ensuremath{\mathcal{E}}$ is the class of \emph{all} morphisms) this means that an admissible Galois structure is the same as a \emph{semi-left-exact} reflection in the sense of \cite{CHK}: a reflection $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$ of a category $\ensuremath{\mathcal{A}}$ into a full and replete subcategory $\ensuremath{\mathcal{F}}$ of $\ensuremath{\mathcal{A}}$ preserving all pullbacks \eqref{sle} where $X \in \ensuremath{\mathcal{F}}$. Semi-left-exact reflections were introduced in the study of reflective factorisation systems. We briefly recall some notions from \cite{CHK}. \subsection*{Reflective factorisation systems} For morphisms $e$ and $m$ in a category $\ensuremath{\mathcal{A}}$ we write $e\downarrow m$ if for every pair of morphisms $(a,b)$ such that $b\circ e=m\circ a$, there exists a unique morphism $d$ such that $d\circ e=a$ and $m\circ d=b$: \[ \xymatrix{ A \ar[r]^e \ar[d]_a & B \ar@{.>}[ld]|{d} \ar[d]^b\\ C \ar[r]_m & D.} \] For classes $\mathbb{E}$ and $\mathbb{M}$ of morphisms in $\ensuremath{\mathcal{A}}$ we put \[ \mathbb{E}^{\downarrow}=\{m | e\downarrow m \ \textrm{for all} \ e\in \mathbb{E}\}, \ \ \ \mathbb{M}^{\uparrow}=\{e | e\downarrow m \ \textrm{for all} \ m\in \mathbb{M}\}. \] By a \emph{prefactorisation system} on a category $\ensuremath{\mathcal{A}}$ we mean a pair $(\mathbb{E},\mathbb{M})$ of classes of morphisms in $\ensuremath{\mathcal{A}}$ such that $\mathbb{E}=\mathbb{M}^{\uparrow}$ and $\mathbb{M}=\mathbb{E}^{\downarrow}$. A \emph{factorisation system} is a prefactorisation system $(\mathbb{E},\mathbb{M})$ such that for every morphism $f$ in $\ensuremath{\mathcal{A}}$ there exist morphisms $e\in \mathbb{E}$ and $m\in \mathbb{M}$ such that $f=m\circ e$. Any full replete reflective subcategory $\ensuremath{\mathcal{F}}$ of a category $\ensuremath{\mathcal{A}}$ determines a prefactorisation system $(\mathbb E, \mathbb M)$ on $\ensuremath{\mathcal{A}}$, where $\mathbb E$ is the class of morphisms inverted by the reflector $F \colon \ensuremath{\mathcal{A}} \rightarrow \ensuremath{\mathcal{F}}$ and $\mathbb M=\mathbb{E}^{\downarrow}$. Furthermore, when the reflector $F \colon \ensuremath{\mathcal{A}} \rightarrow \ensuremath{\mathcal{F}}$ is semi-left-exact and $\ensuremath{\mathcal{A}}$ admits pullbacks along every unit $\eta_A\colon A\rightarrow F(A)$ ($A\in\ensuremath{\mathcal{A}}$), the prefactorisation system $(\mathbb E, \mathbb M)$ is a factorisation system and $\mathbb M$ consists exactly of the trivial extensions with respect to the corresponding absolute Galois structure. When $\ensuremath{\mathcal{A}}$ admits arbitrary pullbacks, $F \colon \ensuremath{\mathcal{A}} \rightarrow \ensuremath{\mathcal{F}}$ is semi-left-exact if and only if $F$ preserves pullbacks along morphisms in $\mathbb M$. (See \cite{CHK,CJKP} for more details.) 
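For the reader's convenience, let us also sketch, in outline, how the $(\mathbb E,\mathbb M)$-factorisation of a morphism $f\colon A\rightarrow B$ is obtained in the semi-left-exact case (this is the construction of \cite{CHK}, recalled here only as a sketch): write $P=F(A)\times_{F(B)}B$ for the pullback of $F(f)\colon F(A)\rightarrow F(B)$ along the unit $\eta_B\colon B\rightarrow F(B)$. Since this pullback is of the form \eqref{sle} with $X=F(A)\in\ensuremath{\mathcal{F}}$, it is preserved by $F$; one deduces that the comparison morphism $e=(\eta_A,f)\colon A\rightarrow P$ is inverted by $F$ (so $e\in\mathbb E$) and that the projection $m\colon P\rightarrow B$ is a trivial extension (so $m\in\mathbb M$), giving the desired factorisation $f=m\circ e$.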
When the reflector $F\colon \ensuremath{\mathcal{A}}\rightarrow \ensuremath{\mathcal{F}}$ preserves pullbacks of the form \eqref{sle} where, however, one no longer assumes that $X$ belongs to $\ensuremath{\mathcal{F}}$, one says that $F$ has \emph{stable units} \cite{CHK}. This property is equivalent to the units $\eta_A\colon A\rightarrow F(A)$ ($A\in\ensuremath{\mathcal{A}}$) being \emph{stably in $\mathbb E$}: the pullback $f^*(\eta_A)$ along any morphism $f\colon B\rightarrow F(A)$ lies in $\mathbb E$. Note that even if $F$ has stable units, the reflective factorisation system $(\mathbb{E},\mathbb{M})$ need not be \emph{stable} (i.e. the class $\mathbb{E}$ is pullback stable), in general. In fact, it was shown in \cite{CHK} that this only happens when $F$ is a localisation: $F$ preserves arbitrary finite limits. Restricting $\mathbb{E}$ to the class $\mathbb{E}'$ of morphisms $e\in \mathbb{E}$ that are stably in $\mathbb{E}$ and enlarging $\mathbb{M}$ to the class $\mathbb{M}^*$ of central extensions, sometimes (but certainly not always) yields a new factorisation system $(\mathbb{E}',\mathbb{M}^*)$ which is stable by definition. We shall consider examples where this is ``partially'' true in Section \ref{coveringmorphisms}. We conclude this section by listing several characterisations of torsion-free subcategories in terms of some of the notions recalled above. Most of these are known, but the equivalence between the semi-left-exactness and the stability of units (under the given conditions) is new, as far as we know. \begin{theorem}\label{torsiontheorem} For a full replete subcategory $\ensuremath{\mathcal{F}}$ of a finitely complete pointed category $\ensuremath{\mathcal{A}}$ with pullback-stable normal epimorphisms, the following conditions are equivalent: \begin{enumerate} \item $\ensuremath{\mathcal{F}}$ is a torsion-free subcategory of $\ensuremath{\mathcal{A}}$; \item $\ensuremath{\mathcal{F}}$ is a (normal epi)-reflective subcategory of $\ensuremath{\mathcal{A}}$ and the induced radical is idempotent; \item $\ensuremath{\mathcal{F}}$ is a (normal epi)-reflective subcategory of $\ensuremath{\mathcal{A}}$ and the reflector $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$ has stable units; \item $\ensuremath{\mathcal{F}}$ is a (normal epi)-reflective subcategory of $\ensuremath{\mathcal{A}}$ and the reflector $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$ is semi-left-exact (=admissible). \end{enumerate} \end{theorem} \begin{proof} For the equivalences $(1) \Leftrightarrow (2) \Leftrightarrow (4)$ see \cite{BG,JT}. $(3)\Rightarrow (4)$ is true by definition. We prove that $(2)$ implies $(3)$. Consider, for this, a commutative diagram $$\xymatrix@=35pt{&P \ar[r]^{p_2} \ar@{}[rd]|<<<{\copy\pullbackbox}\ar[d]_{p_1} & X \ar[d]^{x} \\ T(A) \ar[r]_-{t_A} \ar[ur]^{t} & A \ar[r]_{\eta_A } & F(A) } $$ where the square is a pullback and $t= \ensuremath{\mathsf{ker\,}} (p_2)$. Observe that $p_2$ is necessarily the cokernel of $t$. Moreover, since $T$ is idempotent, we have that $F(T(A))=0$. Hence, by applying the left adjoint $F$, one gets the commutative diagram $$\xymatrix@=35pt{&F(P) \ar[r]^{F(p_2)} \ar[d]_(.6){F(p_1)} & F(X) \ar[d]^{F(x)} \\ 0 \ar[r]_-{F(t_A)} \ar[ur]^{F(t)} & F(A) \ar@{=}[r]_{} & F(A) } $$ where $F(p_2)= \ensuremath{\mathsf{coker\,}}(F(t))$, since obviously $F$ preserves cokernels, so that the square is a pullback, as desired.
\end{proof} The above theorem asserts, in particular, that any torsion-free reflection gives rise to an admissible Galois structure, and suggests studying the central extensions with respect to a torsion theory. We shall do this in Sections \ref{coveringmorphisms} and \ref{sectionderived} (see also \cite{CJKP,GR,GJ}), and we shall be particularly interested in torsion theories with a \emph{protoadditive} reflector. \section{Protoadditive functors}\label{protoadditivesection} Let $\ensuremath{\mathcal{A}}$ be a pointed category with pullbacks along split epimorphisms. By a \emph{split short exact sequence} in $\ensuremath{\mathcal{A}}$ we mean a triple $(k,f,s)$ of morphisms in $\ensuremath{\mathcal{A}}$, as in the diagram \begin{equation}\label{sses} \xymatrix{0 \ar[r]& K \ar[r]^k & A \ar@<-.8 ex>[r]_f & B \ar@<-.8ex>[l]_s \ar[r] &0, } \end{equation} such that $k=\ensuremath{\mathsf{ker\,}}(f)$ and $f\circ s=1_B$ (i.e. $f$ is a split epimorphism with splitting~$s$). $\ensuremath{\mathcal{A}}$ is a \emph{protomodular} category in the sense of Bourn \cite{Bourn0} precisely when the split short five lemma holds true in $\ensuremath{\mathcal{A}}$: given a morphism \[ \xymatrix{ 0 \ar[r] & K \ar[r] \ar[d]_-{\kappa} & A \ar[d]_{\alpha} \ar@<-.8 ex>[r]_f & B \ar@<-.8ex>[l]_{s} \ar[d]^{\beta} \ar[r] & 0\\ 0 \ar[r] & K' \ar[r] & A' \ar@<-.8 ex>[r]_{f'} & B' \ar@<-.8ex>[l]_{s' } \ar[r] & 0} \] of split short exact sequences, if both $\kappa$ and $\beta$ are isomorphisms, then so is $\alpha$. Note that the protomodularity can be equivalently expressed as the property that the right-hand square $\beta \circ f = f' \circ \alpha$ is a pullback if and only if $\kappa$ is an isomorphism, for any morphism of split short exact sequences as above. The prototypical example of a pointed protomodular category is the variety of groups. In fact, \emph{any} pointed variety whose theory contains the group operations and identities (such as the varieties of rings and of Lie algebras) is protomodular, and more examples will be considered in what follows. If a pointed protomodular category $\ensuremath{\mathcal{A}}$ is, moreover, finitely complete, then any regular epimorphism (=the coequaliser of some pair of morphisms), and in particular any split epimorphism, is normal \cite{Bourn0}. Thus, in particular, any split short exact sequence is a short exact sequence. Of course, if $\ensuremath{\mathcal{A}}$ is an additive category, then any split short exact sequence in $\ensuremath{\mathcal{A}}$ is, up to isomorphism, of the form \[ \xymatrix{0 \ar[r]& K \ar[r]^-{i_K} & K\oplus B \ar@<-.8 ex>[r]_-{\pi_B} & B \ar@<-.8ex>[l]_-{i_B} \ar[r] &0 } \] where $K\oplus B$ is the biproduct of $K$ and $B$, $i_K$ and $i_B$ are the canonical injections and $\pi_B$ the canonical projection, and the split short five lemma becomes a triviality. Hence, any additive category is pointed protomodular. Moreover, a functor between additive categories is additive (that is, it preserves binary biproducts) if and only if it preserves split short exact sequences. We claim that the latter property is still meaningful in a non-additive context.
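For completeness, let us record the routine verification behind the statement on additive categories made above: given a split short exact sequence \eqref{sses} in an additive category, one has $f\circ (1_A-s\circ f)=0$, so there is a unique morphism $r\colon A\rightarrow K$ with $k\circ r=1_A-s\circ f$; using that $k$ is a monomorphism one checks that $r\circ k=1_K$ and $r\circ s=0$, and it follows that $\langle r,f\rangle\colon A\rightarrow K\oplus B$ and $k\circ \pi_K+s\circ \pi_B\colon K\oplus B\rightarrow A$ (with $\pi_K$ the canonical projection onto $K$) are mutually inverse isomorphisms, identifying the given sequence with the biproduct one.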
This brings us to the central notion of this article: \begin{definition} \cite{EG} A functor $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$ between pointed protomodular categories $\ensuremath{\mathcal{A}}$ and $\ensuremath{\mathcal{F}}$ is \emph{protoadditive} if it preserves split short exact sequences: for any split short exact sequence \eqref{sses} in $\ensuremath{\mathcal{A}}$, the image $$\xymatrix{0 \ar[r]& F(K) \ar[r]^{F(k)} & F(A) \ar@<-.8 ex>[r]_{F(f)} & F(B) \ar@<-.8ex>[l]_{F(s)} \ar[r] &0 }$$ by $F$ is a split short exact sequence in $\ensuremath{\mathcal{F}}$. \end{definition} Note that a protoadditive functor necessarily preserves the zero object. Moreover, the preservation of split short exact sequences implies at once the preservation of arbitrary pullbacks along split epimorphisms: \begin{proposition}\label{protoadditive-pullback} A zero-preserving functor $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$ between pointed protomodular categories $\ensuremath{\mathcal{A}}$ and $\ensuremath{\mathcal{F}}$ is protoadditive if and only if it preserves pullbacks along split epimorphisms. \end{proposition} \begin{proof} Clearly, $F$ is protoadditive as soon as it preserves pullbacks along split epimorphisms as well as the zero object. Now assume that $F$ is protoadditive. Given a pullback \[ \xymatrix{A \times_B E \ar@{}[rd]|<<{\copy\pullbackbox} \ar@<-.8ex>[r]_-{\pi_E} \ar[d]_{\pi_A} & E \ar[d]^p \ar@<-.8ex>[l] \\ A \ar@<-.8 ex>[r]_f & B \ar@<-.8ex>[l] } \] along a split epimorphism $f$, the restriction $\overline{\pi}_A \colon K[\pi_E] \rightarrow K[f]$ of $\pi_A$ is an isomorphism. By applying the functor $F$, one gets a morphism $$ \xymatrix@=40pt{ 0 \ar[r] & F(K[\pi_E]) \ar[d]_{F(\overline{\pi}_A)} \ar[r]^-{F(\ensuremath{\mathsf{ker\,}} (\pi_E))} & F(E \times_B A) \ar@<-.8ex>[r]_-{F(\pi_E)} \ar[d]_-{F(\pi_A)} &F( E ) \ar@<-.8ex>[l] \ar[d]^{F(p)} \ar[r] & 0 \\ 0 \ar[r] & F(K[f]) \ar[r]_-{F(\ensuremath{\mathsf{ker\,}} (f))} & F(A) \ar@<-.8ex>[r]_{F(f)} & F(B) \ar@<-.8ex>[l] \ar[r] & 0} $$ of split short exact sequences in $\ensuremath{\mathcal{F}}$, where the left hand vertical arrow $F(\overline{\pi}_A) $ is an isomorphism. The right hand square is then a pullback by protomodularity. \end{proof} A pointed protomodular category is called \emph{homological} \cite{BB} if it is also regular \cite{Barr}: finitely complete with stable regular epi-mono factorisations. A \emph{semi-abelian} category \cite{JMT} is a pointed protomodular category $\ensuremath{\mathcal{A}}$ with binary coproducts which is, moreover, exact \cite{Barr}: regular and every equivalence relation in $\ensuremath{\mathcal{A}}$ is effective (=the kernel pair of some morphism). Since any variety of universal algebras is exact, any pointed protomodular variety is semi-abelian. The category of topological groups provides an example of a homological category which is not semi-abelian, as opposed to its full subcategory of compact Hausdorff groups, which \emph{is} semi-abelian. In fact, in the latter two examples, we could replace the theory of groups with any semi-abelian algebraic theory, i.e.~a Lawvere theory $\mathbb{T}$ such that the category $\ensuremath{\mathsf{Set}}^{\mathbb{T}}$ of $\mathbb{T}$-models in the category $\ensuremath{\mathsf{Set}}$ of sets is a semi-abelian category. 
It was shown in \cite{BouJ} that a theory is semi-abelian precisely when it contains a unique constant, written $0$, binary terms $\alpha_i (x,y)$ (for $i\in \{1, \dots, n\}$ and a natural number $n\geq 1$) and an $(n+1)$-ary term $\beta$ subject to the identities $$\alpha_i(x,x)=0 \quad {\rm and}\quad \beta(\alpha_1(x,y), \dots , \alpha_n (x,y),y)=x.$$ In \cite{BC} it was proved, for any semi-abelian theory $\mathbb{T}$, that the category $\ensuremath{\mathsf{Top}}^{\mathbb{T}}$ of topological $\mathbb{T}$-algebras (=$\mathbb{T}$-models in the category $\ensuremath{\mathsf{Top}}$ of topological spaces) is homological, and that the full subcategory $\mathsf{HComp}^{\mathbb{T}}$ of compact Hausdorff topological $\mathbb{T}$-algebras is semi-abelian (in fact, a semi-abelian category monadic over $\ensuremath{\mathsf{Set}}$). Diagram lemmas such as the (short) five lemma, the $3\times 3$ lemma and the snake lemma, which are well known to hold in the abelian context, are also valid in any homological category \cite{B2, BB}. The $3\times 3$ lemma immediately gives us the following: \begin{proposition}\label{reflector=radical} Let $\ensuremath{\mathcal{A}}$ be a homological category and $F\colon \ensuremath{\mathcal{A}}\rightarrow \ensuremath{\mathcal{F}}$ the reflector into a full replete (normal epi)-reflective subcategory $\ensuremath{\mathcal{F}}$ of $\ensuremath{\mathcal{A}}$. Then $F$ is protoadditive if and only if the corresponding radical $T\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{A}}$ is protoadditive. \end{proposition} For a homological category $\ensuremath{\mathcal{A}}$, and a full replete (normal epi)-reflective subcategory $\ensuremath{\mathcal{F}}$ of $\ensuremath{\mathcal{A}}$, the protoadditivity of the reflector $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$ can also be formulated as the preservation of a certain class of monomorphisms. \begin{definition}\cite{BJK} A \emph{protosplit monomorphism} in a pointed protomodular category $\ensuremath{\mathcal{A}}$ is a normal monomorphism $k \colon K \rightarrow A$ that is the kernel of a split epimorphism. \end{definition} In other words, protosplit monomorphisms are the monomorphisms $k$ appearing in split short exact sequences of the form \eqref{sses}. \begin{proposition}\label{caracterisationproto} Let $\ensuremath{\mathcal{A}}$ be a homological category and $F \colon \ensuremath{\mathcal{A}} \rightarrow \ensuremath{\mathcal{F}}$ the reflector into a full replete (normal epi)-reflective subcategory $\ensuremath{\mathcal{F}}$ of $\ensuremath{\mathcal{A}}$. Then the following conditions are equivalent: \begin{enumerate} \item $F \colon \ensuremath{\mathcal{A}} \rightarrow \ensuremath{\mathcal{F}}$ is a protoadditive functor; \item $F \colon \ensuremath{\mathcal{A}} \rightarrow \ensuremath{\mathcal{F}}$ sends protosplit monomorphisms to normal monomorphisms; \item $F \colon \ensuremath{\mathcal{A}} \rightarrow \ensuremath{\mathcal{F}}$ sends protosplit monomorphisms to monomorphisms. \end{enumerate} \end{proposition} \begin{proof} The implications $(1) \Rightarrow (2) \Rightarrow (3)$ are trivial, and we have that $(2)$ implies $(1)$ since, for a split short exact sequence \eqref{sses}, if $F(k)$ is a normal monomorphism, it is necessarily the kernel of its cokernel, and the latter is $F(f)$, since $F$ preserves colimits. 
Note that for these implications the regularity of $\ensuremath{\mathcal{A}}$ is irrelevant, as is the assumption that the reflection units $\eta_A\colon A\rightarrow F(A)$ are normal epimorphisms. Let us then prove the implication $(3) \Rightarrow (1)$. For this, we consider a split short exact sequence \eqref{sses}. It induces the diagram \[ \xymatrix{0 \ar[r] &K \ar[r]^k \ar[d]_{\eta_K} & A \ar[d]^{\eta_A} \ar@<-.8 ex> [r]_-f & B \ar[d]^{\eta_{B}} \ar@<-.8ex>[l]_-s \ar[r] &0 \\ &F(K) \ar[r]_{F(k)} & F(A) \ar@<-.8 ex> [r]_-{F(f)} & F(B) \ar@<-.8ex>[l]_-{F(s)} \ar[r] &0 } \] in $\ensuremath{\mathcal{A}}$ where the vertical morphisms are the reflection units. By assumption, $F(k)$ is a monomorphism. Moreover, since any homological category is regular Mal'tsev (see \cite{Bourn1996}), the right-hand square of regular epimorphisms is a \emph{regular pushout} or \emph{double extension} (by Proposition $3.2$ in \cite{B3}), which means that also the induced morphism $(\eta_A,f)\colon A\rightarrow F(A)\times_{F(B)}B$ to the pullback of $F(f)$ along $\eta_B$ is a regular epimorphism. Hence, by regularity of $\ensuremath{\mathcal{A}}$, so is the restriction of $\eta_A$ to the kernels $K\rightarrow K[F(f)]$, since this is a pullback of $(\eta_A,f)$. It follows that also the induced morphism $F(K)\rightarrow K[F(f)]$ is a regular epimorphism. As it is also a monomorphism---since $F(k)$ is a monomorphism---it is then an isomorphism. Hence $F(k)$ is the kernel of $F(f)$ in the category $\ensuremath{\mathcal{A}}$. Since the inclusion $\ensuremath{\mathcal{F}} \rightarrow \ensuremath{\mathcal{A}}$ reflects limits, $F(k)$ is the kernel of $F(f)$ in $\ensuremath{\mathcal{F}}$, and we can conclude that the functor $F \colon \ensuremath{\mathcal{A}} \rightarrow \ensuremath{\mathcal{F}}$ is indeed protoadditive. \end{proof} Let us now consider torsion theories $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ whose reflector $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$ is protoadditive. First of all, we show how the protoadditivity of $F$ can be detected from the torsion subcategory $\ensuremath{\mathcal{T}}$, when $\ensuremath{\mathcal{A}}$ is homological. A subcategory $\ensuremath{\mathcal{T}}$ of a category $\ensuremath{\mathcal{A}}$ is called \emph{$\ensuremath{\mathcal{M}}$-hereditary}, for $\ensuremath{\mathcal{M}}$ a class of monomorphisms in $\ensuremath{\mathcal{A}}$, if for any $m\colon A\rightarrow B$ in $\ensuremath{\mathcal{M}}$, $B\in\ensuremath{\mathcal{T}}$ implies that $A\in\ensuremath{\mathcal{T}}$. When $\ensuremath{\mathcal{M}}$ is the class of all monomorphisms, $\ensuremath{\mathcal{T}}$ is simply called \emph{hereditary}. A torsion theory $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ is ($\ensuremath{\mathcal{M}}$-)hereditary if its torsion part $\ensuremath{\mathcal{T}}$ is so. \begin{theorem}\label{protoM} For a torsion theory $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ in a homological category $\ensuremath{\mathcal{A}}$, the following conditions are equivalent: \begin{enumerate} \item the torsion subcategory $\ensuremath{\mathcal{T}}$ is $\ensuremath{\mathcal{M}}$-hereditary, for $\ensuremath{\mathcal{M}}$ the class of protosplit monomorphisms; \item the reflector $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$ is protoadditive.
\end{enumerate} \end{theorem} \begin{proof} The implication $(1)\Rightarrow (2)$ follows from Proposition \ref{reflector=radical}, since for any $\ensuremath{\mathcal{M}}$-hereditary torsion theory $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ in $\ensuremath{\mathcal{A}}$, the image $$ \xymatrix{0 \ar[r]& T(K) \ar[r]^{T(k)} & T(A) \ar@<-.8 ex> [r]_-{T(f)} & T(B) \ar@<-.8ex>[l]_-{T(s)} \ar[r] &0.} $$ by the corresponding radical $T\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{A}}$ of any split short exact sequence \eqref{sses} in $\ensuremath{\mathcal{A}}$ is again a split short exact sequence. Indeed, the coreflector $T \colon \ensuremath{\mathcal{A}} \rightarrow \mathcal T$ certainly preserves kernels (as any right adjoint), and the fact that $\mathcal T$ is closed in $\ensuremath{\mathcal{A}}$ under protosplit monomorphisms implies that $T(k) \colon T(K) \rightarrow T(A)$ is still the kernel of $T(f)$ in the category $\ensuremath{\mathcal{A}}$. For the implication $(2)\Rightarrow (1)$, assume that $F \colon \ensuremath{\mathcal{A}} \rightarrow \ensuremath{\mathcal{F}}$ is protoadditive. If $k \colon K \rightarrow A$ is a protosplit monomorphism such that $A$ lies in the corresponding torsion subcategory $\mathcal T$, then $K$ lies in $\ensuremath{\mathcal{T}}$ as well: indeed, by applying the functor $F$ to $k$ we obtain the morphism $F(k) \colon F(K) \rightarrow F(A)$ which is a monomorphism since $F$ is protoadditive, so that $F(A)=0$ implies that $F(K)=0$, hence $K\in\mathcal T$. \end{proof} Recall that a subcategory $\ensuremath{\mathcal{F}}$ of a pointed category $\ensuremath{\mathcal{A}}$ is \emph{closed under extensions} if for any short exact sequence \eqref{ses} in $\ensuremath{\mathcal{A}}$, the object $A\in\ensuremath{\mathcal{F}}$ as soon as both $K\in\ensuremath{\mathcal{F}}$ and $B\in\ensuremath{\mathcal{F}}$. It is well known that a full replete (normal epi)-reflective subcategory $\ensuremath{\mathcal{F}}$ of an abelian category $\ensuremath{\mathcal{A}}$ is torsion-free if and only if it is closed under extensions. While the ``only if'' part is still valid in arbitrary pointed categories $\ensuremath{\mathcal{A}}$, this is no longer the case for the ``if'' part (see \cite{JT}). However, it turns out that both implications hold when the reflector $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$ is protoadditive and $\ensuremath{\mathcal{A}}$ is either a semi-abelian category or a category of topological semi-abelian algebras, as we shall see below. Moreover, a full replete reflective subcategory $\ensuremath{\mathcal{F}}$ of a pointed protomodular category $\ensuremath{\mathcal{A}}$ with protoadditive reflector $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$ is always \emph{closed under split extensions}, even if it is not torsion-free. This means that for any split short exact sequence \eqref{sses} in $\ensuremath{\mathcal{A}}$, the object $A$ lies in $\ensuremath{\mathcal{F}}$ as soon as $K\in\ensuremath{\mathcal{F}}$ and $B\in\ensuremath{\mathcal{F}}$: \begin{proposition}\label{splitext} Any full replete reflective subcategory $\ensuremath{\mathcal{F}}$ of a pointed protomodular category $\ensuremath{\mathcal{A}}$ with protoadditive reflector $F \colon \ensuremath{\mathcal{A}} \rightarrow \ensuremath{\mathcal{F}}$ is closed under split extensions in $\ensuremath{\mathcal{A}}$.
\end{proposition} \begin{proof} If \eqref{sses} is a split extension in $\ensuremath{\mathcal{A}}$ with $K, B \in \ensuremath{\mathcal{F}}$, then the split short five lemma applied to the commutative diagram \[ \xymatrix@=35pt{0 \ar[r]& K \ar@{=}[d]_{\eta_K} \ar[r]^{k} & A \ar[d]_{\eta_A} \ar@<-.8 ex> [r]_-{f} & B \ar@{=}[d]^{\eta_B} \ar@<-.8ex>[l]_-{s} \ar[r] &0 \\ 0 \ar[r]& F(K) \ar[r]^{F(k)} & F(A) \ar@<-.8 ex> [r]_-{F(f)} & F(B) \ar@<-.8ex>[l]_-{F(s)} \ar[r] &0} \] of exact sequences in $\mathcal A$ shows that the reflection unit $\eta_A$ is an isomorphism. Hence, $A$ belongs to $\ensuremath{\mathcal{F}}$. \end{proof} \begin{remark} Closedness under split extensions is not a sufficient condition for $F\colon \ensuremath{\mathcal{A}}\rightarrow \ensuremath{\mathcal{F}}$ to be protoadditive. For instance, the quasivariety $\ensuremath{\mathsf{Gp}}_{t.f.}$ of torsion-free groups (=groups satisfying, for every $n\geq 1$, the implication $x^n=1\Rightarrow x=1$) is closed under (split) extensions in the variety $\ensuremath{\mathsf{Gp}}$ of groups, since it is a torsion-free subcategory of $\ensuremath{\mathsf{Gp}}$, but the reflector $\ensuremath{\mathsf{Gp}}\rightarrow\ensuremath{\mathsf{Gp}}_{t.f.}$ is not protoadditive (see Example \ref{counterproto}.\ref{extfree}). \end{remark} Now let $\ensuremath{\mathcal{A}}$ be a pointed category and $\ensuremath{\mathcal{F}}$ a full replete (normal epi)-reflective subcategory of $\ensuremath{\mathcal{A}}$. As remarked above, closedness under extensions is necessary but not sufficient for $\ensuremath{\mathcal{F}}$ to be torsion-free. However, by the Corollary in \cite{JT}, when $\ensuremath{\mathcal{A}}$ is homological, the two conditions are equivalent as soon as the composite $t_A\circ t_{T(A)}\colon T(T(A))\rightarrow A$ is a normal monomorphism, for any $A\in\ensuremath{\mathcal{A}}$ (here, as before, $t_A\colon T(A)\rightarrow A$ denotes the coreflection counit). It turns out that the latter property is always satisfied if $\ensuremath{\mathcal{A}}$ is semi-abelian and $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$ is protoadditive: \begin{lemma} \label{compositeisnormal } Let $\ensuremath{\mathcal{F}}$ be a full replete (normal epi)-reflective subcategory of a semi-abelian category $\ensuremath{\mathcal{A}}$ with protoadditive reflector $F \colon \ensuremath{\mathcal{A}} \rightarrow \ensuremath{\mathcal{F}}$. Then, for any normal monomorphism $k \colon K \rightarrow A$, the monomorphism $k \circ t_K \colon T(K) \rightarrow A$ is normal. \end{lemma} \begin{proof} Let $(R, \pi_1, \pi_2)$ be the equivalence relation on $A$ corresponding to the normal subobject $k \colon K \rightarrow A$, so that $k = \pi_2 \circ \ensuremath{\mathsf{ker\,}} (\pi_1)$. One then forms the diagram $$\xymatrix@=35pt{T(K ) \ar[d]_{t_K} \ar[r]^{T ( \ensuremath{\mathsf{ker\,}} (\pi_1))} & T(R) \ar[d]_{t_R} \ar@<.8 ex>[r]^{T(\pi_1)} \ar@<-.8 ex>[r]_{T (\pi_2)} & T(A)\ar[d]^{t_A} \\ K \ar[r]_{\ensuremath{\mathsf{ker\,}} (\pi_1)} & R \ar@<.8 ex>[r]^{\pi_1} \ar@<-.8 ex>[r]_{\pi_2} & A}$$ which is obtained by applying the radical $T$ corresponding to the reflector $F$ to the lower row. One observes that the left-hand square is a pullback, due to the fact that $T(\ensuremath{\mathsf{ker\,}} (\pi_1))$ is the kernel of $T(\pi_1)$ (by Proposition \ref{reflector=radical}) and $t_A$ is a monomorphism. It follows that the composite $\ensuremath{\mathsf{ker\,}} (\pi_1) \circ t_K$ is a normal monomorphism, as an intersection of normal monomorphisms. 
Finally, the arrow $\pi_2 \circ \ensuremath{\mathsf{ker\,}} (\pi_1) \circ t_K = k \circ t_K$ is a normal monomorphism, as it is the regular image along the regular epimorphism $\pi_2$ of the normal monomorphism $\ensuremath{\mathsf{ker\,}} (\pi_1) \circ t_K$. \end{proof} Hence, the Corollary in \cite{JT} gives us: \begin{proposition}\label{torsion=closed} Let $\ensuremath{\mathcal{A}}$ be a semi-abelian category and $F \colon \ensuremath{\mathcal{A}} \rightarrow \ensuremath{\mathcal{F}}$ a protoadditive reflector into a full replete (normal epi)-reflective subcategory $\ensuremath{\mathcal{F}}$ of $\ensuremath{\mathcal{A}}$. Then $\ensuremath{\mathcal{F}}$ is a torsion-free subcategory of $\ensuremath{\mathcal{A}}$ if and only if $\ensuremath{\mathcal{F}}$ is closed in $\ensuremath{\mathcal{A}}$ under extensions. \end{proposition} \begin{remark}\label{idealtopological} Lemma \ref{compositeisnormal }, and, consequently, also Proposition \ref{torsion=closed} remain valid if $\ensuremath{\mathcal{A}}=\ensuremath{\mathsf{Top}}^{\mathbb{T}}$ is a category of topological semi-abelian algebras. Indeed, to adapt the proof of Lemma \ref{compositeisnormal } to this situation, it suffices to verify that the monomorphism $\pi_2\circ \ensuremath{\mathsf{ker\,}} (\pi_1) \circ t_K$ is normal. For this, first notice that the underlying morphism of semi-abelian algebras is normal. To check that it is also normal in the category of \emph{topological} semi-abelian algebras, we observe that $T(K)$ carries the induced topology for the inclusion $\pi_2\circ \ensuremath{\mathsf{ker\,}} (\pi_1) \circ t_K$ into $A$. This follows from the fact that $0\times T(K)$ has the topology induced by $R$, while $R$ has the topology induced by $A\times A$. \end{remark} Before considering some examples, we investigate the influence of a protoadditive reflector on the associated (pre)factorisation system. As recalled above, a full replete reflection $F\colon \ensuremath{\mathcal{A}}\rightarrow \ensuremath{\mathcal{F}}$ is a localisation if and only if the class $\mathbb E$ of morphisms inverted by $F$ is stable under pullback. In the case of a protoadditive $F$, we still have that $\mathbb E$ is stable under pullback along split epimorphisms, and also the converse is true if $F$ is semi-left-exact: \begin{theorem} Let $\ensuremath{\mathcal{A}}$ be a finitely complete pointed protomodular category, $\ensuremath{\mathcal{F}}$ a full replete reflective subcategory of $\ensuremath{\mathcal{A}}$ and $(\mathbb E,\mathbb M)$ the induced (pre)factorisation system. Then $(1)$ implies $(2)$: \begin{enumerate} \item $F : \ensuremath{\mathcal{A}} \rightarrow \ensuremath{\mathcal{F}}$ is protoadditive; \item the class $\mathbb E$ is stable under pullback along split epimorphisms. \end{enumerate} If, moreover, the reflector $F\colon \ensuremath{\mathcal{A}}\rightarrow \ensuremath{\mathcal{F}}$ is semi-left-exact, then the two conditions are equivalent. \end{theorem} \begin{proof} First of all note that $\ensuremath{\mathcal{F}}$ is pointed and protomodular as a full reflective subcategory of $\ensuremath{\mathcal{A}}$. Since a protoadditive functor between pointed protomodular categories preserves pullbacks along split epimorphisms (by Proposition \ref{protoadditive-pullback}), $(1)$ implies $(2)$. Conversely, assume that $\mathbb E$ is stable under pullback along split epimorphisms and $F$ is semi-left-exact. Consider morphisms $f\colon A\rightarrow B$ and $p\colon E\rightarrow B$ in $\ensuremath{\mathcal{A}}$, with $f$ a split epimorphism. 
Let $e\in \mathbb E$ and $m\in \mathbb M$ be morphisms such that $p=m\circ e$. Then in the diagram \[ \xymatrix{ E\times_BA \ar[r] \ar@<-.8 ex>[d] & I\times_BA \ar[r] \ar@<-.8 ex>[d] & A \ar@<-.8 ex>[d]_f\\ E \ar@<-.8 ex>[u] \ar[r]_e & I \ar@<-.8 ex>[u] \ar[r]_m & B \ar@<-.8 ex>[u]} \] the left hand pullback is preserved by $F$ by assumption, and the right hand pullback because $F$ is semi-left-exact (which implies that $F$ preserves pullbacks along morphisms in $\mathbb M$). Thus $F$ is protoadditive. \end{proof} \begin{remark} Notice that we could have taken the object $E$ in the above proof to be zero. Hence, one could replace the condition $(2)$ with the apparently weaker condition: $(2')$ if $0 \rightarrow B$ is in $\mathbb E$, any kernel of a split epimorphism with codomain $B$ is in $\mathbb E$. \end{remark} \begin{examples}\label{exproto} \begin{enumerate} \item Any reflector into a full reflective subcategory of an additive category is additive, hence protoadditive. \item\label{rings} Let $\mathsf{CRng}$ be the semi-abelian variety of commutative but not necessarily unitary rings. Write $\mathsf{RedCRng}$ for the quasivariety of reduced commutative rings (namely those ones satisfying, for every $n\geq 1$, the implication $x^n=0 \Rightarrow x=0$) and $\mathsf{NilCRng}$ for the full subcategory of $\mathsf{CRng}$ consisting of nilpotent commutative rings. Then ($\mathsf{NilCRng},\mathsf{RedCRng}$) is a hereditary torsion-theory in $\mathsf{CRng}$, so that, by Theorem \ref{protoM}, the reflector $F$ \[ \xymatrix{ { \mathsf{CRng} }\,\, \ar@<1ex>[r]^-{F} & {\mathsf{RedCRng} } \ar@<1ex>[l]^-{U}_-{_{\perp}}} \] is protoadditive. \item\label{exgroupoids} Let $\ensuremath{\mathcal{A}}$ be an arbitrary semi-abelian category and $\ensuremath{\mathsf{Gpd}}(\ensuremath{\mathcal{A}})$ the category of (internal) groupoids in $\ensuremath{\mathcal{A}}$, which is again semi-abelian. Recall (e.g. from \cite{ML}) that an (internal) groupoid $A=(A_1,A_0,m, d,c,i)$ in $\ensuremath{\mathcal{A}}$ is a diagram in $\ensuremath{\mathcal{A}}$ of the form \[ \xymatrix{ A_1\times_{A_0}A_1 \ar[r]^-{m} & A_1 \ar@<1.8 ex>[rr]^{d} \ar@<-1.8 ex>[rr]_{c} && A_0, \ar@<0.7 ex>[ll]_{i}} \] where $A_0$ represents the ``object of objects'', $A_1$ the ``object of arrows'', $A_1\times_{A_0}A_1$ the ``object of composable arrows'', $d$ the ``domain'', $c$ the ``codomain'', $i$ the ``identity'', and $m$ the ``composition''. Of course, these morphisms have to satisfy the usual commutativity conditions expressing, internally, the fact that $A$ is a groupoid. There is an adjunction \[ \xymatrix{ {\ensuremath{\mathsf{Gpd}}(\ensuremath{\mathcal{A}}) }\,\, \ar@<1ex>[r]^-{\pi_0} & {\ensuremath{\mathcal{A}}} \ar@<1ex>[l]^-{D}_-{_{\perp}}} \] where $D$ is the functor associating, with any object $A_0\in\ensuremath{\mathcal{A}}$, the discrete equivalence relation on $A_0$, and $\pi_0$ is the connected component functor. This functor $\pi_0$ sends a groupoid $A$ as above to the object $\pi_0 (A)$ in $\ensuremath{\mathcal{A}}$ given by the coequalizer of $d$ and $c$ or, equivalently, by the quotient $A_0/\Gamma_0(A)$, where $\Gamma_0(A)$ is the connected component of $0$ in $A$. It was proved in \cite{EG} that the functor $\pi_0$ is protoadditive. \item\label{exdisc} Let $\mathbb T$ be a semi-abelian algebraic theory. 
As mentioned above, $\mathbb T$ contains a unique constant $0$, binary terms $\alpha_i (x,y)$ (for $i\in \{1, \dots, n\}$ and some natural number $n\geq 1$) and an $(n+1)$-ary term $\beta$ subject to the identities \[ \alpha_i(x,x)=0 \quad {\rm and}\quad \beta(\alpha_1(x,y), \dots , \alpha_n (x,y),y)=x. \] Consider the semi-abelian category $\mathsf{HComp}^{\mathbb{T}}$ of compact Hausdorff topological $\mathbb{T}$-algebras (\emph{compact $\mathbb{T}$-algebras} for short) and $\mathsf{TotDis}^{\mathbb{T}}$ its full subcategory of compact and totally disconnected $\mathbb{T}$-algebras. It was shown in \cite{BC} that $\mathsf{TotDis}^{\mathbb{T}}$ is a (normal epi)-reflective subcategory of $\mathsf{HComp}^{\mathbb{T}}$, where the reflector $I$ \[ \xymatrix{ {\mathsf{HComp}^{\mathbb{T}} }\,\, \ar@<1ex>[r]^-{I} & {\mathsf{TotDis}^{\mathbb{T}}} \ar@<1ex>[l]^-{}_-{_{\perp}}} \] sends a compact algebra $A$ to the quotient $A/\Gamma_0(A)$ of $A$ by the connected component $\Gamma_0(A)$ of $0$ in $A$. From \cite{BG} we know that $\mathsf{TotDis}^{\mathbb{T}}$ is, moreover, a torsion-free subcategory of $\mathsf{HComp}^{\mathbb{T}}$ with corresponding torsion subcategory the category $\mathsf{ConnComp}^{\mathbb{T}}$ of connected compact $\mathbb{T}$-algebras. We claim that $I$ is protoadditive. By Theorem \ref{protoM}, it suffices to prove that $\mathsf{ConnComp}^{\mathbb{T}}$ is $\ensuremath{\mathcal{M}}$-hereditary, for $\ensuremath{\mathcal{M}}$ the class of protosplit monomorphisms. For this purpose we consider a split short exact sequence \[ \xymatrix{0 \ar[r]& K \ar[r]^k & A \ar@<-.8 ex> [r]_f & B \ar@<-.8ex>[l]_s \ar[r] &0} \] in $\mathsf{HComp}^{\mathbb{T}}$ and suppose that $A$ is connected. Notice that the binary term \[ \sigma(x,y)= \beta(\alpha_1(x,y), \dots , \alpha_n (x,y),0) \] is a \emph{subtraction} \cite{U}, i.e.~we have that $\sigma(x,x)=0$ and $\sigma(x,0)=x$. It follows that sending an element $a\in A$ to the element $\sigma(a,s(f(a)))$ defines a continuous map $g\colon A\rightarrow K$ such that $g\circ k=1_{K}$. In particular, $g$ is surjective, and $K$ is connected as a continuous image of the connected space $A$. \item Now consider the category $\mathsf{Top}^{\mathbb T}$ of topological $\mathbb{T}$-algebras and its full subcategory $\mathsf{Haus}^{\mathbb T}$ of Hausdorff $\mathbb{T}$-algebras, still for a semi-abelian theory $\mathbb{T}$. It was shown in \cite{BC} that $\mathsf{Haus}^{\mathbb T}$ is a (normal epi)-reflective subcategory of $\mathsf{Top}^{\mathbb T}$, where the reflector $I$ \[ \xymatrix{ {\mathsf{Top}^{\mathbb T} }\,\, \ar@<1ex>[r]^-{I} & {\mathsf{Haus}^{\mathbb T}} \ar@<1ex>[l]_-{_{\perp}}} \] sends a topological semi-abelian algebra $A$ to the quotient $A /\overline{\{0\} }$ of $A$ by the closure $\overline{\{0\}}$ in $A$ of the trivial subalgebra $\{ 0\}$. From \cite{BG} we know that $\mathsf{Haus}^{\mathbb T}$ is, moreover, a torsion-free subcategory of $\mathsf{Top}^{\mathbb T}$, with corresponding torsion subcategory the category $\mathsf{Ind}^{\mathbb{T}}$ of indiscrete $\mathbb{T}$-algebras, and that $(\mathsf{Ind}^{\mathbb{T}},\mathsf{Haus}^{\mathbb T})$ is \emph{quasi-hereditary} \cite{GR}: $\ensuremath{\mathcal{M}}$-hereditary for $\ensuremath{\mathcal{M}}$ the class of regular monomorphisms. Since any protosplit monomorphism is regular, we conclude via Theorem \ref{protoM} that $I$ is protoadditive.
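When $\mathbb{T}$ is the theory of groups, the terms used above can be made explicit (a sketch, purely for illustration and not needed for the arguments): one may take $n=1$ with \[ \alpha_1(x,y)=xy^{-1} \qquad {\rm and}\qquad \beta(x,y)=xy, \] so that the subtraction term is $\sigma(x,y)=xy^{-1}$, and the continuous retraction $g$ considered in the compact case above is simply given by $a\mapsto a\cdot (s(f(a)))^{-1}$.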
\item Abelianisation functors are usually \emph{not} protoadditive (see the last paragraph in Example \ref{counterproto}.\ref{extfree}, for instance). However, here is a nontrivial example of one that is protoadditive. Let $\mathsf{Rng^*}$ be the semi-abelian variety of (not necessarily unital) rings satisfying the identity $xyxy=xy$. The abelian objects in $\mathsf{Rng^*}$ are the $0$-rings: rings satisfying the identity $xy=0$. The reflector $\mathsf{ab} \colon \mathsf{Rng^*} \rightarrow{\mathsf{0\textrm{-}Rng} }$ in the adjunction \[ \xymatrix{ { \mathsf{Rng^*} }\,\, \ar@<1ex>[r]^-{\mathsf{ab}} & {\mathsf{0\textrm{-}Rng} } \ar@<1ex>[l]^-{}_-{_{\perp}}} \] sends a ring $A$ in $\mathsf{Rng^*}$ to the quotient $\mathsf{ab}(A) = A/[A,A]$ of $A$ by the ideal $[A,A] = \{\sum_i a_i{a}_i' \mid a _i \in A, {a}_i' \in A \} $ consisting of all (finite) sums of products of elements in $A$. We claim that the functor $\mathsf{ab}$ is protoadditive. By Proposition \ref{reflector=radical}, it suffices to prove that the corresponding radical $T=[\cdot, \cdot ]\colon \mathsf{Rng^*}\rightarrow\mathsf{Rng^*}$ is protoadditive. To this end, we consider a split short exact sequence \[ \xymatrix{0 \ar[r]& K \ar[r] & A \ar@<-.8 ex> [r]_-{f} & B \ar@<-.8 ex>[l]_-{s} \ar[r] &0} \] in $\mathsf{Rng^*}$ and the restriction induced by the radical $T$ in $\mathsf{Rng^*}$: \[ \xymatrix{ T(K) \ar[r] & T(A) \ar@<-.8 ex> [r]_-{T(f)} &T(B) \ar@<-.8 ex>[l]_-{T(s)}.} \] We shall prove that $T(K) = K[T(f)]$, which will imply that the lower sequence is exact. Let $a=\sum_i a_i{a}_i' $ be an element of $T(A)$ such that $T(f(a)) = f(a)=0$. We have to prove that $a\in T(K)$. But any element $a_i \in A$ can be written as $a_i=k_i + s(b_i)$ for some $k_i \in K$ and $b_i \in B$ and, similarly, ${a}_i' = k_i' + s(b_i')$. Notice that $f(a)=0$ implies that $\sum_i b_i b_i' =0$. Hence, using the identity $xyxy=xy$ we find that \begin{eqnarray*} a &=& \sum_i k_ik_i' + s(b_i)k_i' + s(b_i')k_i + b_i b_i' \\ &=& \sum_i k_ik_i' + s(b_i)k_i' + s(b_i')k_i \\ &=& \sum_i k_ik_i' + (s(b_i)k_i')(s(b_i)k_i') + (s(b_i')k_i)(s(b_i')k_i). \end{eqnarray*} Since $K$ is a two-sided ideal of $A$, this shows that $a\in T(K)$. Notice also that the identity $xyxy=xy$ implies that the radical $T$ is idempotent so that $0\textrm{-}\ensuremath{\mathsf{Rng}}$ is a torsion-free subcategory of $\mathsf{Rng^*}$ by Theorem~\ref{torsiontheorem}. \end{enumerate} \end{examples} We conclude this section with some (counter)examples, to show the independence of the notions of protoadditivity, admissibility, semi-left-exactness and Barr-exactness (=the preservation of kernel pairs of regular epimorphisms), for a (normal epi)-reflector to a full subcategory. \begin{examples}\label{counterproto} \begin{enumerate} \item \emph{A torsion-free reflector which is not protoadditive.}\label{extfree} Consider the category $\ensuremath{\mathsf{Gp}}$ of groups and the subquasivariety ${\ensuremath{\mathsf{Gp}}}_{t.f.}$ of torsion-free groups (=groups satisfying, for all $n\geq 1$, the implication $x^n=1\Rightarrow x=1$). ${\ensuremath{\mathsf{Gp}}}_{t.f.}$ is easily seen to be a torsion-free subcategory of $\ensuremath{\mathsf{Gp}}$ with corresponding torsion subcategory consisting of all groups generated by elements of finite order. However, the reflector $F \colon \ensuremath{\mathsf{Gp}} \rightarrow {\ensuremath{\mathsf{Gp}}}_{t.f.} $ is not protoadditive. To see this, we shall give an example already considered in \cite{GJ} for a different purpose. 
Consider the infinite dihedral group $C_2 \ltimes \mathbb Z$, where the action of $C_2=\{1, c\}$ on the group of integers $\mathbb Z$ is given by $c \cdot z = -z$ and $1\cdot z=z, \, \forall z \in \mathbb Z$. The canonical injections of $\mathbb{Z}$ and $C_2$ and the projection on $C_2$ determine a split short exact sequence $$ \xymatrix{0 \ar[r] & {\mathbb Z} \ar[r] & C_2 \ltimes \mathbb Z \ar@<-.8 ex>[r]_-{} & C_2\ar@<-.8 ex>[l]_-{} \ar[r] & 0} $$ which is not preserved by $F$, since its image by $F$ is \[ \xymatrix{ {\mathbb Z} \ar[r] & 0 \ar@<-.8 ex>[r]_-{} & 0 \ar@<-.8 ex>[l]_-{} \ar[r] & 0.} \] Observe that this same split short exact sequence can be used to show that the abelianisation functor $\mathsf{ab}\colon \ensuremath{\mathsf{Gp}} \rightarrow \ensuremath{\mathsf{Ab}}$ is not protoadditive: while both $\mathbb Z$ and $C_2$ are abelian groups, $C_2 \ltimes \mathbb Z$ is not, and one concludes via Proposition~\ref{splitext}. \item \emph{A protoadditive reflector which is not admissible.} Consider the variety $\ensuremath{\mathsf{Ab}}$ of abelian groups and the quasivariety $\ensuremath{\mathcal{F}}$ of abelian groups determined by the implication ($4x=0 \Rightarrow 2x=0$). The reflector $F \colon \ensuremath{\mathsf{Ab}} \rightarrow \ensuremath{\mathcal{F}}$ is additive, thus in particular protoadditive. However, $F$ is not admissible (with respect to surjective homomorphisms). Let $C_n$ denote the cyclic group of order $n$ ($n\geq 1$) and $\ensuremath{\mathbb{Z}}$ the group of integers. Then consider the reflection unit $\eta_{C_4}\colon C_4\rightarrow F(C_4)=C_2$ and the surjective homomorphism $\ensuremath{\mathbb{Z}}\rightarrow C_2$ in $\ensuremath{\mathcal{F}}$, and note that their pullback (the left hand square below) is sent to the right hand square below, which is not a pullback: \[ \xymatrix{ C_4\times_{C_2}\ensuremath{\mathbb{Z}} \ar[r] \ar[d] & C_4 \ar[d] & C_4\times_{C_2}\ensuremath{\mathbb{Z}} \ar[r] \ar[d] & C_2 \ar@{=}[d] \\ \ensuremath{\mathbb{Z}} \ar[r] & C_2 & \ensuremath{\mathbb{Z}} \ar[r] & C_2} \] \item\label{Boolean} \emph{A Barr-exact admissible reflector which is not protoadditive.} Consider the variety $\mathsf{Rng}$ of nonassociative nonunital rings and its subvariety $\mathsf{Boole}$ of nonassociative Boolean rings, determined by the identity $x^2=x$. Since the reflector $I \colon \mathsf{Rng} \rightarrow \mathsf{Boole}$ sends groupoids to groupoids (by Lemma \ref{Marino}) and $\mathsf{Boole}$ is an arithmetical category (which means that every internal groupoid in $\mathsf{Boole}$ is an equivalence relation---see Example $2.9.13$ in \cite{BB}), $I$ is Barr-exact. However, $I$ is not protoadditive. To see this, consider the split short exact sequence \[ \xymatrix{ 0 \ar[r] & C_2 \ar[r]^-{i_2} & C_2 \ltimes C_2 \ar@<-.8 ex>[r]_-{p_1} & C_2 \ar[r] \ar@<-.8 ex>[l]_-{i_1}& 0 } \] in $\mathsf{Rng}$, where $C_2= {\mathbb Z} / 2 {\mathbb Z}$, the addition in the ring $C_2 \ltimes C_2$ is defined by $(a,b) +(c,d)= (a+c, b+d)$, the multiplication by $(a,b) \cdot (c,d) = (ac, bc+bd)$, and the morphisms $i_1$ and $i_2$ are the canonical injections and $p_1$ the canonical projection on the first component. While $C_2$ is Boolean, $C_2 \ltimes C_2$ is not, since $(1,1)\cdot (1,1)= (1,1+1)= (1,0)$. Hence, $\mathsf{Boole}$ is not closed in $\mathsf{Rng}$ under split extensions, and we conclude via Proposition \ref{splitext} that $I$ is not protoadditive. 
\item\emph{A protoadditive torsion-free reflection which is not Barr-exact.} The reflector $\pi_0 \colon \ensuremath{\mathsf{Gpd}}(\ensuremath{\mathcal{A}}) \rightarrow \ensuremath{\mathcal{A}}$ from Example \ref{exproto}.\ref{exgroupoids} is not Barr-exact, in general. For instance, if $\ensuremath{\mathcal{A}}=\ensuremath{\mathsf{Gp}}$ and $A$ is an abelian group, then the diagram \[ \xymatrix{ A\times A \ar[r]^-{-} \ar@<1.2 ex>[d]^{\pi_2}\ar@<-1.2 ex>[d]_{\pi_1} & A \ar@<1.2 ex>[d] \ar@<-1.2 ex>[d] \\ A \ar[u]|{\delta} \ar[r] & 0,\ar[u] } \] where the upper horizontal morphism $- \colon A \times A \rightarrow A$ sends a pair $(a_1,a_2)$ to the difference $a_1-a_2$, $\pi_1$ and $\pi_2$ are the product projections and $\delta$ the diagonal morphism, is a regular epimorphism of internal groupoids in $\ensuremath{\mathsf{Gp}}$ whose kernel pair is not preserved by the functor $\pi_0$. \item Another example of the same kind is the reflector $I \colon {\mathsf{HComp}^{\mathbb{T}} }\rightarrow {\mathsf{TotDis}^{\mathbb{T}}}$ from Example \ref{exproto}.\ref{exdisc}: $\mathsf{TotDis}^{\mathbb{T}}$ is a torsion-free subcategory of $\mathsf{HComp}^{\mathbb{T}}$ and $I$ is protoadditive, but not Barr-exact. Indeed, if $I$ were Barr-exact, then it would preserve short exact sequences (i.e.~it would be a protolocalisation in the sense of \cite{BCGS}) since the kernel of any morphism can be obtained via the kernel of one of the kernel pair projections (as in the proof of Lemma \ref{compositeisnormal }), and this latter kernel is preserved because $I$ is protoadditive. However, the short exact sequence \[ \xymatrix{ 0 \ar[r] & \{-1,1\} \ar[r] & S^1 \ar[r] & S^1/\{-1,1\} \ar[r] & 0 } \] in the category of compact Hausdorff groups, where $S^1$ is the unit circle group equipped with the topology induced by the Euclidean topology on ${\mathbb R}^2$, is not preserved, since both $S^1$ and $S^1/\{-1,1\}$ are connected while $\{-1,1\}$ is not. \item Other examples of this kind are provided by cohereditary (=the torsion-free part is closed under quotients) torsion theories in the abelian context whose corresponding reflector is not a localisation. \item\emph{An additive admissible reflector which is not torsion-free.} Consider the variety $\ensuremath{\mathsf{Ab}}$ of abelian groups, and the Burnside variety $\mathsf{B}_2$ of exponent $2$: $\mathsf{B}_2$ consists of all abelian groups $A$ such that $a+a=0$ for any $a\in A$. Then the reflector $\ensuremath{\mathsf{Ab}}\rightarrow\mathsf{B}_2$ is additive and admissible (with respect to regular epimorphisms) \cite{JK}, but $\mathsf{B}_2$ is not a torsion-free subcategory of $\ensuremath{\mathsf{Ab}}$, since the induced radical $T\colon \ensuremath{\mathsf{Ab}}\rightarrow\ensuremath{\mathsf{Ab}}$ is not idempotent: for instance, by considering the cyclic group $C_4$ we see that $T(C_4)=C_2$ while $T(T(C_4))=T(C_2)=0$.
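The failure of idempotency can also be read off from an explicit description of the radical (a brief sketch, assuming the usual description of this reflection): the reflector $\ensuremath{\mathsf{Ab}}\rightarrow\mathsf{B}_2$ sends an abelian group $A$ to the quotient $A/2A$, so that the associated radical is given by \[ T(A)=2A=\{a+a \mid a\in A\}, \] and indeed $T(C_4)=\{0,2\}\cong C_2$, while $T(T(C_4))=2C_2=0$.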
\end{enumerate} \end{examples} \section{Torsion-free subcategories with a protoadditive reflector}\label{coveringmorphisms} In \cite{CJKP} Carboni, Janelidze, Kelly and Par\'e considered for every factorisation system $(\mathbb E,\mathbb M)$ on a finitely complete category $\ensuremath{\mathcal{A}}$ classes $\mathbb E'$ and $\mathbb M^*$ of morphisms in $\ensuremath{\mathcal{A}}$ defined as follows: $\mathbb E'$ consists of all morphisms $f$ that are \emph{stably in $\mathbb E$}, i.e.~ every pullback of $f$ is in $\mathbb E$; while $\mathbb M^*$ consists of all morphisms $f\colon A\rightarrow B$ that are \emph{locally in $\mathbb M$}, this meaning that there exists an effective descent morphism $p\colon E\rightarrow B$ for which the pullback $p^*(f)$ is in $\mathbb M$. Thus, if $(\mathbb E,\mathbb M)$ is the factorisation system associated with an admissible (semi-left-exact) reflection, then $\mathbb M^*$ consists of all central extensions with respect to the corresponding absolute Galois structure. While it is always true that $\mathbb E'\subseteq (\mathbb M^*)^{\uparrow}$, one does not necessarily have that $(\mathbb E',\mathbb M^*)$ is a factorisation system. However, this does happen to be the case in a number of important examples. For instance, when $(\mathbb E,\mathbb M)$ is the factorisation system on the category of compact Hausdorff spaces associated with the reflective subcategory of totally disconnected spaces: in this case, stabilising $\mathbb E$ and localising $\mathbb M$ yields the Eilenberg and Whyburn monotone-light factorisation for maps of compact Hausdorff spaces \cite{Eilenberg,Whyburn}. Another example given in \cite{CJKP} is that of a hereditary torsion theory $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ in an abelian category $\ensuremath{\mathcal{A}}$ with the property that every object of $\ensuremath{\mathcal{A}}$ is the quotient of an object of $\ensuremath{\mathcal{F}}$. Also in this case the associated factorisation system $(\mathbb E,\mathbb M)$ induces a factorisation system $(\mathbb E',\mathbb M^*)$. In fact, as we shall explain in \cite{EG9}, this remains true when the category $\ensuremath{\mathcal{A}}$ is merely homological and $\ensuremath{\mathcal{T}}$ is not asked to be hereditary. Even if $(\mathbb E',\mathbb M^*)$ fails to be a factorisation system, it might still be ``partially'' so, in the sense that $\ensuremath{\mathcal{A}}$ admits ``monotone-light'' factorisations, but only for morphisms of a particular class. We shall prove in this section that this is the case for the class of effective descent morphisms in a homological category $\ensuremath{\mathcal{A}}$, if $(\mathbb E,\mathbb M)$ is the factorisation system associated with a torsion-free subcategory $\ensuremath{\mathcal{F}}$ whose reflector $F\colon \ensuremath{\mathcal{A}}\rightarrow \ensuremath{\mathcal{F}}$ is protoadditive (and such that condition $(N)$ below is satisfied). Let $\ensuremath{\mathcal{A}}$ be a homological category, $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ a torsion theory in $\ensuremath{\mathcal{A}}$ with reflector $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$ and coreflector $T\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{T}}$, $(\mathbb E,\mathbb M)$ the associated reflective factorisation system and $(\mathbb E',\mathbb M^*)$ as defined above. 
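For a first concrete illustration of the kind of factorisation we are after (a standard example, recalled only for orientation): in $\ensuremath{\mathsf{Ab}}$, equipped with the classical torsion theory of torsion and torsion-free abelian groups, any surjective homomorphism $f\colon A\rightarrow B$ with kernel $K$ factors as \[ A\rightarrow A/T(K)\rightarrow B, \] where $T(K)$ denotes the torsion subgroup of $K$: the first factor is a surjection whose kernel $T(K)$ is a torsion group, while the second factor has torsion-free kernel $K/T(K)$. The classes $\overline{\mathbb E}$ and $\overline{\mathbb M}$ introduced next axiomatise this pattern.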
We shall also consider the classes $\overline{\mathbb E}$ and $\overline{\mathbb M}$ defined as follows: $\overline{\mathbb E}$ is the class of all normal epimorphisms $f$ in $\ensuremath{\mathcal{A}}$ such that $K[f]\in\ensuremath{\mathcal{T}}$; $\overline{\mathbb M}$ is the class of all morphisms $f$ in $\ensuremath{\mathcal{A}}$ such that $K[f]\in\ensuremath{\mathcal{F}}$. As we shall see, it is always true that $\overline{\mathbb E}\subseteq \overline{\mathbb M}^{\uparrow}$, and it is ``often'' the case that $(\overline{\mathbb E},\overline{\mathbb M})$ is a factorisation system. In the present section, we shall mostly be concerned with comparing $(\mathbb E',\mathbb M^*)$ with $(\overline{\mathbb E},\overline{\mathbb M})$. In particular, we shall be interested in conditions on the torsion theory $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ under which one has for any effective descent morphism $f$ in $\ensuremath{\mathcal{A}}$ that $f\in\mathbb M^*$ if and only if $f\in\overline{\mathbb M}$. We shall consider the conditions \begin{enumerate} \item[(P)] the reflection $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$ is protoadditive; \item[(N)]\label{conditionPageN} for any morphism $f\colon A\rightarrow B$ in $\ensuremath{\mathcal{A}}$, $\ensuremath{\mathsf{ker\,}} (f)\circ t_{K[f]}\colon T(K[f])\rightarrow A$ is a normal monomorphism. \end{enumerate} \begin{remark}\label{conditionN} Recall from Section \ref{protoadditivesection} that condition $(P)$ is satisfied if and only if $\ensuremath{\mathcal{T}}$ is $\ensuremath{\mathcal{M}}$-hereditary, for $\ensuremath{\mathcal{M}}$ the class of protosplit monomorphisms in $\ensuremath{\mathcal{A}}$. Condition $(N)$ is trivially satisfied if $\ensuremath{\mathcal{A}}$ is an abelian category, and this is also the case if $\ensuremath{\mathcal{A}}$ is the category of groups: it suffices to observe that the inner automorphisms on $A$ restrict to $T(K[f])$, for any $f\colon A\rightarrow B$. If $\ensuremath{\mathcal{A}}$ is homological and $\ensuremath{\mathcal{T}}$ is \emph{quasi-hereditary} in the sense of \cite{GR}, then $(N)$ is satisfied, since in this case we have that $T(K[f])=K[f]\cap T(A)$ for any morphism $f\colon A\rightarrow B$. From Lemma \ref{compositeisnormal } and Remark \ref{idealtopological} we know that if $\ensuremath{\mathcal{A}}$ is either semi-abelian or a category of topological semi-abelian algebras then $(P)$ implies $(N)$. \end{remark} \begin{remark} When studying the factorisation system associated with an absolute Galois structure it is natural to consider also the condition (C) ``$\ensuremath{\mathcal{F}}$ covers $\ensuremath{\mathcal{A}}$'': for any object $B\in\ensuremath{\mathcal{A}}$, there exists an effective descent morphism $E\rightarrow B$ such that $E\in\ensuremath{\mathcal{F}}$. This condition has been considered before by several authors, for instance, in \cite{CJKP,JMT1, GJ,GR}. Several of the results in this section established under conditions $(N)$ and $(P)$ have a corresponding ``absolute'' formulation where condition $(C)$ replaces $(P)$: since this article is mainly concerned with condition $(P)$ we decided to leave these developments for another article \cite{EG9}. \end{remark} We begin with a characterisation of the normal extensions associated with a torsion theory $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ in a homological category $\ensuremath{\mathcal{A}}$ satisfying condition $(P)$ (see also \cite{GJ, GR}). 
Let us write $\Gamma_{\ensuremath{\mathcal{F}}}$ for the induced absolute Galois structure on $\ensuremath{\mathcal{A}}$. \begin{proposition}\label{protocentral} Assume that $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ satisfies condition $(P)$. Then for any effective descent morphism $f\colon A\rightarrow B$ in $\ensuremath{\mathcal{A}}$ the following conditions are equivalent: \begin{enumerate} \item $f \colon A \rightarrow B$ is a normal extension with respect to $\Gamma_{\ensuremath{\mathcal{F}}}$; \item $f \colon A \rightarrow B$ is a central extension with respect to $\Gamma_{\ensuremath{\mathcal{F}}}$; \item $K[f] \in \ensuremath{\mathcal{F}}$. \end{enumerate} \end{proposition} \begin{proof} $(1)\Rightarrow (2)$ is true by definition. $(2)\Rightarrow (3)$ is well known, but let us recall the argument. Consider the following diagram \eqref{coveringdiagram} in $\ensuremath{\mathcal{A}}$, where the right hand square is a pullback and $p$ is an effective descent morphism: \begin{equation}\label{coveringdiagram} \vcenter{\xymatrix{ F(P) \ar[d]_{F(p^*(f))} & P \ar@{}[rd]|<<{\copy\pullbackbox} \ar[d]_{p^*(f)}\ar[l]_-{\eta_P} \ar[r] & A \ar[d]^f\\ F(E) & E \ar[r]_p \ar[l]^-{\eta_E} & B}} \end{equation} If the left hand square is a pullback as well, then \[ K[F(p^*(f))] \cong K[p^*(f)] \cong K[f] \] and it follows that $K[f]\in\ensuremath{\mathcal{F}}$, since $K[F(p^*(f))]\in\ensuremath{\mathcal{F}}$ as the kernel of the morphism $F(p^*(f))$ in $\ensuremath{\mathcal{F}}$. $(3)\Rightarrow (1)$ Consider an effective descent morphism $f\colon A\rightarrow B$ with $K[f]\in \ensuremath{\mathcal{F}}$ and its kernel pair $(\pi_1,\pi_2)\colon R[f]\rightarrow A$. By applying the reflector $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$, we obtain the following commutative diagram of split short exact sequences in $\ensuremath{\mathcal{A}}$: \[ \xymatrix@=35pt{0 \ar[r]& K[f] \ar@{=}[d]_{\eta_{K[f]}} \ar[r] & R[f] \ar[d]_{\eta_{R[f]}} \ar@<-.8 ex> [r]_-{\pi_1} & A \ar[d]^{\eta_A} \ar[r] \ar@<-.8ex>[l] &0 \\ 0 \ar[r]& F(K[f]) \ar[r]_{F(k)} & F(R[f]) \ar@<-.8 ex> [r]_-{F(\pi_1)} & F(A) \ar@<-.8ex>[l] \ar[r] &0.} \] Since $\eta_{K[f]}$ is an isomorphism, the right hand square is a pullback, and we conclude that $f$ is a normal extension. \end{proof} Next we show that, for \emph{every} torsion theory $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$, one has that $\overline{\mathbb E}\subseteq\overline{\mathbb M}^{\uparrow}$ -- or equivalently $\overline{\mathbb M}\subseteq \overline{\mathbb E}^{\downarrow}$: \begin{lemma}\label{orthogonal} For any pair of morphisms $e\in\overline{\mathbb E}$ and $m\in\overline{\mathbb M}$ we have that $e\downarrow m$. \end{lemma} \begin{proof} Consider a commutative square as in the right hand side of the diagram \[ \xymatrix{ K[e] \ar[r]^-{\ensuremath{\mathsf{ker\,}} (e)} \ar[d]_k & A \ar[r]^e \ar[d]_a & B \ar@{.>}[ld]\ar[d]^b\\ K[m] \ar[r]_-{\ensuremath{\mathsf{ker\,}} (m)} & C \ar[r]_m & D} \] and assume that $e\in\overline{\mathbb E}$ and $m\in\overline{\mathbb M}$. Since, by assumption, $K[e]\in\ensuremath{\mathcal{T}}$ and $K[m]\in\ensuremath{\mathcal{F}}$, we see that $k$ is the zero morphism. It follows that also $a\circ \ensuremath{\mathsf{ker\,}} (e)=\ensuremath{\mathsf{ker\,}} (m)\circ k$ is zero. Since $e$ was assumed to be a normal epimorphism, it is the cokernel of its kernel $\ensuremath{\mathsf{ker\,}} (e)$, and there exists a unique dotted arrow making the diagram commute. This shows that $e\downarrow m$.
\end{proof} By further assuming that condition $(N)$ is satisfied, we get a stable factorisation system: \begin{proposition}\label{inducedfactorisation} If $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ satisfies condition $(N)$, then $(\overline{\mathbb E},\overline{\mathbb M})$ is a stable factorisation system on $\ensuremath{\mathcal{A}}$, and $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ is uniquely determined by $(\overline{\mathbb E},\overline{\mathbb M})$. In fact, there is a bijective correspondence between \begin{enumerate} \item the torsion theories in $\ensuremath{\mathcal{A}}$ satisfying condition $(N)$; \item the stable factorisation systems $(\mathbb E,\mathbb M)$ on $\ensuremath{\mathcal{A}}$ such that every $e\in\mathbb E$ is a normal epimorphism. \end{enumerate} \end{proposition} \begin{proof} Suppose that $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ is a torsion theory in $\ensuremath{\mathcal{A}}$ satisfying condition $(N)$. Then any morphism $f\colon A\rightarrow B$ in $\ensuremath{\mathcal{A}}$ with kernel $K$ factorises as \[ \xymatrix{ A \ar[r]^-{q_{T(K)}} & A/T(K) \ar[r]^-m &B,} \] where $q_{T(K)}$ is the cokernel of the composite $\ensuremath{\mathsf{ker\,}}(f)\circ t_K\colon T(K)\rightarrow A$, and $t_K\colon T(K)\rightarrow K$ is the coreflection counit. Clearly, $q_{T(K)}$ lies in $\overline{\mathbb E}$, and we also have that $m\in\overline{\mathbb M}$ since from the ``double quotient'' isomorphism theorem (see Theorem $4.3.10$ in \cite{BB}) it follows that \[ K[m]=K[\ensuremath{\mathsf{coim\,}}(m)\colon A/T(K)\rightarrow A/K]=K/T(K)=F(K)\in\ensuremath{\mathcal{F}}, \] where $\ensuremath{\mathsf{coim\,}}(m)$ denotes the normal epi part of the (normal epi)-mono factorisation of $m$. Here we used that $I[m]=I[f]=A/K$ by the uniqueness of the (normal epi)-mono factorisation of $f$. Thus we see that $(\overline{\mathbb E},\overline{\mathbb M})$ is a factorisation system since any morphism of $\ensuremath{\mathcal{A}}$ admits an $(\overline{\mathbb E},\overline{\mathbb M})$-factorisation and $\overline{\mathbb E}\subseteq\overline{\mathbb M}^{\uparrow}$ by Lemma~\ref{orthogonal}. To see that the class $\overline{\mathbb E}$ is pullback-stable, it suffices to observe that normal epimorphisms are pullback-stable in the homological category $\ensuremath{\mathcal{A}}$, and that pulling back induces an isomorphism between kernels. Conversely, given a stable factorisation system $(\mathbb E,\mathbb M)$ on $\ensuremath{\mathcal{A}}$ such that every $e\in\mathbb E$ is a normal epimorphism, we consider the full subcategories $\ensuremath{\mathcal{T}}$ and $\ensuremath{\mathcal{F}}$ of $\ensuremath{\mathcal{A}}$ defined on objects via \[ \ensuremath{\mathcal{T}}=\{T\in\ensuremath{\mathcal{A}} \ | \ T\rightarrow 0\in\mathbb E \}; \ \ \ensuremath{\mathcal{F}}=\{F\in\ensuremath{\mathcal{A}} \ | \ F\rightarrow 0\in\mathbb M \}. 
\] Then $\ensuremath{\mathrm{Hom}}_{\ensuremath{\mathcal{A}}}(T,F)=\{0\}$ for any $T\in\ensuremath{\mathcal{T}}$ and $F\in\ensuremath{\mathcal{F}}$ since the assumption that $(\mathbb E,\mathbb M)$ is a factorisation system implies, for any morphism $T\rightarrow F$, the existence of the dotted arrow making the following diagram commute: \[ \xymatrix{ T \ar[r] \ar[d] & 0\ar[d] \ar@{.>}[ld]\\ F \ar[r] & 0.} \] Moreover, if for an object $A\in\ensuremath{\mathcal{A}}$, $m\circ e\colon A\rightarrow I\rightarrow 0$ is the $(\mathbb E,\mathbb M)$-factorisation of the unique morphism $A\rightarrow 0$, then \[ \xymatrix{ 0 \ar[r] & K[e] \ar[r] & A \ar[r]^e & I\ar[r] & 0} \] is a short exact sequence with $I\in\ensuremath{\mathcal{F}}$ and also $K[e]\in\ensuremath{\mathcal{T}}$, since $K[e]\rightarrow 0\in\mathbb E$ as the pullback of $e$ along the unique morphism $0\rightarrow I$. We conclude that $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ is a torsion theory. Note that the radical $T\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{A}}$ is defined on objects $A\in\ensuremath{\mathcal{A}}$ as $T(A)=K[e]$, where $e$ is the ``$\mathbb E$-part'' of the $(\mathbb E,\mathbb M)$-factorisation of the morphism $A\rightarrow 0$. To see that $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ satisfies condition $(N)$, consider a morphism $f\colon A\rightarrow B$ with $(\mathbb E,\mathbb M)$-factorisation $f=m\circ e$ and kernel $K$. Let $m'\circ e'$ be the $(\mathbb E,\mathbb M)$-factorisation of the unique morphism $K\rightarrow 0$. Then there is a unique morphism $I'\rightarrow I$ such that the diagram below---in which the outer rectangle is a pullback---commutes: \[ \xymatrix{ K \ar[d] \ar[r]^{e'} & I' \ar[r]^{m'} \ar@{.>}[d] & 0 \ar[d] \\ A \ar[r]_e & I \ar[r]_m & B} \] The uniqueness of the $(\mathbb E,\mathbb M)$-factorisation of $K\rightarrow 0$, together with the pullback-stability of both classes $\mathbb E$ and $\mathbb M$, imply that the two squares are pullbacks. Consequently, we have that $\ensuremath{\mathsf{ker\,}} (f)\circ \ensuremath{\mathsf{ker\,}} (e')=\ensuremath{\mathsf{ker\,}} (e)$ and this is a normal monomorphism, as desired. Clearly, if $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ is a torsion theory satisfying $(N)$, then $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ coincides with the torsion theory induced by $(\overline{\mathbb E},\overline{\mathbb M})$. On the other hand, consider a stable factorisation system $(\mathbb E,\mathbb M)$ on $\ensuremath{\mathcal{A}}$ such that every $e\in\mathbb E$ is a normal epimorphism. Let $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ be the induced torsion theory and $(\overline{\mathbb E},\overline{\mathbb M})$ the associated factorisation system, and consider a normal epimorphism $e$. Then $e$ is the cokernel of its kernel, i.e.~the following square is both a pullback and a pushout: \[ \xymatrix{ K[e] \ar[r] \ar[d] & 0 \ar[d]\\ A \ar[r]_e & B} \] Using that the class $\mathbb E$ is pullback-stable (by assumption) as well as pushout-stable (since $(\mathbb E,\mathbb M)$ is a factorisation system), we see that \[ e\in\mathbb E \Leftrightarrow K[e]\rightarrow 0\in\mathbb E \Leftrightarrow K[e]\in\ensuremath{\mathcal{T}}\Leftrightarrow e\in\overline{\mathbb E} \] and it follows that $\mathbb E=\overline{\mathbb E}$.
Since both $(\mathbb E,\mathbb M)$ and $(\overline{\mathbb E},\overline{\mathbb M})$ are (pre)factorisation systems, this implies that $(\mathbb E,\mathbb M)=(\overline{\mathbb E},\overline{\mathbb M})$. \end{proof} The following observation will be needed: \begin{lemma}\label{stablekernel} Given a torsion theory $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ one always has that $\overline{\mathbb E}\subseteq \mathbb E'$. \end{lemma} \begin{proof} It has already been explained in the proof of Proposition \ref{inducedfactorisation} that the class $\overline{\mathbb E}$ is pullback-stable. Hence, to prove that $\overline{\mathbb E}\subseteq \mathbb E'$, it suffices to show that $\overline{\mathbb E}\subseteq \mathbb E$. Consider, therefore, a normal epimorphism $f$ in $\ensuremath{\mathcal{A}}$ with $K[f]\in\ensuremath{\mathcal{T}}$. Then $f$ is the cokernel of its kernel $\ensuremath{\mathsf{ker\,}} (f)$ and, consequently, $F(f)$ is the cokernel (in $\ensuremath{\mathcal{F}}$) of $F(\ensuremath{\mathsf{ker\,}} (f))$. Since $F(K[f])=0$ by assumption, this implies that $F(f)$ is an isomorphism. \end{proof} We are now ready to prove what we announced at the beginning of this section concerning the existence of ``monotone-light'' factorisations. We write $\ensuremath{\mathsf{EffDes}}(\ensuremath{\mathcal{A}})$ (resp. $\ensuremath{\mathsf{NExt}}_{\ensuremath{\mathcal{F}}}(\ensuremath{\mathcal{A}})$) for the full subcategory of the arrow category $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$ determined by all effective descent morphisms in $\ensuremath{\mathcal{A}}$ (resp.~all normal extensions with respect to $\Gamma_{\ensuremath{\mathcal{F}}}$). \begin{theorem}\label{protofactorisation} If $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ is a torsion theory in $\ensuremath{\mathcal{A}}$ satisfying conditions $(P)$ and $(N)$, then the following properties hold: \begin{enumerate} \item $\ensuremath{\mathsf{NExt}}_{\ensuremath{\mathcal{F}}}(\ensuremath{\mathcal{A}})$ is a reflective subcategory of $\ensuremath{\mathsf{EffDes}}(\ensuremath{\mathcal{A}})$; \item normal extensions are stable under composition; \item any effective descent morphism $f\colon A\rightarrow B$ factors uniquely (up to isomorphism) as a composite $f=m\circ e$, where $e$ is stably in $\mathbb E$ and $m$ is a normal extension; moreover, this factorisation coincides with the $(\overline{\mathbb E},\overline{\mathbb M})$-factorisation of $f$. \end{enumerate} \end{theorem} \begin{proof} $(1)$ By Proposition \ref{inducedfactorisation}, the full subcategory of $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$ determined by the class $\overline{\mathbb M}$ is reflective in $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$: the reflection of a morphism $f$ with $(\overline{\mathbb E},\overline{\mathbb M})$-factorisation $f=m\circ e$ is given by $m$, with unit $e$. To see that this reflection restricts to a reflection $\ensuremath{\mathsf{EffDes}}(\ensuremath{\mathcal{A}})\rightarrow\ensuremath{\mathsf{NExt}}_{\ensuremath{\mathcal{F}}}(\ensuremath{\mathcal{A}})$, it suffices to consider Proposition \ref{protocentral} and to note that effective descent morphisms satisfy the strong right cancellation property.
$(2)$ follows from condition $(P)$ only: if $f\colon A\rightarrow B$ and $g\colon B\rightarrow C$ are normal extensions then $g\circ f$ is still an effective descent morphism, and there is a short exact sequence \[ \xymatrix{ 0\ar[r] & K[f] \ar[r] & K[g\circ f] \ar[r] & K[g] \ar[r] & 0} \] with $K[f]$ and $K[g]$ torsion-free by Proposition \ref{protocentral}. Since $\ensuremath{\mathcal{F}}$ is closed under extensions, $K[g\circ f]$ is torsion-free as well, so that $g\circ f$ is a normal extension, again by Proposition \ref{protocentral}. $(3)$ Let $f\colon A\rightarrow B$ be an effective descent morphism with $(\overline{\mathbb E},\overline{\mathbb M})$-factorisation $f=m\circ e$. Then $e$ is stably in $\mathbb E$ by Lemma \ref{stablekernel}, and it has already been remarked above in $(1)$ that $m$ is a normal extension. The uniqueness of this factorisation follows from the fact that $\mathbb E'\subseteq (\mathbb M^*)^{\uparrow}$. \end{proof} Before considering some examples, let us show that the assumption that the (normal epi)-reflection $F\colon \ensuremath{\mathcal{A}}\rightarrow \ensuremath{\mathcal{F}}$ is torsion-free in the above was crucial. Note that any split epimorphism, and in particular, any morphism $A\rightarrow 0$ is an effective descent morphism. \begin{proposition} Let $\ensuremath{\mathcal{A}}$ be a pointed category and $(\mathbb E,\mathbb M)$ the (pre)factorisation system on $\ensuremath{\mathcal{A}}$ associated with a given reflection $F\colon \ensuremath{\mathcal{A}}\rightarrow \ensuremath{\mathcal{F}}$ to a full replete subcategory $\ensuremath{\mathcal{F}}$. If, for every object $A\in\ensuremath{\mathcal{A}}$, the morphism $\tau\colon A\rightarrow 0$ admits a factorisation $\tau=m\circ e$ with $e\in \mathbb E'$ and $m\in \mathbb M^*$, then $F$ has stable units. Thus, in particular, $F$ has stable units whenever $(\mathbb E',\mathbb M^*)$ is a factorisation system. \end{proposition} \begin{proof} Let $A$ be an object of $\ensuremath{\mathcal{A}}$ and $\tau=m\circ e$ the $(\mathbb E',\mathbb M^*)$-factorisation of the morphism $\tau\colon A\rightarrow 0$, i.e.~$e\in \mathbb E'$ and $m\in \mathbb M^*$. Remark that $\tau$ factorises, alternatively, as in the right hand triangle \[ \xymatrix@=10pt{ A \ar[rr]^{\tau} \ar[rd]_e && 0 &&& A \ar[rr]^{\tau} \ar[rd]_{\eta_A} && 0 \\ & I \ar[ru]_{m}&&&&& F(A) \ar[ru] &} \] (where $\eta_A\colon A\rightarrow F(A)$ is the reflection unit) and notice that $\eta_A\in \mathbb E$ and $F(A)\rightarrow 0\in \mathbb M$. If we can prove that $\tau=m\circ e$ is also an $(\mathbb E,\mathbb M)$-factorisation, then both factorisations coincide (up to isomorphism) and it will follow that $\eta_A\in\mathbb E'$, as desired. Since $\mathbb E'\subseteq \mathbb E$ by definition, it will suffice to show that $m\in\mathbb M$. Since $m\in \mathbb M^*$, there exists in $\ensuremath{\mathcal{A}}$ an effective descent morphism $E\rightarrow 0$ such that the product projection $\pi_E\colon E\times I\rightarrow E$ lies in $\mathbb M$.
But this implies that also $m\in \mathbb M$, since $m$ appears as a pullback of $\pi_E$ in the diagram \[ \xymatrix{ I \ar[d]_m \ar[r] \ar@{}[rd]|<<{\copy\pullbackbox} & E\times I \ar[d]_{\pi_E} \ar[r] \ar@{}[rd]|<<{\copy\pullbackbox} & I \ar[d]^m\\ 0 \ar[r] & E \ar[r] & 0.} \] \end{proof} \begin{examples} \begin{enumerate} \item Recall from Example \ref{exproto}.\ref{exgroupoids} that any semi-abelian category appears, via the discrete equivalence relation functor $D\colon \ensuremath{\mathcal{A}}\rightarrow \ensuremath{\mathsf{Gpd}}(\ensuremath{\mathcal{A}})$, as a torsion-free subcategory of the category $\ensuremath{\mathsf{Gpd}}(\ensuremath{\mathcal{A}})$ of internal groupoids in $\ensuremath{\mathcal{A}}$. The corresponding torsion subcategory is the category $\ensuremath{\mathsf{ConnGpd(\Ac)}}$ of connected groupoids (see \cite{EG}). The connected components functor $\pi_0\colon \ensuremath{\mathsf{Gpd}}(\ensuremath{\mathcal{A}})\rightarrow\ensuremath{\mathcal{A}}$ is protoadditive, hence condition $(N)$ is satisfied by Lemma \ref{compositeisnormal }, since $\ensuremath{\mathsf{Gpd}}(\ensuremath{\mathcal{A}})$ is semi-abelian. We already know from \cite{G,EG} that the normal extensions are precisely the regular epic discrete fibrations. Here we add that every regular epimorphism $f\colon A\rightarrow B$ in $\ensuremath{\mathsf{Gpd}}(\ensuremath{\mathcal{A}})$ factorises, essentially uniquely, as $f=m\circ e$, where $e$ is a regular epimorphism with connected kernel and $m$ is a discrete fibration. Note that since $e\downarrow n$ for any $n\in \overline{\mathbb M}$, this is in particular true for any discrete fibration $n$. One says in this case that $e$ is \emph{final}, and the factorisation $f=m\circ e$ is the so-called \emph{comprehensive} factorisation of $f$ (see \cite{B}). \item The torsion theory $(\mathsf{ConnComp}^{\mathbb{T}},\mathsf{TotDis}^{\mathbb{T}})$ in the category $\mathsf{HComp}^{\mathbb{T}}$ of compact $\mathbb T$-algebras, for $\mathbb T$ a semi-abelian theory, considered in Example \ref{exproto}.\ref{exdisc}, satisfies condition $(P)$, and therefore also $(N)$, since $\mathsf{HComp}^{\mathbb{T}}$ is semi-abelian. Hence we obtain that the normal extensions are precisely the regular epimorphisms (=open surjective homomorphism) with a totally disconnected kernel, and any regular epimorphism $f$ of compact $\mathbb T$-algebras factorises as $f=m\circ e$, where $e$ is a regular epimorphism with a connected kernel, and $m$ a normal extension. If $\mathbb T$ is the theory of groups, then it is well known that as soon as the kernel of a continuous homomorphism $f\colon A\rightarrow B$ is connected (respectively, totally disconnected), then for \emph{any} element $b\in B$ the fibre $f^{-1}(b)$ over $b$ is connected (respectively, totally disconnected). This remains true for $\mathbb T$ an arbitrary semi-abelian theory (see \cite{BC2}). Consequently, the factorisation $f=m\circ e$ of a regular epimorphism of compact $\mathbb T$-algebras obtained above is just the classical monotone-light factorisation of the continuous map $f$. \item Recall from Example \ref{exproto}.\ref{exdisc} that the pair of categories $({\mathsf{Ind}}^{\mathbb T}, {\mathsf{Haus}}^{\mathbb T})$ of indiscrete semi-abelian algebras and of Hausdorff semi-abelian algebras forms an $\mathcal M$-hereditary torsion theory in the category $\mathsf{Top}^{\mathbb T}$ of topological semi-abelian algebras, where $\mathcal M$ is the class of \emph{regular} monomorphisms. 
By Theorem \ref{protoM} it follows that condition $(P)$ is satisfied, and then also $(N)$ is satisfied, as observed in Remark \ref{idealtopological}. The effective descent morphisms in ${\mathsf{Top}}^{\mathbb T}$ are the open surjective homomorphisms. Accordingly, any open surjective homomorphism $f\colon A\rightarrow B$ factors as $f=m\circ e$, where $e$ is an open surjective homomorphism with an \emph{indiscrete} kernel, and $m$ is an open surjective homomorphism with a \emph{Hausdorff} kernel. \item The hereditary torsion theory ($\mathsf{NilCRng},\mathsf{RedCRng}$) in the semi-abelian category $\mathsf{CRng}$ of commutative rings (Example \ref{exproto}.\ref{rings}) satisfies both conditions $(P)$ and $(N)$. Consequently, any surjective homomorphism $f\colon A\rightarrow B$ in $\mathsf{CRng}$ factors as $f=m\circ e$, where $e$ is a surjective homomorphism with a \emph{nilpotent} kernel, and $m$ is a normal extension, namely a surjective homomorphism with a \emph{reduced} kernel. \end{enumerate} \end{examples} \section{Derived torsion theories}\label{sectionderived} In the previous section, a torsion theory $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ in a homological category $\ensuremath{\mathcal{A}}$ satisfying conditions $(P)$ and $(N)$ was shown to induce a reflective subcategory $\ensuremath{\mathsf{NExt}}_{\ensuremath{\mathcal{F}}}(\ensuremath{\mathcal{A}})$ of normal extensions (with respect to the corresponding absolute Galois structure $\Gamma_{\ensuremath{\mathcal{F}}}$) in the category of effective descent morphisms $\ensuremath{\mathsf{EffDes}}(\ensuremath{\mathcal{A}})$. We shall prove in the present section that $\ensuremath{\mathsf{NExt}}_{\ensuremath{\mathcal{F}}}(\ensuremath{\mathcal{A}})$ is, in fact, a torsion-free subcategory of $\ensuremath{\mathsf{EffDes}}(\ensuremath{\mathcal{A}})$. This implies that $\ensuremath{\mathsf{NExt}}_{\ensuremath{\mathcal{F}}}(\ensuremath{\mathcal{A}})$ itself determines an admissible Galois structure $\Gamma_{\ensuremath{\mathsf{NExt}}_{\ensuremath{\mathcal{F}}}(\ensuremath{\mathcal{A}})}$. As we shall see, $\Gamma_{\ensuremath{\mathsf{NExt}}_{\ensuremath{\mathcal{F}}}(\ensuremath{\mathcal{A}})}$ in its turn gives rise to a torsion theory in the category of \emph{double extensions}, as defined below, whose torsion-free part consists of those double extensions that are normal with respect to it. Continuing, we shall obtain a chain of torsion theories in the categories of \emph{$n$-fold extensions} ($n\geq 1$)---and, accordingly, also a chain of admissible Galois structures---whose torsion-free part consists of the $n$-fold extensions that are normal with respect to the previous Galois structure in the chain. We shall call these induced torsion theories \emph{derived torsion theories} of $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$. The thus obtained chain of Galois structures should be compared to the Galois structures of so-called ``higher central extensions'' which have played a central role in recent developments in non-abelian homological algebra (see, in particular \cite{EGV}), and which will be considered in the next sections. 
It will become clear in what follows that if the category $\ensuremath{\mathcal{A}}$ is semi-abelian, and the torsion-free subcategory $\ensuremath{\mathcal{F}}$ is closed in $\ensuremath{\mathcal{A}}$ under regular quotients (in this case one speaks of a \emph{cohereditary} torsion theory), then the torsion-free parts of the derived torsion theories are exactly the categories of higher central extensions. We begin by proving that \emph{any} torsion theory $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ in a homological category $\ensuremath{\mathcal{A}}$ satisfying condition $(N)$ (but not necessarily $(P)$) induces a chain of torsion theories $(\ensuremath{\mathcal{T}}_n,\ensuremath{\mathcal{F}}_n)$ in the categories $\ensuremath{\mathsf{Arr}^{n}\!}(\ensuremath{\mathcal{A}})$ ($n\geq 1$). These will then be shown to restrict to the derived torsion theories in the categories of $n$-fold extensions mentioned above, when $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ moreover satisfies condition $(P)$, and every regular epimorphism in $\ensuremath{\mathcal{A}}$ is an effective descent morphism. Since, by Proposition \ref{inducedfactorisation}, any torsion theory satisfying condition $(N)$ induces a stable factorisation system $(\overline{\mathbb E},\overline{\mathbb M})$ such that every $e\in\overline{\mathbb E}$ is a normal epimorphism, it is natural to consider the following lemma: \begin{lemma}\label{inducedtt} Let $\ensuremath{\mathcal{A}}$ be a pointed category with kernels of normal epimorphisms. Any stable factorisation system $(\mathbb E,\mathbb M)$ on $\ensuremath{\mathcal{A}}$ for which $\mathbb E$ is contained in the class of normal epimorphisms induces a torsion theory $(\ensuremath{\mathcal{T}}_{\mathbb E},\ensuremath{\mathcal{F}}_{\mathbb M})$ in $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$. Here $\ensuremath{\mathcal{F}}_{\mathbb M}$ is the full subcategory of $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$ determined by $\mathbb M$, and $\ensuremath{\mathcal{T}}_{\mathbb E}$ the full subcategory of $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$ consisting of all $e\in \mathbb E$ of the form $e\colon T\rightarrow 0$. \end{lemma} \begin{proof} Let $f\colon A\rightarrow B$ be a morphism in $\ensuremath{\mathcal{A}}$ with $(\mathbb E,\mathbb M)$-factorisation $f=m\circ e$. Since, by assumption, $e$ is a normal epimorphism, the following diagram is a short exact sequence in $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$: \[ \xymatrix{ 0 \ar[r] & K[e] \ar[d] \ar[r] & A \ar[r]^-{e} \ar[d]_f & C\ar[d]^{m} \ar[r] & 0\\ 0 \ar[r] & 0 \ar[r] & B \ar@{=}[r] & B \ar[r] & 0.} \] Moreover, the morphism $K[e]\rightarrow 0$ lies in $\mathbb E$ since it is the pullback of $e$ along $0\rightarrow C$; and $m$ lies in $\mathbb M$, by assumption. Now consider a morphism $t\rightarrow f$ in $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$ with $t\colon T\rightarrow 0$ in $\mathbb E$ and $f\colon A\rightarrow B$ in $\mathbb M$: \[ \xymatrix{ T \ar[r] \ar[d]_t & A \ar[d]^f\\ 0 \ar@{.>}[ru] \ar[r] & B.} \] Since $t\downarrow f$, there exists the dotted arrow making the diagram commutative, and it follows that $t\rightarrow f$ is the zero morphism, as desired. We can conclude that $(\ensuremath{\mathcal{T}}_{\mathbb E},\ensuremath{\mathcal{F}}_{\mathbb M})$ is a torsion theory in $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$.
\end{proof} Combining Proposition \ref{inducedfactorisation} with Lemma \ref{inducedtt}, we obtain the following: \begin{proposition}\label{firstderivedtt} Let $\ensuremath{\mathcal{A}}$ be a homological category. Any torsion theory $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ in~$\ensuremath{\mathcal{A}}$ satisfying condition $(N)$ induces a torsion theory $(\ensuremath{\mathcal{T}}_1,\ensuremath{\mathcal{F}}_1)$ in $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$ which again satisfies $(N)$. Here $\ensuremath{\mathcal{F}}_1$ is the full subcategory of $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$ consisting of all morphisms $f$ with $K[f]\in\ensuremath{\mathcal{F}}$, and $\ensuremath{\mathcal{T}}_1$ is the full subcategory of $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$ of all morphisms $T\rightarrow 0$ with $T\in\ensuremath{\mathcal{T}}$. If $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ satisfies, besides condition $(N)$, also condition $(P)$, then $(\ensuremath{\mathcal{T}}_1,\ensuremath{\mathcal{F}}_1)$ satisfies $(P)$ as well. \end{proposition} \begin{proof} By Proposition \ref{inducedfactorisation}, $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ induces a factorisation system $(\overline{\mathbb E},\overline{\mathbb M})$, which in its turn gives rise to a torsion theory $(\ensuremath{\mathcal{T}}_{\overline{\mathbb E}},\ensuremath{\mathcal{F}}_{\overline{\mathbb M}})$, by Lemma \ref{inducedtt}. It follows immediately from the definitions that $(\ensuremath{\mathcal{T}}_1,\ensuremath{\mathcal{F}}_1)=(\ensuremath{\mathcal{T}}_{\overline{\mathbb E}},\ensuremath{\mathcal{F}}_{\overline{\mathbb M}})$. To see that the torsion theory $(\ensuremath{\mathcal{T}}_1,\ensuremath{\mathcal{F}}_1)$ satisfies condition $(N)$, consider a morphism \[ \xymatrix{ A \ar[r]^f\ar[d]_a & B \ar[d]^b \\ A' \ar[r]_{f'} & B'.} \] in $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$ with kernel $k\colon K[f]\rightarrow K[f']$. Then \[ K[k]=K[a]\cap K[f]=K\big[(a,f)\colon A\rightarrow A'\times B\big] \] so that $\ensuremath{\mathsf{ker\,}} (k)\circ t_K \colon T(K[k])\rightarrow A$ is a normal monomorphism by condition $(N)$ of $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$. Hence, the induced morphism $T_1(k)\rightarrow a$ is a normal monomorphism in $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$, and it follows that $(\ensuremath{\mathcal{T}}_1,\ensuremath{\mathcal{F}}_1)$ satisfies condition $(N)$. Clearly, if the torsion theory $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ is $\ensuremath{\mathcal{M}}$-hereditary, for $\ensuremath{\mathcal{M}}$ the class of protosplit monomorphisms (i.e.~if it satisfies condition $(P)$), then so is $(\ensuremath{\mathcal{T}}_1,\ensuremath{\mathcal{F}}_1)$. \end{proof} By repeatedly applying the above proposition, one obtains, for every $n\geq 1$, a torsion theory $(\ensuremath{\mathcal{T}}_n,\ensuremath{\mathcal{F}}_n)$ in the (homological) category $\ensuremath{\mathsf{Arr}^{n}\!}(\ensuremath{\mathcal{A}})$ of $n$-fold morphisms in $\ensuremath{\mathcal{A}}$. 
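To make the first step of this iteration explicit (a sketch obtained by unwinding the definitions, using the computation of kernels in $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$ from the proof above): a commutative square \[ \xymatrix{ A \ar[r]^f\ar[d]_a & B \ar[d]^b \\ A' \ar[r]_{f'} & B'} \] lies in $\ensuremath{\mathcal{F}}_2$ precisely when the object $K[f]\cap K[a]=K\big[(f,a)\colon A\rightarrow B\times A'\big]$ lies in $\ensuremath{\mathcal{F}}$, while it lies in $\ensuremath{\mathcal{T}}_2$ precisely when $B$, $A'$ and $B'$ are zero objects and $A$ lies in $\ensuremath{\mathcal{T}}$.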
We shall write $F_n$ for the reflection $\ensuremath{\mathsf{Arr}^{n}\!}(\ensuremath{\mathcal{A}})\rightarrow\ensuremath{\mathcal{F}}_n$, $T_n$ for the coreflection $\ensuremath{\mathsf{Arr}^{n}\!}(\ensuremath{\mathcal{A}})\rightarrow\ensuremath{\mathcal{T}}_n$, $\eta^{n}$ for the unit $1_{\ensuremath{\mathsf{Arr}^{n}\!}(\ensuremath{\mathcal{A}})}\Rightarrow F_n$ and $t^n$ for the counit $T_n\Rightarrow 1_{\ensuremath{\mathsf{Arr}^{n}\!}(\ensuremath{\mathcal{A}})}$. Note that an $n$-fold morphism $A$ in $\ensuremath{\mathcal{A}}$ (for $n\geq 1$) determines a commutative $n$-dimensional cube in $\ensuremath{\mathcal{A}}$. We shall sometimes write $a_i$ ($1\leq i\leq n$) for the ``initial'' ribs of this cube. We denote by $\iota$ the functor $\ensuremath{\mathcal{A}}\rightarrow \ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$ that sends an object $A\in\ensuremath{\mathcal{A}}$ to the unique morphism $A\rightarrow 0$. For $n\geq 1$, we write $\iota^n$ for the composite functor $\iota\circ \dots \circ\iota\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathsf{Arr}^{n}\!}(\ensuremath{\mathcal{A}})$. As before, we denote by $\ensuremath{\mathsf{EffDes}}(\ensuremath{\mathcal{A}})$ the category of effective descent morphisms in $\ensuremath{\mathcal{A}}$. If $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ satisfies conditions $(P)$ and $(N)$, then by Proposition \ref{protocentral} (and by the strong right cancellation property of effective descent morphisms) the reflection $F_1\colon \ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})\rightarrow\ensuremath{\mathcal{F}}_1$ restricts to a reflection $\ensuremath{\mathsf{EffDes}}(\ensuremath{\mathcal{A}})\rightarrow\ensuremath{\mathsf{NExt}}_{\ensuremath{\mathcal{F}}}(\ensuremath{\mathcal{A}})$, where $\ensuremath{\mathsf{NExt}}_{\ensuremath{\mathcal{F}}}(\ensuremath{\mathcal{A}})$ is the category of normal extensions with respect to $\Gamma_{\ensuremath{\mathcal{F}}}$. We shall prove below that this is still a torsion-free reflection and, moreover, that also for $n\geq 2$, the categories $\ensuremath{\mathcal{F}}_n$ restrict to suitably defined torsion-free subcategories $\ensuremath{\mathsf{NExt}}^n_{\ensuremath{\mathcal{F}}}(\ensuremath{\mathcal{A}})$ of the categories of \emph{$n$-fold extensions}, of which we now recall the definition. If $\ensuremath{\mathcal{E}}$ is a class of morphisms in $\ensuremath{\mathcal{A}}$, then we shall write $\ensuremath{\mathcal{E}}^1$ for the class of morphisms in $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$ defined as follows: a morphism $(f,f')\colon a\rightarrow b$ in $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$ lies in $\ensuremath{\mathcal{E}}^1$ if every morphism in the commutative diagram \[ \vcenter{\xymatrix{ A \ar@/^/@{->}[drr]^{f} \ar@/_/@{->}[drd]_{a} \ar@{.>}[rd]^r \\ & P\ar@{}[rd]|<{\copy\pullbackbox} \ar@{.>}[r] \ar@{.>}[d] & B \ar@{->}[d]^{b} \\ & A' \ar@{->}[r]_{f'} & B'}} \] lies in $\ensuremath{\mathcal{E}}$. Here, $r$ is the canonical comparison morphism to the pullback $P=A'\times_{B'} B$. \begin{remark}\label{exactmaltsev} Note that if $\ensuremath{\mathcal{E}}$ is a class of regular epimorphisms in a regular category $\ensuremath{\mathcal{A}}$, then any commutative square in $\ensuremath{\mathcal{A}}$ of morphisms in $\ensuremath{\mathcal{E}}$ is a pushout as soon as it is a pullback. Consequently, any element of $\ensuremath{\mathcal{E}}^1$ is a pushout square. 
If we choose $\ensuremath{\mathcal{E}}$ to be the class of \emph{all} regular epimorphisms in the regular category $\ensuremath{\mathcal{A}}$, then also the converse holds---every pushout square of morphisms in $\ensuremath{\mathcal{E}}$ lies in $\ensuremath{\mathcal{E}}^1$---if and only if $\ensuremath{\mathcal{A}}$ is an exact Mal'tsev category \cite{CKP} (recall that a Mal'tsev category is one where every (internal) reflexive relation is an (internal) equivalence relation). Hence, the converse holds in particular if $\ensuremath{\mathcal{A}}$ is a semi-abelian category. \end{remark} Let $\ensuremath{\mathcal{E}}$ be a class of morphisms in $\ensuremath{\mathcal{A}}$. Call \emph{$\ensuremath{\mathcal{E}}$-extensions} the elements $f\in\ensuremath{\mathcal{E}}$, and write $\ensuremath{\mathsf{Ext}}_{\ensuremath{\mathcal{E}}}(\ensuremath{\mathcal{A}})$ for the full subcategory of $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$ determined by $\ensuremath{\mathcal{E}}$. Then $\ensuremath{\mathcal{E}}$ induces a class $\ensuremath{\mathcal{E}}^1$ of double morphisms defined as above, whose elements will be called \emph{double $\ensuremath{\mathcal{E}}$-extensions}. The corresponding full subcategory of $\ensuremath{\mathsf{Arr}}^2(\ensuremath{\mathcal{A}})$ will be denoted by $\ensuremath{\mathsf{Ext}}^2_{\ensuremath{\mathcal{E}}}(\ensuremath{\mathcal{A}})$. Inductively, $\ensuremath{\mathcal{E}}$ determines, for \emph{any} $n\geq 1$, a class of morphisms $\ensuremath{\mathcal{E}}^n=(\ensuremath{\mathcal{E}}^{n-1})^1$ in $\ensuremath{\mathsf{Arr}}^n(\ensuremath{\mathcal{A}})$, the elements of which we call \emph{$(n+1)$-fold $\ensuremath{\mathcal{E}}$-extensions}. We write $\ensuremath{\mathsf{Ext}}^{n+1}_{\ensuremath{\mathcal{E}}}(\ensuremath{\mathcal{A}})$ for the corresponding full subcategory of $\ensuremath{\mathsf{Arr}}^{n+1}(\ensuremath{\mathcal{A}})$. Our main interest is in the situation where $\ensuremath{\mathcal{E}}$ is the class of all normal epimorphisms in a homological category $\ensuremath{\mathcal{A}}$ in which every normal epimorphism is an effective descent morphism. We shall usually denote this class by $\ensuremath{\mathcal{N}}$. In this case, $\ensuremath{\mathcal{E}}=\ensuremath{\mathcal{N}}$ satisfies the list of conditions below (see \cite{Ev}). Here we write $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$ for the full subcategory of $\ensuremath{\mathcal{A}}$ determined by the objects $A\in\ensuremath{\mathcal{A}}$ for which there exists in $\ensuremath{\mathcal{E}}$ at least one arrow $f:A\rightarrow B$ or one arrow $g:C\rightarrow A$. Note that if $\ensuremath{\mathcal{E}}=\ensuremath{\mathcal{N}}$, then we have that $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}} =\ensuremath{\mathcal{A}}$. 
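Before imposing conditions on the class $\ensuremath{\mathcal{E}}$, let us illustrate the notion of double $\ensuremath{\mathcal{E}}$-extension in a minimal situation, recorded purely as an illustration: take $\ensuremath{\mathcal{A}}=\ensuremath{\mathsf{Ab}}$ and $\ensuremath{\mathcal{E}}=\ensuremath{\mathcal{N}}$ the class of surjective homomorphisms. The commutative square of canonical projections \[ \xymatrix{ \mathbb{Z} \ar[r] \ar[d] & \mathbb{Z}/4 \ar[d]\\ \mathbb{Z}/6 \ar[r] & \mathbb{Z}/2} \] is a double $\ensuremath{\mathcal{N}}$-extension: the comparison morphism $\mathbb{Z}\rightarrow \mathbb{Z}/4\times_{\mathbb{Z}/2}\mathbb{Z}/6$ sends $1$ to an element of order $12$ in a group of order $12$, hence is surjective. By contrast, the square whose two ``initial'' arrows are both the projection $\mathbb{Z}\rightarrow\mathbb{Z}/2$ and whose remaining two arrows are $\mathbb{Z}/2\rightarrow 0$ consists of normal epimorphisms, but it is not a double $\ensuremath{\mathcal{N}}$-extension, since the comparison morphism $\mathbb{Z}\rightarrow\mathbb{Z}/2\times\mathbb{Z}/2$ is not surjective.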
\begin{conditions}\label{extension} On a class $\ensuremath{\mathcal{E}}$ of morphisms in a finitely complete pointed category $\ensuremath{\mathcal{A}}$ we consider the following conditions: \begin{enumerate} \item every $f\in\ensuremath{\mathcal{E}}$ is a normal epimorphism; \item $\ensuremath{\mathcal{E}}$ contains all isomorphisms in $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$, and $0\in \ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$; \item $\ensuremath{\mathcal{E}}$ is closed under pulling back (in $\ensuremath{\mathcal{A}}$) along morphisms in $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$; \item $\ensuremath{\mathcal{E}}$ is closed under composition, and if a composite $g\circ f$ of morphisms in $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$ is in $\ensuremath{\mathcal{E}}$, then also $g\in\ensuremath{\mathcal{E}}$; \item\label{kernelextension} $\ensuremath{\mathcal{E}}$ is completely determined by the class of objects $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$ in the following way: a normal epimorphism $f\colon A\rightarrow B$ is in $\ensuremath{\mathcal{E}}$ if and only if both its domain $A$ and its kernel $K[f]$ lie in $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$; \item For any morphism \[ \xymatrix{ 0\ar[r] & K \ar[r] \ar[d]_k & A \ar[r] \ar[d]_a & B \ar@{=}[d] \ar[r] & 0\\ 0 \ar[r] & L \ar[r] & C \ar[r] & B\ar[r] & 0,} \] of short exact sequences in $\ensuremath{\mathcal{A}}$, one has: if $k\in\ensuremath{\mathcal{E}}$ and $a$ lies in $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$, then $a\in\ensuremath{\mathcal{E}}$. \end{enumerate} \end{conditions} \begin{remark}\label{remarksplit} An important consequence of conditions (2) and (4) above is that $\ensuremath{\mathcal{E}}$ contains all split epimorphisms in $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$. \end{remark} We have the following lemma: \begin{lemma}\label{up}\cite{Ev} If $\ensuremath{\mathcal{E}}$ is a class of morphisms in a homological category $\ensuremath{\mathcal{A}}$ satisfying Conditions \ref{extension}, then the class $\ensuremath{\mathcal{E}}^1$ of morphisms in $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$ satisfies Conditions \ref{extension} as well. \end{lemma} Note that we have that $(\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}}))_{\ensuremath{\mathcal{E}}^1}=\ensuremath{\mathsf{Ext}}_{\ensuremath{\mathcal{E}}}(\ensuremath{\mathcal{A}})$. Hence, inductively, for any $n\geq 1$, the class $\ensuremath{\mathcal{E}}^n$ of $n$-fold $\ensuremath{\mathcal{E}}$-extensions satisfies Conditions \ref{extension} as soon as this is the case for $\ensuremath{\mathcal{E}}$, and we have that $(\ensuremath{\mathsf{Arr}}^n(\ensuremath{\mathcal{A}}))_{\ensuremath{\mathcal{E}}^n}=\ensuremath{\mathsf{Ext}}^{n}_{\ensuremath{\mathcal{E}}}(\ensuremath{\mathcal{A}})$ (where $\ensuremath{\mathcal{E}}^0=\ensuremath{\mathcal{E}}$, $\ensuremath{\mathsf{Ext}}_{\ensuremath{\mathcal{E}}}^1(\ensuremath{\mathcal{A}})=\ensuremath{\mathsf{Ext}}_{\ensuremath{\mathcal{E}}}(\ensuremath{\mathcal{A}})$ and $\ensuremath{\mathsf{Arr}}^1(\ensuremath{\mathcal{A}})=\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$). \begin{remark} Condition \ref{extension}.6 is of importance for Lemma \ref{up}, but shall otherwise not be needed in what follows. 
\end{remark} Let us then show that the torsion theories $(\ensuremath{\mathcal{T}}_n,\ensuremath{\mathcal{F}}_n)$ restrict to torsion theories in the categories $\ensuremath{\mathsf{Ext}}^n_{\ensuremath{\mathcal{N}}}(\ensuremath{\mathcal{A}})$ (for $\ensuremath{\mathcal{N}}$ the class of all normal epimorphisms in $\ensuremath{\mathcal{A}}$), where the torsion-free parts consist of what we shall call \emph{$n$-fold normal extensions}. For this, we consider the following lemmas. For an (internal) equivalence relation $R$ on an object $A$ in $\ensuremath{\mathcal{A}}$, we write $\ensuremath{\mathsf{DiscFib}}(R)$ for the category of \emph{discrete fibrations} over $R$, i.e.~of morphisms \[ \xymatrix{ R' \ar@<.8 ex>[r]^{\pi_1'} \ar@<-.8 ex>[r]_{\pi_2'} \ar[d]_r & A' \ar[d]^a \\ R \ar@<.8 ex>[r]^{\pi_1} \ar@<-.8 ex>[r]_{\pi_2} & A,} \] of equivalence relations in $\ensuremath{\mathcal{A}}$ into $R$, such that the commutative square $a\circ \pi_2'=\pi_2\circ r$ is a pullback. \begin{lemma}\label{descentlemma} Let $\ensuremath{\mathcal{E}}$ be a class of morphisms in a homological category $\ensuremath{\mathcal{A}}$, satisfying Conditions \ref{extension}, and let $p\in\ensuremath{\mathcal{E}}$ be an effective descent morphism in $\ensuremath{\mathcal{A}}$. Then $p$ is a monadic extension (with respect to $\ensuremath{\mathcal{E}}$) in $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$. \end{lemma} \begin{proof} Let $p\colon E\rightarrow B$ be an effective descent morphism in $\ensuremath{\mathcal{A}}$ such that $p\in\ensuremath{\mathcal{E}}$. We first prove that $p$ is then also an effective descent morphism in $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$. Since $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$ is closed under pullback along $p$ (by Condition \ref{extension}.3), we can apply Corollary $3.9$ from \cite{JST}. Thus it suffices to prove that, for any morphism $f\colon A\rightarrow B$ in $\ensuremath{\mathcal{A}}$ such that the pullback $P=E\times_B A$ lies in $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$, one also has that $A\in\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$: \[ \xymatrix{ P \ar[r]^{f^*(p)} \ar[d] \ar@{}[rd]|<<{\copy\pullbackbox} & A \ar[d]^f \\ E \ar[r]_p & B.} \] Since the category $\ensuremath{\mathcal{A}}$ is homological and $p$ is a normal epimorphism, $f^*(p)$ is a normal epimorphism as well. Hence, if we can prove that $K[f^*(p)]\in\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$, it will follow from Condition \ref{extension}.5 that $f^*(p)\in\ensuremath{\mathcal{E}}$ and, in particular, that $A\in\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$. Since we have that $K[f^*(p)]\cong K[p]$, it suffices for this to note that $K[p]\in\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$ because $p\in\ensuremath{\mathcal{E}}$. We have just proved that $p$ is an effective descent morphism in $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$. 
This means that the functor $p^*\colon (\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}\downarrow B)\rightarrow (\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}\downarrow E)$ is monadic or, equivalently (see \cite{JST}), that the functor $(\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}\downarrow B)\rightarrow \ensuremath{\mathsf{DiscFib}}(R[p])$ which sends a morphism $f\colon A\rightarrow B$ in $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$ to the discrete fibration obtained by pulling back $f$ along $p$ and then taking kernel pairs, pictured as the left hand square in the diagram \[ \xymatrix{ R[f^*(p)] \ar@{}[rd]|<<{\copy\pullbackbox} \ar@<.8 ex>[r]\ar@<-.8 ex>[r] \ar[d] & P \ar[r]^{f^*(p)} \ar@{}[rd]|<<{\copy\pullbackbox} \ar[d] & A \ar[d]^f\\ R[p] \ar@<.8 ex>[r]\ar@<-.8 ex>[r] & E\ar[r]_p & B} \] is an equivalence of categories. To see that $p$ is a monadic extension, it suffices now to note that this category equivalence restricts to an equivalence $\ensuremath{\mathsf{Ext}}_{\ensuremath{\mathcal{E}}}(B)\rightarrow \ensuremath{\mathsf{DiscFib}}(R[p])\cap \ensuremath{\mathcal{E}}$ due to the pullback-stability of the class of extensions $\ensuremath{\mathcal{E}}$ and to the fact that $\ensuremath{\mathcal{E}}$ has the strong right cancellation property (Conditions \ref{extension}.3 and \ref{extension}.4). \end{proof} \begin{lemma}\label{torsionrestricts} Let $\ensuremath{\mathcal{A}}$ be a homological category, and $\ensuremath{\mathcal{E}}$ a class of morphisms in $\ensuremath{\mathcal{A}}$ satisfying Conditions \ref{extension}. If $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ is a torsion theory in $\ensuremath{\mathcal{A}}$ such that for any object $A\in\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$ the reflection unit $\eta_A\colon A\rightarrow F(A)$ lies in $\ensuremath{\mathcal{E}}$, then the reflection $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$ and coreflection $T\colon\ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{T}}$ restrict to functors $F\colon \ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}\rightarrow \ensuremath{\mathcal{F}}\cap\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$ and $T\colon \ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}\rightarrow \ensuremath{\mathcal{T}}\cap\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$, and $(\ensuremath{\mathcal{T}}\cap\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}},\ensuremath{\mathcal{F}}\cap\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}})$ is a torsion theory in $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$. Furthermore, if $f\in\ensuremath{\mathcal{E}}$ is an effective descent morphism in $\ensuremath{\mathcal{A}}$, then $f$ is a trivial extension (respectively a normal extension) with respect to $\Gamma_{\ensuremath{\mathcal{F}}\cap\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}}$ if and only if $f$ is a trivial extension (respectively a normal extension) with respect to $\Gamma_{\ensuremath{\mathcal{F}}}$. 
\end{lemma} \begin{proof} Consider, for any $A\in\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$, the associated short exact sequence \[ \xymatrix{ 0 \ar[r] & T(A) \ar[r] & A \ar[r]^-{\eta_A} & F(A) \ar[r] & 0.} \] By assumption, the unit $\eta_A$ is in $\ensuremath{\mathcal{E}}$, which implies that both $F(A)$ (by definition of $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$) and $T(A)$ (as the kernel of an extension---by Condition \ref{extension}.3) lie in $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$. Since $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$ is a full subcategory of $\ensuremath{\mathcal{A}}$, this implies that the sequence above is a short exact sequence in $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$. Furthermore, for any objects $T\in\ensuremath{\mathcal{T}}\cap\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$ and $F\in\ensuremath{\mathcal{F}}\cap\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$ we have that \[ \ensuremath{\mathrm{Hom}}_{\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}}(T,F)=\ensuremath{\mathrm{Hom}}_{\ensuremath{\mathcal{A}}}(T,F)=\{0\}, \] so that $(\ensuremath{\mathcal{T}}\cap\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}},\ensuremath{\mathcal{F}}\cap\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}})$ is indeed a torsion theory in $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$. The latter part of the statement follows readily from Lemma \ref{descentlemma} and Condition \ref{extension}.3. \end{proof} \begin{lemma}\label{unitlemma} With the same assumptions as in Lemma \ref{torsionrestricts}: if $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ satisfies condition $(N)$ then for any $f\in\ensuremath{\mathcal{E}}$ the unit $\eta^1_f\colon f\rightarrow F_1(f)$ lies in $\ensuremath{\mathcal{E}}^1$. \end{lemma} \begin{proof} By Proposition \ref{firstderivedtt}, $(\ensuremath{\mathcal{T}}_1,\ensuremath{\mathcal{F}}_1)$ is a torsion theory in $\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}})$, and the unit $\eta^1_f\colon f\rightarrow F_1(f)$ for any $f$ is given by the commutative square \[ \xymatrix{ A \ar[d]_f \ar[r]^-{q_{T(K[f])}} & A/T(K[f]) \ar[d]^{F_1(f)} \\ B \ar@{=}[r] & B,} \] where $q_{T(K[f])}$ is the cokernel of the normal monomorphism $\ensuremath{\mathsf{ker\,}}(f)\circ t_{K[f]}$. Now suppose that $f$ lies in $\ensuremath{\mathcal{E}}$. Then its kernel $K[f]$ must be in $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$, which implies that $T(K[f])\in\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$ by Lemma \ref{torsionrestricts}. Consequently, $q_{T(K[f])}\in\ensuremath{\mathcal{E}}$, by Condition \ref{extension}.5, and then also $F_1(f)\in\ensuremath{\mathcal{E}}$, by Condition \ref{extension}.4. From this we conclude that $\eta^1_f\in \ensuremath{\mathcal{E}}^1$. \end{proof} Finally, Lemmas \ref{descentlemma}---\ref{unitlemma} together with Propositions \ref{firstderivedtt} and \ref{protocentral} give the following. As before, we write $\ensuremath{\mathcal{N}}$ for the class of normal epimorphisms in $\ensuremath{\mathcal{A}}$. \begin{theorem}\label{higherderivedT1} Let $\ensuremath{\mathcal{A}}$ be a homological category in which every normal epimorphism is an effective descent morphism. 
Then any torsion theory $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ in $\ensuremath{\mathcal{A}}$ satisfying conditions $(P)$ and $(N)$ induces, for any $n\geq 1$, a torsion theory $(\ensuremath{\mathcal{T}}_n,\ensuremath{\mathsf{NExt}}_{\ensuremath{\mathcal{F}}}^n(\ensuremath{\mathcal{A}}))$ in the category $\ensuremath{\mathsf{Ext}}^n_{\ensuremath{\mathcal{N}}}(\ensuremath{\mathcal{A}})$ of $n$-fold $\ensuremath{\mathcal{N}}$-extensions. Here $\ensuremath{\mathcal{T}}_n$ is the replete image of $\ensuremath{\mathcal{T}}$ by the functor $\iota^n\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathsf{Arr}^{n}\!}(\ensuremath{\mathcal{A}})$ and $\ensuremath{\mathsf{NExt}}_{\ensuremath{\mathcal{F}}}^n(\ensuremath{\mathcal{A}})=\ensuremath{\mathcal{F}}_n\cap\ensuremath{\mathsf{Ext}}^n_{\ensuremath{\mathcal{N}}}(\ensuremath{\mathcal{A}})$ consists of all $n$-fold $\ensuremath{\mathcal{N}}$-extensions that are normal with respect to $\Gamma_{\ensuremath{\mathsf{NExt}}_{\ensuremath{\mathcal{F}}}^{n-1}(\ensuremath{\mathcal{A}})}$. Moreover, for any $n\geq 1$, and any $n$-fold $\ensuremath{\mathcal{N}}$-extension $A$, we have that \[ A\in\ensuremath{\mathsf{NExt}}_{\ensuremath{\mathcal{F}}}^n(\ensuremath{\mathcal{A}}) \Leftrightarrow \bigcap_{1\leq i\leq n}K[a_i]\in\ensuremath{\mathcal{F}}. \] \end{theorem} \begin{remark} For $n\geq 1$, and under the conditions of the theorem above, we call the objects of $\ensuremath{\mathsf{NExt}}_{\ensuremath{\mathcal{F}}}^n(\ensuremath{\mathcal{A}})$ the \emph{$n$-fold normal extensions} with respect to the Galois structure $\Gamma_{\ensuremath{\mathcal{F}}}$. \end{remark} Theorem \ref{protofactorisation} together with Lemma \ref{torsionrestricts} also imply the following: \begin{theorem}\label{reflectivehigher} With the same assumptions and notations as in Theorem \ref{higherderivedT1}, for any $n\geq 0$, if $(\mathbb E_n,\mathbb M_n)$ is the factorisation system induced by the reflection $F_n\colon \ensuremath{\mathsf{Ext}}^n_{\ensuremath{\mathcal{N}}}(\ensuremath{\mathcal{A}})\rightarrow\ensuremath{\mathsf{NExt}}_{\ensuremath{\mathcal{F}}}^n(\ensuremath{\mathcal{A}})$, then any $(n+1)$-fold $\ensuremath{\mathcal{N}}$-extension $f\colon A\rightarrow B$ factors uniquely (up to isomorphism) as a composite $f=m\circ e$ of $(n+1)$-fold $\ensuremath{\mathcal{N}}$-extensions, where $e$ is stably in $\mathbb E_n$ and $m$ is an $(n+1)$-fold normal extension. \end{theorem} \begin{proof} If $f=m\circ e$ is the factorisation in $\ensuremath{\mathsf{Arr}}^n(\ensuremath{\mathcal{A}})$ given by Theorem \ref{protofactorisation}, then $m$ is an $(n+1)$-fold $\ensuremath{\mathcal{N}}$-extension by Lemma \ref{torsionrestricts}, and so is $e$, by Condition \ref{extension}.5, since its kernel, which lies in $\ensuremath{\mathcal{T}}_n$, is an $n$-fold $\ensuremath{\mathcal{N}}$-extension. \end{proof} \section{Birkhoff subcategories with a protoadditive reflector}\label{Birkhoffsection} Consider a torsion-free subcategory $\ensuremath{\mathcal{F}}$ of a homological category $\ensuremath{\mathcal{A}}$, and write, as usual, $F$ for the reflector $\ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$ and $T$ for the associated radical. Assume that $\ensuremath{\mathcal{F}}$ satisfies conditions $(P)$ ($F$ is protoadditive) and $(N)$ (for any morphism $f\colon A\rightarrow B$ in $\ensuremath{\mathcal{A}}$, the induced monomorphism $T(K[f])\rightarrow A$ is normal). 
In the previous section, we have explained how $\ensuremath{\mathcal{F}}$ induces a chain of ``derived" torsion theories $(\ensuremath{\mathcal{T}}_n,\ensuremath{\mathcal{F}}_n)$ ($n\geq 1$) in the categories $\ensuremath{\mathsf{Ext}}^n_{\ensuremath{\mathcal{N}}}(\ensuremath{\mathcal{A}})$ of $n$-fold $\ensuremath{\mathcal{N}}$-extensions (for $\ensuremath{\mathcal{N}}$ the class of normal epimorphisms in $\ensuremath{\mathcal{A}}$) where, for each $n\geq 1$, $\ensuremath{\mathcal{F}}_n$ consists of all $n$-fold $\ensuremath{\mathcal{N}}$-extensions that are normal with respect to the Galois structure $\Gamma_{\ensuremath{\mathcal{F}}_{n-1}}$. In a similar manner, in \cite{EGV}, ``higher dimensional" Galois structures had been obtained starting from any \emph{Birkhoff subcategory} $\ensuremath{\mathcal{B}}$ of a semi-abelian category $\ensuremath{\mathcal{A}}$. While for this to work there is no need for the reflector $\ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{B}}$ to be protoadditive, the situation where it is so is of interest and will be studied in the present section. Recall from \cite{JK} that a Birkhoff subcategory of an exact category $\ensuremath{\mathcal{A}}$ is a full reflective subcategory $\ensuremath{\mathcal{B}}$ of $\ensuremath{\mathcal{A}}$ closed under subobjects and regular quotients; or, equivalently, a full replete (regular epi)-reflective subcategory $\ensuremath{\mathcal{B}}$ of $\ensuremath{\mathcal{A}}$, with reflector $I\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{B}}$, such that, for any regular epimorphism $f\colon A\rightarrow B$ in $\ensuremath{\mathcal{A}}$, the canonical square \begin{equation}\label{unitsquare} \xymatrix{ A \ar[r]\ar[d]_f & I(A) \ar[d]^{I(f)}\\ B \ar[r] & I(B)} \end{equation} is a pushout. Note that this last condition translates to \eqref{unitsquare} being a double $\ensuremath{\mathcal{N}}$-extension, whenever $\ensuremath{\mathcal{A}}$ is a semi-abelian category---since any semi-abelian category is exact Mal'tsev (see Remark \ref{exactmaltsev}). \begin{example} By Birkhoff's theorem characterising equational classes, a full subcategory $\ensuremath{\mathcal{B}}$ of a variety of universal algebras $\ensuremath{\mathcal{A}}$ is a subvariety if and only if $\ensuremath{\mathcal{B}}$ is closed in $\ensuremath{\mathcal{A}}$ under subobjects, quotients and products. It follows that a Birkhoff subcategory of a variety is the same as a subvariety---whence its name. Note that a Birkhoff subcategory is indeed closed under products---it is, in fact, closed under arbitrary limits---since it is a reflective subcategory. \end{example} We shall be needing the following important property of Birkhoff subcategories, which was first observed in \cite{Gran}: \begin{lemma}\label{Marino} The reflector $I\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{B}}$ into a Birkhoff subcategory $\ensuremath{\mathcal{B}}$ of a semi-abelian category $\ensuremath{\mathcal{A}}$ preserves pullbacks of normal epimorphisms along split epimorphisms. In particular, $I$ preserves finite products. 
\end{lemma} \begin{proof} Consider a commutative cube \[ \xymatrix{ & I(P) \ar@{}[rrdd] \ar@{.>}[dd] \ar[rr] && I(C) \ar[dd] \\ P \ar@{}[rrdd]|<<{\copy\pullbackbox} \ar[ur] \ar[rr] \ar[dd] && C \ar[ur] \ar[dd] & \\ & I(A) \ar@{.>}[rr] && I(B) \\ A \ar[ur] \ar[rr] && B \ar[ur] &} \] in $\ensuremath{\mathcal{A}}$, where the front square is the pullback of a split epimorphism $A\rightarrow B$ along a normal epimorphism $C\rightarrow B$, and the skew morphisms are the reflection units. Since $\ensuremath{\mathcal{B}}$ is a Birkhoff subcategory of $\ensuremath{\mathcal{A}}$, we have that the left and right hand sides are double $\ensuremath{\mathcal{N}}$-extensions, so that the cube is a three-fold $\ensuremath{\mathcal{N}}$-extension as a split epimorphism of double $\ensuremath{\mathcal{N}}$-extensions (via Remark \ref{remarksplit} and Lemma \ref{up}). This implies that the induced square \[ \xymatrix{ P \ar[r] \ar[d] & I(P) \ar[d]\\ A\times_B C \ar[r] & I(A)\times_{I(B)}I(C)} \] is a double $\ensuremath{\mathcal{N}}$-extension. In particular, it is a pushout, so that the right hand vertical map is indeed an isomorphism, because the left hand one is so by assumption. To see that $I$ preserves binary (hence, finite-) products, it suffices to take $B=0$ in the above, and note that $I(0)=0$ as $I$ preserves the initial object. \end{proof} \begin{remark} Note that for the second part of the lemma above to be true, the assumption that $\ensuremath{\mathcal{B}}$ is a Birkhoff subcategory can be weakened: it suffices that $\ensuremath{\mathcal{B}}$ is a (normal epi)-reflective subcategory of $\ensuremath{\mathcal{A}}$ (because a split epimorphism of $\ensuremath{\mathcal{N}}$-extensions is always a double $\ensuremath{\mathcal{N}}$-extension). In fact, as soon as $\ensuremath{\mathcal{B}}$ is a (normal epi)-reflective subcategory of a homological category $\ensuremath{\mathcal{A}}$, the reflector $I\colon\ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{B}}$ will preserve pullbacks of split epimorphisms along split epimorphisms (this even holds more generally in a regular Mal'tsev category). \end{remark} Let us, from now on, assume that $\ensuremath{\mathcal{A}}$ is a semi-abelian category. We know from \cite{JK} that any Birkhoff subcategory $\ensuremath{\mathcal{B}}$ of $\ensuremath{\mathcal{A}}$ determines an admissible Galois structure $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{N}})}=(\ensuremath{\mathcal{A}},\ensuremath{\mathcal{B}},I,H,\ensuremath{\mathcal{N}})$, where $I\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{B}}$ is the reflector, $H\colon \ensuremath{\mathcal{B}}\rightarrow\ensuremath{\mathcal{A}}$ the inclusion functor and $\ensuremath{\mathcal{N}}$ the class of all normal epimorphisms in $\ensuremath{\mathcal{A}}$. We shall write $[\cdot]_{\ensuremath{\mathcal{B}}}\colon \ensuremath{\mathcal{A}}\rightarrow \ensuremath{\mathcal{A}}$ for the associated radical. For $A\in\ensuremath{\mathcal{A}}$, we denote by $\eta_A\colon A\rightarrow I(A)$ the reflection unit and by $\kappa_A\colon [A]_{\ensuremath{\mathcal{B}}}\rightarrow A$ the normal monomorphism $\ensuremath{\mathsf{ker\,}}(\eta_A)$. Note that the normal epimorphisms in $\ensuremath{\mathcal{A}}$ coincide with the regular epimorphisms because $\ensuremath{\mathcal{A}}$ is protomodular, and the regular epimorphisms with the effective descent morphisms because $\ensuremath{\mathcal{A}}$ is exact. 
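A classical example may help to fix the notation; we recall it only for orientation, and refer to \cite{JK} for details. For $\ensuremath{\mathcal{A}}$ the semi-abelian category of groups and $\ensuremath{\mathcal{B}}=\ensuremath{\mathsf{Ab}}$ its Birkhoff subcategory of abelian groups, the reflector $I$ is the abelianisation functor, so that \[ [A]_{\ensuremath{\mathcal{B}}}=[A,A],\qquad \eta_A\colon A\rightarrow A/[A,A],\qquad \kappa_A\colon [A,A]\rightarrow A, \] and the normal extensions with respect to $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{N}})}$ are precisely the central extensions of groups in the classical sense. Note that this reflector is not protoadditive: applied to the split short exact sequence $0\rightarrow \mathbb{Z}/3\rightarrow S_3\rightarrow \mathbb{Z}/2\rightarrow 0$, where $S_3$ denotes the symmetric group on three elements, it yields the sequence $0\rightarrow\mathbb{Z}/3\rightarrow\mathbb{Z}/2\rightarrow\mathbb{Z}/2\rightarrow 0$, which is not short exact. The protoadditivity assumption considered below is therefore a genuine restriction.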
Since a semi-abelian category is, in particular, homological, the class $\ensuremath{\mathcal{N}}$ satisfies Conditions \ref{extension}. Recall from \cite{JK} that every central extension with respect to $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{N}})}$ is a normal extension. The category of normal extensions with respect to $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{N}})}$ is denoted $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{N}})}(\ensuremath{\mathcal{A}})$. Just as in the case of a torsion theory satisfying Conditions $(P)$ and $(N)$, we have that $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{N}})}(\ensuremath{\mathcal{A}})$ is a reflective subcategory of the category of effective descent morphisms in $\ensuremath{\mathcal{A}}$, and we know from \cite{EGV} that the reflection $I_1(f)$ in $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{N}})}(\ensuremath{\mathcal{A}})$ of a normal epimorphism $f\colon A\rightarrow B$ can be obtained as follows: $I _1(f)$ is the normal epimorphism $A/[f]_{1,{\ensuremath{\mathcal{B}}}}\rightarrow B$ induced by $f$, where the normal monomorphism $[f]_{1,{\ensuremath{\mathcal{B}}}}\rightarrow A$ is obtained as the composite $\kappa^1_f=\kappa_A\circ [\pi_2]_{\ensuremath{\mathcal{B}}}\circ\ensuremath{\mathsf{ker\,}} [\pi_1]_{\ensuremath{\mathcal{B}}}$ (where $\pi_1$ and $\pi_2$ denote the projections from the kernel pair $R[f]$ of $f$): \[ \xymatrix{ [f]_{1,{\ensuremath{\mathcal{B}}}}=K[[\pi_1]_{\ensuremath{\mathcal{B}}}] \ar[r]^-{\ensuremath{\mathsf{ker\,}} [\pi_{1}]_{\ensuremath{\mathcal{B}}}} \ar[d] & [R[f]]_{\ensuremath{\mathcal{B}}} \ar[d]_{\kappa_{R[f]}} \ar@<0.8 ex>[r]^-{[\pi_1]_{\ensuremath{\mathcal{B}}}} \ar@<-0.8 ex>[r]_-{[\pi_2]_{\ensuremath{\mathcal{B}}}} & [A]_{\ensuremath{\mathcal{B}}} \ar[d]^{\kappa_A}\\ K[\pi_1] \ar[r]_-{\ensuremath{\mathsf{ker\,}} (\pi_{1})} & R[f] \ar@<0.8 ex>[r]^-{\pi_1} \ar@<-0.8 ex>[r]_-{\pi_2} & A} \] In Section \ref{coveringmorphisms}, we proved that, for any torsion-free subcategory $\ensuremath{\mathcal{F}}$ of a homological category $\ensuremath{\mathcal{A}}$ with protoadditive reflector $F\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{F}}$, the normal extensions with respect to $\Gamma_{\ensuremath{\mathcal{F}}}$ are exactly the effective descent morphisms $f\colon A\rightarrow B$ such that $K[f]\in\ensuremath{\mathcal{F}}$. As shown in \cite{EG} one has the same characterisation for the normal extensions with respect to $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{N}})}$, where $\ensuremath{\mathcal{B}}$ is a Birkhoff subcategory of a semi-abelian category $\ensuremath{\mathcal{A}}$ with protoadditive reflector $I\colon \ensuremath{\mathcal{A}}\rightarrow \ensuremath{\mathcal{B}}$. It turns out that the protoadditivity of $I$ is also necessary for this to be true, as soon as $\ensuremath{\mathcal{B}}$ satisfies condition $(N)$: for any normal epimorphism $f\colon A\rightarrow B$ in $\ensuremath{\mathcal{A}}$, the induced monomorphism $\ensuremath{\mathsf{ker\,}} (f)\circ \kappa_{K[f]}\colon [K[f]]_{\ensuremath{\mathcal{B}}}\rightarrow A$ is normal. 
More precisely, we have the following proposition: \begin{proposition}\label{characterisationbyextensions} For a Birkhoff subcategory $\ensuremath{\mathcal{B}}$ of a semi-abelian category $\ensuremath{\mathcal{A}}$, the following conditions are equivalent: \begin{enumerate} \item the reflector $I \colon \ensuremath{\mathcal{A}} \rightarrow \ensuremath{\mathcal{B}}$ is protoadditive; \item the associated radical $[\cdot]_{\ensuremath{\mathcal{B}}}\colon \ensuremath{\mathcal{A}} \rightarrow \ensuremath{\mathcal{A}}$ is protoadditive; \item \begin{itemize} \item[(a)] for any normal epimorphism $f\colon A\rightarrow B$, the induced monomorphism $[K[f]]_{\ensuremath{\mathcal{B}}}\rightarrow A$ is normal; \item[(b)] the normal extensions are precisely the normal epimorphisms $f$ with $K[f] \in \ensuremath{\mathcal{B}}$; \end{itemize} \item \begin{itemize} \item[(a)] for any normal epimorphism $f\colon A\rightarrow B$, the induced monomorphism $[K[f]]_{\ensuremath{\mathcal{B}}}\rightarrow A$ is normal; \item[(b)] for any normal epimorphism $f \colon A \rightarrow B$, the reflection in $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{N}})}(\ensuremath{\mathcal{A}})$ is given by the induced morphism $\overline{f} \colon A/[K[f]]_{\ensuremath{\mathcal{B}}} \rightarrow B$. \end{itemize} \end{enumerate} \end{proposition} \begin{proof} The equivalence $(1) \Leftrightarrow (2)$ follows from Proposition \ref{reflector=radical}, and the implication $(1) \Rightarrow (3a)$ from Lemma \ref{compositeisnormal}. For $(1)\Rightarrow (3b)$ it suffices to note that the proof of Proposition \ref{protocentral} remains valid. To see that the implication $(3) \Rightarrow (4b)$ holds, consider a normal epimorphism $f\colon A\rightarrow B$. By the ``double quotient'' isomorphism theorem (see Theorem $4.3.10$ in \cite{BB}), the kernel of the induced morphism $\overline{f}\colon A/[K[f]]_{\ensuremath{\mathcal{B}}}\rightarrow B$ is $K[f]/[K[f]]_{\ensuremath{\mathcal{B}}}$, which lies in $\ensuremath{\mathcal{B}}$, hence $\overline{f}$ is a normal extension. To see that $\overline{f}$ is the reflection of $f$ in $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{N}})}(\ensuremath{\mathcal{A}})$, consider a normal extension $g\colon C \rightarrow D$ and a morphism $(a,b)\colon f\rightarrow g$ of normal epimorphisms. We need to show that there is a (unique) morphism $\overline{a}$ such that the diagram \[ \xymatrix@=30pt{ A \ar@{-<}`u[r]`[rr]^a[rr] \ar[r] \ar[d]_{f} & \frac{A}{[K[f]]_{\ensuremath{\mathcal{B}}}} \ar[d]^{\overline{f}} \ar@{.>}[r]^{\overline{a}} & C \ar[d]^g \\ B \ar@{=}[r] & B \ar[r]_b & D } \] commutes. For this, it suffices to note that there is a commutative square \[ \xymatrix{ [K[f]]_{\ensuremath{\mathcal{B}}} \ar[r] \ar[d]_{\ensuremath{\mathsf{ker\,}}(f)\circ \kappa_{K[f]}} & [K[g]]_{\ensuremath{\mathcal{B}}} \ar[d]^{\ensuremath{\mathsf{ker\,}}(g)\circ \kappa_{K[g]}} \\ A\ar[r]_a & C} \] and that $[K[g]]_{\ensuremath{\mathcal{B}}}=0$ because $g$ is a normal extension, so that $a\circ\ensuremath{\mathsf{ker\,}}(f)\circ \kappa_{K[f]}=0$. 
$(4) \Rightarrow (1)$ Consider a split short exact sequence \begin{equation}\label{split2} \xymatrix{0 \ar[r]& K \ar[r]^k & A \ar@<-.8 ex> [r]_f & B \ar@<-.8ex>[l]_s \ar[r] &0 } \end{equation} in $\ensuremath{\mathcal{A}}$, and the induced diagram \[ \xymatrix@=35pt{& [K]_{\ensuremath{\mathcal{B}}} \ar[d]_{\kappa_{K[f]}}\ar[r]^-{[\ensuremath{\mathsf{ker\,}}(\pi_1)]_{\ensuremath{\mathcal{B}}}} & [R[f]]_{\ensuremath{\mathcal{B}}} \ar@<.8ex>[r]^{[\pi_1]_{\ensuremath{\mathcal{B}}}} \ar@<-.8ex>[r]_{[\pi_2]_{\ensuremath{\mathcal{B}}}} \ar[d]_{\kappa_{R[f]}}& [A]_{\ensuremath{\mathcal{B}}} \ar[d]^{\kappa_A} \ar@<-.8 ex> [r]_{[f]_{\ensuremath{\mathcal{B}}}} & [B]_{\ensuremath{\mathcal{B}}} \ar[d]^{\kappa_B} \ar@<-.8ex>[l]_{[s]_{\ensuremath{\mathcal{B}}}} & \\ & K \ar[r]_-{\ensuremath{\mathsf{ker\,}}(\pi_1)} &R[f] \ar@<.8ex>[r]^{\pi_1} \ar@<-.8ex>[r]_{\pi_2} & A \ar@<-.8 ex> [r]_f & B \ar@<-.8ex>[l]_s & } \] obtained by factorising $k$ through the kernel pair $R[f]$ of $f$, and applying the radical $[\cdot]_{\ensuremath{\mathcal{B}}}$. The assumption says that $[f]_{1,\ensuremath{\mathcal{B}}}= [K]_{\ensuremath{\mathcal{B}}}$, so that $[\ensuremath{\mathsf{ker\,}}(\pi_1)]_{\ensuremath{\mathcal{B}}}$ is the kernel of $[\pi_1]_{\ensuremath{\mathcal{B}}}$; it follows that $[\pi_2]_{\ensuremath{\mathcal{B}}} \circ [\ensuremath{\mathsf{ker\,}}(\pi_1)]_{\ensuremath{\mathcal{B}}}$ is the normalisation of the equivalence relation $[R[f]]_{\ensuremath{\mathcal{B}}}$ on $[A]_{\ensuremath{\mathcal{B}}}$. Since the reflector $I \colon \ensuremath{\mathcal{A}} \rightarrow \ensuremath{\mathcal{B}}$ preserves kernel pairs of split epimorphisms by Lemma \ref{Marino}, one concludes that the functor $[\cdot]_{\ensuremath{\mathcal{B}}} \colon \ensuremath{\mathcal{A}} \rightarrow \ensuremath{\mathcal{A}}$ preserves the split short exact sequence \eqref{split2}. \end{proof} \begin{remark} Note that conditions $(3a)$ and $(4a)$ say that for any normal monomorphism $k\colon K\rightarrow A$ the composite $k\circ\kappa_K$ is a normal monomorphism as well, and we could equivalently have written ``any morphism" instead of ``any normal epimorphism" in the statement of these conditions. The reason we stated it the way we did is that, as we shall explain below, the proposition can be ``relativised"---in such a way that it depends on a choice of class $\ensuremath{\mathcal{E}}$ of morphisms in $\ensuremath{\mathcal{A}}$ satisfying Conditions \ref{extension}---and in the relative version, the morphism $[K[f]]_{\ensuremath{\mathcal{B}}}\rightarrow A$ might not be defined if $f$ is not in $\ensuremath{\mathcal{E}}$. \end{remark} \begin{remark} Proposition \ref{characterisationbyextensions} shows, in particular, that it is meaningful to consider the conditions $(P)$ and $(N)$ from the previous sections beyond the context of torsion theories. \end{remark} \begin{remark}\label{composingce} Let $(\mathbb E,\mathbb M)$ be the reflective (pre)factorisation system associated with a Birkhoff subcategory $\ensuremath{\mathcal{B}}$ of a semi-abelian category $\ensuremath{\mathcal{A}}$, and let $\mathbb E'$ and $\mathbb M^*$ be the induced classes of morphisms ``stably in $\mathbb E$" and ``locally in $\mathbb M$", respectively, as considered in Section \ref{coveringmorphisms}. Then $(\mathbb E',\mathbb M^*)$ need not be a (pre)factorisation system in general. 
In fact, normal extensions fail to be stable under composition, even if the reflector $I\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{B}}$ is protoadditive, in contrast to the normal extensions associated with a torsion theory satisfying condition $(P)$, which we discussed in Section \ref{coveringmorphisms}. For instance, let $\ensuremath{\mathcal{A}}=\ensuremath{\mathsf{Ab}}$ be the variety of abelian groups, and $\ensuremath{\mathcal{B}}=\mathsf{B}_2$ the Burnside variety of exponent $2$ ($\mathsf{B}_2$ consists of all abelian groups $A$ such that $a+a=0$ for every $a\in A$). Then the reflector $\ensuremath{\mathsf{Ab}}\rightarrow\mathsf{B}_2$ is additive, but the composite of two normal extensions need not be normal: if we denote by $C_n$ the cyclic group of order $n$, then the unique map $C_2\rightarrow 0$ is a normal extension, as is the only non-trivial morphism $C_4\rightarrow C_2$. However, the composite $C_4\rightarrow 0$ is not. Note that the composite $g\circ f\colon A\rightarrow B\rightarrow C$ of two normal extensions is a normal extension as soon as $[g\circ f]_{1,\ensuremath{\mathcal{B}}}$ is $\ensuremath{\mathcal{B}}$-perfect, i.e. $I([g\circ f]_{1,\ensuremath{\mathcal{B}}})=0$. Indeed, if $q$ is the canonical normal epimorphism $A\rightarrow A/[g\circ f]_{1,\ensuremath{\mathcal{B}}}$, then the assumption that $I([g\circ f]_{1,\ensuremath{\mathcal{B}}})=0$ implies that $q$ lies in $\mathbb E$, since $I$ preserves cokernels. In fact, we have that $q$ lies in $\mathbb E'$, since pulling back yields isomorphic kernels, and preserves normal epimorphisms. From \cite{CJKP} we recall that $\mathbb M^*\subseteq (\mathbb E')^{\downarrow}$ which is easily seen to imply that also composites of morphisms in $\mathbb M^*$ lie in $(\mathbb E')^{\downarrow}$. In particular, we have that $q \downarrow (g\circ f)$, since, by assumption, we have that $f$ and $g$ lie in $\mathbb M^*$. As $g\circ f=I_1(g\circ f)\circ q$, it follows that $q$ is a split monomorphism, hence an isomorphism, and we can conclude that $g\circ f$ is a normal extension. \end{remark} Recall from \cite{Ev,EGV} that the notion of Birkhoff subcategory can be ``relativised" as follows. Let $\ensuremath{\mathcal{E}}$ be a class of morphisms in a semi-abelian category $\ensuremath{\mathcal{A}}$ satisfying Conditions \ref{extension}, and $\ensuremath{\mathcal{B}}$ a reflective subcategory of the category $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$. Denote by $I\colon \ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}\rightarrow\ensuremath{\mathcal{B}}$ the reflector, by $H\colon \ensuremath{\mathcal{B}}\rightarrow \ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$ the inclusion functor, and write $\eta$ for the unit of the reflection. Then $\ensuremath{\mathcal{B}}$ is called a \emph{strongly $\ensuremath{\mathcal{E}}$-Birkhoff subcategory} of $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$ if the square \eqref{unitsquare} is a double $\ensuremath{\mathcal{E}}$-extension for any $\ensuremath{\mathcal{E}}$-extension $f\colon A\rightarrow B$. This determines an admissible Galois structure $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}=(\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}},\ensuremath{\mathcal{B}},I,H,\ensuremath{\mathcal{E}})$ with respect to which the central and normal extensions coincide, just as in the ``absolute" case. 
The full subcategory of $\ensuremath{\mathsf{Ext}}_{\ensuremath{\mathcal{E}}}(\ensuremath{\mathcal{A}})$ of all normal $\ensuremath{\mathcal{E}}$-extensions with respect to $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}$---denoted $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$---is reflective in $\ensuremath{\mathsf{Ext}}_{\ensuremath{\mathcal{E}}}(\ensuremath{\mathcal{A}})$, and the construction of the reflector $I_1\colon \ensuremath{\mathsf{Ext}}_{\ensuremath{\mathcal{E}}}(\ensuremath{\mathcal{A}})\rightarrow\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$ is formally the same as in the ``absolute" case. $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$ is, in fact, a strongly $\ensuremath{\mathcal{E}}^1$-Birkhoff subcategory of $\ensuremath{\mathsf{Ext}}_{\ensuremath{\mathcal{E}}}(\ensuremath{\mathcal{A}})=(\ensuremath{\mathsf{Arr}}(\ensuremath{\mathcal{A}}))_{\ensuremath{\mathcal{E}}^1}$, where $\ensuremath{\mathcal{E}}^1$ denotes, as before, the class of double $\ensuremath{\mathcal{E}}$-extensions. This fact allows us to define \emph{double normal $\ensuremath{\mathcal{E}}$-extensions} (with respect to $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}$) as those double $\ensuremath{\mathcal{E}}$-extensions that are normal with respect to the Galois structure $\Gamma_{(\ensuremath{\mathcal{B}}_1,\ensuremath{\mathcal{E}}^1)}$, where $\ensuremath{\mathcal{B}}_1=\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$, and then to define \emph{three-fold normal $\ensuremath{\mathcal{E}}$-extensions}, and so on. For each $n\geq 1$, we use the notation $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}^{n}$ for the induced Galois structure $(\ensuremath{\mathsf{Ext}}_{\ensuremath{\mathcal{E}}}^n(\ensuremath{\mathcal{A}}), \ensuremath{\mathcal{B}}_n,I_n,H_n,\ensuremath{\mathcal{E}}^n)$, where \[ \ensuremath{\mathcal{B}}_n=\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}}_{n-1},\ensuremath{\mathcal{E}}^{n-1})}(\ensuremath{\mathsf{Arr}}^{n-1}(\ensuremath{\mathcal{A}}))=\ensuremath{\mathsf{NExt}}^n_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}}). \] Similar to the case $n=1$, for $n\geq 2$ and any $n$-fold $\ensuremath{\mathcal{E}}$-extension $A$, we write $\eta^n_A\colon A\rightarrow I_n(A)$ for the reflection unit, and $\kappa^n_A\colon [A]_{n,\ensuremath{\mathcal{B}}}\rightarrow A_{\textrm{top}}$ for the morphism in $\ensuremath{\mathcal{A}}$ which appears as the ``top" morphism in the diagram of the kernel $\ensuremath{\mathsf{ker\,}}(\eta^n_A)\colon K[\eta^n_A]\rightarrow A$. Note that we have that $\iota^n[A]_{n,\ensuremath{\mathcal{B}}}=K[\eta^n_A]$, where the functor $\iota^n\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathsf{Arr}}^n(\ensuremath{\mathcal{A}})$ is as in Section \ref{sectionderived}. We refer the reader to the articles \cite{Ev,EGV} for more details, and proofs of the statements above. Replacing ``$\ensuremath{\mathcal{A}}$'' by ``$\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$" and ``normal epimorphism" by ``$\ensuremath{\mathcal{E}}$-extension" in Lemma \ref{Marino} and Proposition \ref{characterisationbyextensions} provides us with relative versions of these results. One easily verifies that the proofs remain valid. 
We obtain, in particular, for each $n\geq 1$, a characterisation of the $n$-fold normal extensions with respect to $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{N}})}$, if the reflector $I$ is protoadditive. Indeed, in this case also the $I_n$ are protoadditive. In fact, we have: \begin{lemma}\label{centralisationisprotoadditive} $I\colon \ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}\rightarrow\ensuremath{\mathcal{B}}$ is protoadditive if and only if $I_1\colon \ensuremath{\mathsf{Ext}}_{\ensuremath{\mathcal{E}}}(\ensuremath{\mathcal{A}})\rightarrow\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$ is protoadditive. \end{lemma} \begin{proof} The ``only if" part of this lemma has already been considered in \cite{EG}: it essentially follows from the implications $(1)\Rightarrow (2)$ and $(1)\Rightarrow (4)$ in the ``relative version" of Proposition \ref{characterisationbyextensions}, and the $3\times 3$ lemma. Now, suppose that $I_1$ is protoadditive. Since, by the ``relative version" of Lemma \ref{Marino}, $I\colon \ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}\rightarrow\ensuremath{\mathcal{B}}$ preserves, for $A\in\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$, the product $A\times A$, it follows from the construction of $I_1$ that $I_1(A\rightarrow 0)=I(A)\rightarrow 0$. It is then immediate to conclude that also $I$ is protoadditive. \end{proof} We are now in a position to prove the following theorem. As before, we write $\ensuremath{\mathcal{N}}$ for the class of normal epimorphisms in $\ensuremath{\mathcal{A}}$. If $A$ is an $n$-fold $\ensuremath{\mathcal{N}}$-extension, the ``initial ribs'' in the diagram of $A$ are denoted $a_i$ ($1\leq i\leq n$), and its ``top vertex'' (the domain of the morphisms $a_i$) $A_{\textrm{top}}$. \begin{theorem}\label{characterisationbyextensionshigher} For a Birkhoff subcategory $\ensuremath{\mathcal{B}}$ of a semi-abelian category $\ensuremath{\mathcal{A}}$, the following conditions are equivalent: \begin{enumerate} \item the reflector $I\colon \ensuremath{\mathcal{A}}\rightarrow \ensuremath{\mathcal{B}}$ is protoadditive; \item the associated radical $[\cdot]_{\ensuremath{\mathcal{B}}}\colon \ensuremath{\mathcal{A}} \rightarrow \ensuremath{\mathcal{A}}$ is protoadditive; \item the following conditions hold for any $n\geq 1$: \begin{itemize} \item[(a)] the canonical monomorphism $[\bigcap_{1\leq i\leq n}K[a_i]]_{\ensuremath{\mathcal{B}}}\rightarrow A_{\textrm{top}}$ is normal for any $n$-fold $\ensuremath{\mathcal{N}}$-extension $A$; \item[(b)] the $n$-fold normal extensions are precisely the $n$-fold $\ensuremath{\mathcal{N}}$-extensions $A$ with $\bigcap_{1\leq i\leq n}K[a_i]\in\ensuremath{\mathcal{B}}$; \end{itemize} \item the following conditions hold for any $n\geq 1$: \begin{itemize} \item[(a)] the canonical monomorphism $[\bigcap_{1\leq i\leq n}K[a_i]]_{\ensuremath{\mathcal{B}}}\rightarrow A_{\textrm{top}}$ is normal for any $n$-fold $\ensuremath{\mathcal{N}}$-extension $A$; \item[(b)] for any $n$-fold $\ensuremath{\mathcal{N}}$-extension $A$, the reflection in $\ensuremath{\mathsf{NExt}}^n_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{N}})}(\ensuremath{\mathcal{A}})$ is given by the quotient $A/\iota^n[\bigcap_{1\leq i\leq n}K[a_i]]_{\ensuremath{\mathcal{B}}}$; \end{itemize} \item either (3) or (4) holds for some $n\geq 1$. \end{enumerate} \end{theorem} \begin{proof} $(1) \Leftrightarrow (2)$ was proved in Proposition \ref{characterisationbyextensions}. 
To see that $(5)$ implies $(1)$, we note that $I_k$ preserves binary products, for any $k\geq 0$, by the ``relative version" of Lemma \ref{Marino}. Taking this into account, we see from the construction of $I_n$ that $I_n(\iota^nA)=\iota^nI(A)$ for any $n\geq 1$ and any object $A$ of $\ensuremath{\mathcal{A}}$ (note that $\iota^nA$ is indeed an $n$-fold $\ensuremath{\mathcal{N}}$-extension), so that the validity of conditions $(a)$ and $(b)$ for some $n\geq 1$ implies that for $n=1$. Proposition \ref{characterisationbyextensions} then implies that $I$ is protoadditive. The other implications follow easily by induction on $n$, using Proposition \ref{characterisationbyextensions} and its ``relative version", and Lemma \ref{centralisationisprotoadditive}. \end{proof} \section{Composition of Birkhoff and protoadditive reflections} We have seen in Section \ref{sectionderived} that a torsion theory $(\ensuremath{\mathcal{T}},\ensuremath{\mathcal{F}})$ on a homological category $\ensuremath{\mathcal{A}}$ satisfying conditions $(P)$ and $(N)$ induces a chain of torsion theories $(\ensuremath{\mathcal{T}}_n,\ensuremath{\mathcal{F}}_n)$ on the categories $\ensuremath{\mathsf{Ext}}^n_{\ensuremath{\mathcal{N}}}(\ensuremath{\mathcal{A}})$ of $n$-fold $\ensuremath{\mathcal{N}}$-extensions such that, for each $n\geq 1$, the torsion-free subcategory $\ensuremath{\mathcal{F}}_n$ consists of all $n$-fold $\ensuremath{\mathcal{N}}$-extensions that are normal extensions with respect to the Galois structure $\Gamma_{\ensuremath{\mathcal{F}}_{n-1}}$ associated with the torsion theory $(\ensuremath{\mathcal{T}}_{n-1},\ensuremath{\mathcal{F}}_{n-1})$. Moreover, an $n$-fold $\ensuremath{\mathcal{N}}$-extension $A$ with ``initial ribs" $a_i$ ($1\leq i\leq n$) is normal if and only if the intersection $\bigcap_{1\leq i\leq n}K[a_i]$ lies in $\ensuremath{\mathcal{F}}$. Similarly, a Birkhoff subcategory $\ensuremath{\mathcal{B}}$ of a semi-abelian category $\ensuremath{\mathcal{A}}$ induces a chain of ``strongly $\ensuremath{\mathcal{N}}^{n}$-Birkhoff subcategories" $\ensuremath{\mathcal{B}}_n$ of the categories $\ensuremath{\mathsf{Ext}}^n_{\ensuremath{\mathcal{N}}}(\ensuremath{\mathcal{A}})$, where, for each $n\geq 1$, $\ensuremath{\mathcal{N}}^n$ denotes the class of all $(n+1)$-fold $\ensuremath{\mathcal{N}}$-extensions. Moreover, in the case where the reflector $I\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{B}}$ is protoadditive, the $n$-fold normal (=central) extensions admit the same simple description as in the example of a torsion theory satisfying $(P)$ and $(N)$, as we have explained in Section \ref{Birkhoffsection}. In general, it is not always easy to characterise the $n$-fold normal extensions (for $n\geq 1$) with respect to a particular Birkhoff subcategory. However, we are going to show that the problem can sometimes be simplified by decomposing the considered adjunction into a pair of adjunctions such that one of the reflectors is protoadditive. We shall explain this in the present section. 
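Before turning to the general setting, it may be useful to keep a guiding example in mind; we only sketch it, leaving the verifications aside. Take for $\ensuremath{\mathcal{A}}$ the category of groups, for $\ensuremath{\mathcal{B}}=\ensuremath{\mathsf{Ab}}$ its Birkhoff subcategory of abelian groups, and for $\ensuremath{\mathcal{C}}$ the full subcategory of $\ensuremath{\mathsf{Ab}}$ of torsion-free abelian groups, so that the composite reflector is given by \[ A\;\mapsto\; A/[A,A]\;\mapsto\; \bigl(A/[A,A]\bigr)/\,T\bigl(A/[A,A]\bigr), \] where $T(X)$ denotes the torsion subgroup of $X$. Here the first reflector is Birkhoff but not protoadditive, while the second one is protoadditive (being additive) but not Birkhoff, since $\ensuremath{\mathcal{C}}$ is not closed under quotients in $\ensuremath{\mathsf{Ab}}$. Granting the admissibility of $\ensuremath{\mathcal{C}}$ with respect to normal epimorphisms, Proposition \ref{composite} below identifies the normal extensions relative to the composite reflection as precisely the central extensions of groups whose kernel is torsion-free.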
In fact, we shall consider, more generally, composite adjunctions \begin{equation}\label{compositeadj} \xymatrix@=30pt{ {\ensuremath{\mathcal{A}} \, } \ar@<1ex>[r]_-{^{\perp}}^-{I} & {\, \ensuremath{\mathcal{B}} \, } \ar@<1ex>[l]^H \ar@<1ex>[r]_-{^{\perp}}^-{J} & \ensuremath{\mathcal{C}} \ar@<1ex>[l]^G } \end{equation} where $\ensuremath{\mathcal{A}}$ is semi-abelian, $\ensuremath{\mathcal{B}}$ a Birkhoff subcategory of $\ensuremath{\mathcal{A}}$, and where $\ensuremath{\mathcal{C}}$ can be \emph{any} (normal epi)-reflective subcategory of $\ensuremath{\mathcal{B}}$, admissible with respect to $\ensuremath{\mathcal{N}}$, with a protoadditive reflector (but not necessarily Birkhoff). As we shall see, such a situation induces a chain of Galois structures of higher normal extensions such that, for $n\geq 1$, an $n$-fold $\ensuremath{\mathcal{N}}$-extension $A$ in $\ensuremath{\mathcal{A}}$ is normal with respect to $\Gamma_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{N}})}$ if and only if it is normal with respect to $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{N}})}$ and the intersection $\bigcap_{1\leq i\leq n}K[a_i]$ lies in $\ensuremath{\mathcal{C}}$. Here we have written, as before, $a_i$ for the ``initial ribs" of $A$. First, we consider the one-dimensional case (see also \cite{DEG}): \begin{proposition}\label{composite} Consider the composite reflection \eqref{compositeadj} where $\ensuremath{\mathcal{A}}$ is a semi-abelian category, $\ensuremath{\mathcal{B}}$ a Birkhoff subcategory of $\ensuremath{\mathcal{A}}$ and $\ensuremath{\mathcal{C}}$ a (normal epi)-reflective subcategory of $\ensuremath{\mathcal{B}}$, admissible with respect to normal epimorphisms, with protoadditive reflector $J$. Then the composite reflector $J\circ I$ is admissible with respect to normal epimorphisms and, for any normal epimorphism $f \colon A \rightarrow B$ in $\ensuremath{\mathcal{A}}$, the following conditions are equivalent: \begin{enumerate} \item $f \colon A \rightarrow B$ is a normal extension with respect to $\Gamma_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{N}})}$; \item $f \colon A \rightarrow B$ is a central extension with respect to $\Gamma_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{N}})}$; \item $K[f] \in \ensuremath{\mathcal{C}} $ and $f \colon A \rightarrow B$ is a $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{N}})}$-normal extension. \end{enumerate} \end{proposition} \begin{proof} The admissibility of $J\circ I$ is clear, while the implication $(1) \Rightarrow (2)$ holds by definition. $(2) \Rightarrow (3)$ Let $p:E \rightarrow B$ be an normal epimorphism such that $p^*(f)$ is $\Gamma_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{N}})}$-trivial. Then in the following commutative diagram the composite of the left hand pointing squares is a pullback (here $\eta$ and $\mu$ are the reflection units): \[ \xymatrix@=35pt{ JI(E\times_BA) \ar[d]_{JI(p^*(f))} & I(E\times_BA) \ar[d]_{I(p^*(f))} \ar[l]_-{\mu_{I(E\times_BA)}} & E\times_B A \ar@{}[rd]|<<{\copy\pullbackbox}\ar[d]_{p^*(f)} \ar[l]_-{\eta_{E\times_BA}} \ar[r] & A \ar[d]^f\\ JI(E) & I(E) \ar[l]^{\mu_{I(E)}} & E \ar[l]^{\eta_E} \ar[r]_p & B} \] This implies, on the one hand, that $p^*(f)$ and $\eta_{E\times_BA}$ are jointly monomorphic, and, consequently, that the middle square is a pullback, since it is a double $\ensuremath{\mathcal{N}}$-extension, because $\ensuremath{\mathcal{B}}$ is a Birkhoff subcategory of $\ensuremath{\mathcal{A}}$. 
Hence, $f$ is a $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{N}})}$-central extension, and we know that, with respect to a Birkhoff subcategory, the central and normal extensions coincide \cite{JK}. On the other hand, since also the right hand square is a pullback, we have isomorphisms \[ K[JI(p^*(f))] \cong K[p^*(f)] \cong K[f], \] so that $K[f]\in\ensuremath{\mathcal{C}}$, since $\ensuremath{\mathcal{C}}$, being a reflective subcategory, is closed under limits in $\ensuremath{\mathcal{A}}$. $(3) \Rightarrow (1)$ Now let $f \colon A \rightarrow B$ be a normal epimorphism satisfying $(3)$. Consider the commutative diagram $$ \xymatrix@=40pt{R[f] \ar[d]_{\pi_1} \ar[r]^-{\eta_{R[f]}} & I(R[f]) \ar[d]^{I(\pi_1)} \ar[r]^-{\mu_{I(R[f])}} & JI (R[f]) \ar[d]^{JI(\pi_1)} \\ A \ar[r]_{\eta_A}& I(A) \ar[r]_{\mu_{I(A)}} &JI(A)} $$ where $\pi_1$ is the first projection of the kernel pair of $f$. By assumption, its left hand square is a pullback. Consequently, there is an isomorphism $K[\pi_1] \cong K[ I(\pi_1)]$, so that $K[I(\pi_1)]$ lies in $\ensuremath{\mathcal{C}}$ because $K[\pi_1]\cong K[f]$ lies in $\ensuremath{\mathcal{C}}$, by assumption. Since the reflector $J$ is protoadditive and the category $\ensuremath{\mathcal{A}}$ is protomodular, this implies that also the right hand square is a pullback. \end{proof} We continue with a higher dimensional version of this result. For this, let us first of all remark that Proposition \ref{composite} can be ``relativised" with respect to a class $\ensuremath{\mathcal{E}}$ of morphisms in the semi-abelian category $\ensuremath{\mathcal{A}}$ satisfying Conditions \ref{extension}. More precisely, if we have a composite adjunction \begin{equation}\label{relativecompositeadj} \xymatrix@=30pt{ {\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}} \, } \ar@<1ex>[r]_-{^{\perp}}^-{I} & {\, \ensuremath{\mathcal{B}} \, } \ar@<1ex>[l]^H \ar@<1ex>[r]_-{^{\perp}}^-{J} & \ensuremath{\mathcal{C}} \ar@<1ex>[l]^G } \end{equation} with $\ensuremath{\mathcal{B}}$ a strongly $\ensuremath{\mathcal{E}}$-Birkhoff subcategory of $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$ and $\ensuremath{\mathcal{C}}$ a full $\ensuremath{\mathcal{E}}$-reflective subcategory of $\ensuremath{\mathcal{B}}$, admissible with respect to $\ensuremath{\mathcal{E}}$, with protoadditive reflector $J$, then an $\ensuremath{\mathcal{E}}$-extension $f\colon A\rightarrow B$ is normal with respect to $\Gamma_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{E}})}=(\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}},\ensuremath{\mathcal{C}},J\circ I,H\circ G,\ensuremath{\mathcal{E}})$ if and only if it is a $\Gamma_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{E}})}$-central extension if and only if $K[f]\in\ensuremath{\mathcal{C}}$ and $f$ is a $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}$-normal extension. We leave it to the reader to verify that the proof of Proposition~\ref{composite} remains valid under our assumptions. Now let us consider a composite reflection \eqref{relativecompositeadj} satisfying the conditions above. Write $\eta$ and $\mu$ for the units of the reflections $I$ and $J$, respectively, and $[-]_{\ensuremath{\mathcal{C}}}\colon \ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}\rightarrow\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$ for the radical induced by the $\ensuremath{\mathcal{E}}$-reflection $J\circ I$. 
We have the following property: \begin{lemma}\label{Mathieu} For any $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}$-normal extension $f\colon A\rightarrow B$, the monomorphism $\ensuremath{\mathsf{ker\,}}(f)\circ \ensuremath{\mathsf{ker\,}}(\mu\circ\eta)_{K[f]}\colon [K[f]]_{\ensuremath{\mathcal{C}}}\rightarrow A$ is normal. \end{lemma} \begin{proof} First of all note that the radical $[-]_{\ensuremath{\mathcal{C}}}\colon \ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}\rightarrow\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$ is well-defined since, for any object $A$ of $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$, the unit $(\mu\circ \eta)_A\colon A\rightarrow JI(A)$ is an $\ensuremath{\mathcal{E}}$-extension, so that its kernel lies indeed in $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$. Now consider the short exact sequence \[ \xymatrix{ 0\ar[r] & K[f] \ar[r]^{\ensuremath{\mathsf{ker\,}}(\pi_1)} & R[f] \ar[r]^{\pi_1} & A \ar[r] & 0} \] where $\pi_1$ denotes the first projection of the kernel pair of $f$. It is preserved by $I$ since $\pi_1$ is a trivial extension and $\eta_{K[f]}\colon K[f]\rightarrow I(K[f])$ an isomorphism. Hence, it is preserved by $J\circ I$ since $J$ is protoadditive. In particular, we have that $JI(\ensuremath{\mathsf{ker\,}}(\pi_1))$ is a monomorphism, so that the left hand square in the morphism \[ \xymatrix{ 0 \ar[r] & [K[f]]_{\ensuremath{\mathcal{C}}} \ar[d] \ar[r] & K[f] \ar[d]^{\ensuremath{\mathsf{ker\,}}(\pi_1)} \ar[r] & JI(K[f]) \ar[d]^{JI(\ensuremath{\mathsf{ker\,}}(\pi_1))} \ar[r] & 0\\ 0 \ar[r] & [R[f]]_{\ensuremath{\mathcal{C}}} \ar[r] & R[f] \ar[r] & JI(R[f]) \ar[r] & 0} \] of short exact sequences is a pullback. It follows that the monomorphism $\ensuremath{\mathsf{ker\,}}(\pi_1)\circ \ensuremath{\mathsf{ker\,}}((\mu\circ\eta)_{K[f]})$ is normal. Hence, so is its normal image along the second projection $\pi_2\colon R[f]\rightarrow A$ of the kernel pair of $f$, and this is exactly the monomorphism $\ensuremath{\mathsf{ker\,}}(f)\circ \ensuremath{\mathsf{ker\,}}(\mu\circ\eta)_{K[f]}\colon [K[f]]_{\ensuremath{\mathcal{C}}}\rightarrow A$. 
\end{proof} The above lemma, together with the ``relative'' version of Proposition \ref{composite}, now allows us to prove that the pair of reflections \eqref{relativecompositeadj} induces a pair of reflections ``at the level of extensions'', in the following sense: \begin{lemma}\label{doublecentralisation} The pair of reflections \eqref{relativecompositeadj} induces new reflections \[ \xymatrix{ \ensuremath{\mathsf{Ext}}_{\ensuremath{\mathcal{E}}}(\ensuremath{\mathcal{A}}) \ar@<1 ex>[r]^-{I_1} \ar@{}[r]|-{\perp} & \ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}}) \ar@{}[r]|{\perp} \ar@<1 ex>[l] \ar@<1 ex>[r]^{J_1} & \ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}}) \ar@<1 ex>[l]} \] where $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$ is a strongly $\ensuremath{\mathcal{E}}^1$-Birkhoff subcategory of $\ensuremath{\mathsf{Ext}}_{\ensuremath{\mathcal{E}}}(\ensuremath{\mathcal{A}})$ with reflector $I_1$ and $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$ is an $\ensuremath{\mathcal{E}}^1$-reflective subcategory of $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$, admissible with respect to the class of $\ensuremath{\mathcal{E}}^1$-extensions in $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$, with protoadditive reflector $J_1$. We have that $J_1$ sends a $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}$-normal extension $f\colon A\rightarrow B$ to the induced $\Gamma_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{E}})}$-normal extension $J_1(f)\colon A/[K[f]]_{\ensuremath{\mathcal{C}}}\rightarrow B$. \end{lemma} \begin{proof} We already know that $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$ is a strongly $\ensuremath{\mathcal{E}}^1$-Birkhoff subcategory of $\ensuremath{\mathsf{Ext}}_{\ensuremath{\mathcal{E}}}(\ensuremath{\mathcal{A}})$. Let us then prove, for any $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}$-normal extension $f\colon A\rightarrow B$, that the induced $\ensuremath{\mathcal{E}}$-extension $J_1(f)\colon A/[K[f]]_{\ensuremath{\mathcal{C}}}\rightarrow B$ is indeed its reflection in $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$. (Note that the monomorphism $[K[f]]_{\ensuremath{\mathcal{C}}}\rightarrow A$ is normal, by Lemma \ref{Mathieu}.) On the one hand we have that $J_1(f)$ is a $\Gamma_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{E}})}$-normal extension since $K[J_1(f)]=K[f]/[K[f]]_{\ensuremath{\mathcal{C}}}=J(K[f])$ by the ``double quotient'' isomorphism theorem, and because the reflector $J\colon \ensuremath{\mathcal{B}}\rightarrow\ensuremath{\mathcal{C}}$ is protoadditive, by assumption. 
On the other hand, if $g\colon C\rightarrow D$ is a $\Gamma_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{E}})}$-normal extension as well, and $(a,b)\colon f\rightarrow g$ is a morphism of $\ensuremath{\mathcal{E}}$-extensions, there exists a (unique) morphism $\overline{a}$ such that the diagram $$\xymatrix@1@=30pt{ A \ar@{->}`u[r]`[rr]^a[rr] \ar[r] \ar[d]_{f} & \frac{A}{[K[f]]_{\ensuremath{\mathcal{C}}}} \ar[d]^{J_1(f)} \ar@{.>}[r]^{\overline{a}} & C \ar[d]^g \\ B \ar@{=}[r] & B \ar[r]_b & D } $$ commutes. Indeed, it suffices to note that there is a commutative square \[ \xymatrix{ [K[f]]_{\ensuremath{\mathcal{C}}} \ar[r] \ar[d] & [K[g]]_{\ensuremath{\mathcal{C}}} \ar[d] \\ A\ar[r]_a & C} \] and that $[K[g]]_{\ensuremath{\mathcal{C}}}=0$. It follows that $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$ is a reflective subcategory of $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$ with reflector $J_1$. Since, for any $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}$-normal extension $f\colon A\rightarrow B$, the reflection unit \[ \xymatrix{ A \ar[r] \ar[d]_f & \frac{A}{[K[f]]_{\ensuremath{\mathcal{C}}}} \ar[d]\\ B \ar@{=}[r] & B} \] is clearly a double $\ensuremath{\mathcal{E}}$-extension, $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$ is an $\ensuremath{\mathcal{E}}^1$-reflective subcategory of $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$. Next we prove that the reflector $J_1$ is protoadditive. To this end we consider a split short exact sequence \[ \xymatrix{ 0 \ar[r] & K_1 \ar[r] \ar[d]_{k} & A_1 \ar@<-.8 ex> [r] \ar[d]^a & B_1 \ar[d]^b \ar[r] \ar@<-.8ex>[l] & 0\\ 0 \ar[r] & K_0 \ar[r] & A_0 \ar@<-.8 ex> [r] & B_0 \ar[r] \ar@<-.8ex>[l] & 0} \] in $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$ and we note that both rows are split short exact sequences in $\ensuremath{\mathcal{A}}$. By taking kernels vertically and then applying the radical $[-]_{\ensuremath{\mathcal{C}}}\colon \ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}\rightarrow\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$, whose restriction to $\ensuremath{\mathcal{B}}\rightarrow\ensuremath{\mathcal{B}}$ is protoadditive by Proposition \ref{reflector=radical}, we obtain a split short exact sequence which is the first row in the diagram \[ \xymatrix{ 0 \ar[r] & [K[k]]_{\ensuremath{\mathcal{C}}} \ar[r] \ar[d] & [K[a]]_{\ensuremath{\mathcal{C}}} \ar[d] \ar@<-.8 ex> [r] & [K[b]]_{\ensuremath{\mathcal{C}}} \ar[d] \ar@<-.8ex>[l] \ar[r] & 0\\ 0 \ar[r] & K_1 \ar[r] \ar[d] & A_1 \ar[d] \ar@<-.8 ex> [r] & B_1 \ar[d] \ar@<-.8ex>[l] \ar[r] & 0\\ 0 \ar[r] & \frac{K_1}{[K[k]]_{\ensuremath{\mathcal{C}}}} \ar[r] & \frac{A_1}{[K[a]]_{\ensuremath{\mathcal{C}}}} \ar@<-.8 ex> [r] & \frac{B_1}{[K[b]]_{\ensuremath{\mathcal{C}}}} \ar@<-.8ex>[l] \ar[r] & 0} \] Since also the second row is split exact, by assumption, the third row is a split short exact sequence as well, by the $3\times 3$ lemma. It follows that the reflector $J_1$ is protoadditive. Finally, we prove that the reflector $J_1$ is admissible. 
For this, we consider a pullback \[ \xymatrix{ & D \ar@{}[rrdd]|<<{\copy\pullbackbox} \ar@{.>}[dd] \ar[rr] && B \ar@{=}[dd] \\ P \ar@{}[rrdd]|<<{\copy\pullbackbox} \ar[ur]^p \ar[rr] \ar[dd] && A \ar[ur]_{f} \ar[dd] & \\ & D \ar@{.>}[rr] && B \\ C \ar[ur]^{g} \ar[rr] && \frac{A}{[K[f]]_{\ensuremath{\mathcal{C}}}} \ar[ur]_{J_1(f)} &} \] in $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$ of a reflection unit $f\rightarrow J_1(f)$ along some double $\ensuremath{\mathcal{E}}$-extension $g\rightarrow J_1(f)$, and we assume that $g\in \ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$. Notice that it is a pointwise pullback in $\ensuremath{\mathcal{A}}$. We have to prove that its image in $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$ by $J_1$ is still a pullback. Since the reflection unit $f\rightarrow J_1(f)$ is sent to an isomorphism, this amounts to proving that $J_1(p)\rightarrow J_1(g)$ is an isomorphism as well. For this it suffices to show that the morphism $P/[K[p]]_{\ensuremath{\mathcal{C}}}\rightarrow C/[K[g]]_{\ensuremath{\mathcal{C}}}$ is an isomorphism. Now, by taking kernels in the cube above, we obtain a pullback \[ \xymatrix{ K[p] \ar[r] \ar@{}[rd]|<{\copy\pullbackbox} \ar[d] & K[f] \ar[d] \\ K[g] \ar[r] & J(K[f])} \] in $\ensuremath{\mathcal{B}}$. Note that the object in the right hand lower corner is indeed $J(K[f])=K[f]/[K[f]]_{\ensuremath{\mathcal{C}}}$ by the ``double quotient'' isomorphism theorem, and that $K[g]\in \ensuremath{\mathcal{C}}$ because $g\in \ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$, by assumption. Moreover, we have that $K[g]\rightarrow J(K[f])$ is an $\ensuremath{\mathcal{E}}$-extension, by Condition \ref{extension}.5 for the class $\ensuremath{\mathcal{E}}^1$. Consequently, using the admissibility of $J$, we find that the image by $J$ of the above square is a pullback in $\ensuremath{\mathcal{C}}$, which implies that $J(K[p])\rightarrow J(K[g])$ is an isomorphism. It follows that in the diagram \[ \xymatrix{ 0 \ar[r] & [K[p]]_{\ensuremath{\mathcal{C}}} \ar[r] \ar[d] & K[p] \ar[r] \ar[d] & J(K[p]) \ar@{=}[d] \ar[r] & 0\\ 0 \ar[r] & [K[g]]_{\ensuremath{\mathcal{C}}} \ar[r] & K[g] \ar[r] & J(K[g]) \ar[r] & 0} \] of short exact sequences in $\ensuremath{\mathcal{B}}$, the left hand square is a pullback. Since, of course, also \[ \xymatrix{ K[p] \ar[r] \ar[d] & P \ar[d] \\ K[g] \ar[r] & C } \] is a pullback, we have that the left hand square in the diagram \[ \xymatrix{ 0 \ar[r] & [K[p]]_{\ensuremath{\mathcal{C}}} \ar[r] \ar[d] & P \ar[r] \ar[d] & \frac{P}{[K[p]]_{\ensuremath{\mathcal{C}}}} \ar[d] \ar[r] & 0\\ 0 \ar[r] & [K[g]]_{\ensuremath{\mathcal{C}}} \ar[r] & C \ar[r] & \frac{C}{[K[g]]_{\ensuremath{\mathcal{C}}}} \ar[r] & 0} \] of short exact sequences in $\ensuremath{\mathcal{A}}$ is a pullback, and this implies that the $\ensuremath{\mathcal{E}}$-extension $P/[K[p]]_{\ensuremath{\mathcal{C}}}\rightarrow C/[K[g]]_{\ensuremath{\mathcal{C}}}$ is a monomorphism (in $\ensuremath{\mathcal{A}}$), hence an isomorphism. 
\end{proof} Thanks to this lemma, we can repeatedly apply the ``relative'' version of Proposition \ref{composite}, and we obtain: \begin{theorem}\label{highercomposite} Let $\ensuremath{\mathcal{A}}$ be a semi-abelian category, $\ensuremath{\mathcal{B}}$ a Birkhoff subcategory of $\ensuremath{\mathcal{A}}$, and $\ensuremath{\mathcal{C}}$ a (normal epi)-reflective subcategory of $\ensuremath{\mathcal{B}}$, admissible with respect to normal epimorphisms, such that the reflector $J\colon \ensuremath{\mathcal{B}}\rightarrow\ensuremath{\mathcal{C}}$ is protoadditive. Then, for any $n\geq 1$ and any $n$-fold $\ensuremath{\mathcal{N}}$-extension $A$ in $\ensuremath{\mathcal{A}}$, the following conditions are equivalent: \begin{enumerate} \item $A$ is an $n$-fold normal extension with respect to $\Gamma_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{N}})}$; \item $A$ is an $n$-fold central extension with respect to $\Gamma_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{N}})}$; \item $\bigcap_{1\leq i\leq n}K[a_i]\in \ensuremath{\mathcal{C}}$ and $A$ is an $n$-fold normal extension with respect to~$\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{N}})}$. \end{enumerate} \end{theorem} We are mainly interested in the situation where, in the composite reflection \eqref{compositeadj}, $\ensuremath{\mathcal{C}}$ is a Birkhoff subcategory of $\ensuremath{\mathcal{B}}$, since in this case the construction of the composite reflectors $J_n\circ I_n$ ($n\geq 1$) obtained from Lemma \ref{doublecentralisation} can be simplified, as we shall see below. However, the case of a torsion-free $\ensuremath{\mathcal{C}}$ is of interest as well: \begin{proposition}\label{compositetorsion} Let $\ensuremath{\mathcal{A}}$ be a semi-abelian category, $\ensuremath{\mathcal{B}}$ a Birkhoff subcategory of $\ensuremath{\mathcal{A}}$, and $\ensuremath{\mathcal{C}}$ a (normal epi)-reflective subcategory of $\ensuremath{\mathcal{B}}$ whose reflector $J\colon \ensuremath{\mathcal{B}}\rightarrow\ensuremath{\mathcal{C}}$ is protoadditive. If $\ensuremath{\mathcal{C}}$ is a torsion-free subcategory of $\ensuremath{\mathcal{B}}$, then, for each $n\geq 1$, $\ensuremath{\mathsf{NExt}}^n_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{N}})}(\ensuremath{\mathcal{A}})$ is a torsion-free subcategory of $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{N}})}^n(\ensuremath{\mathcal{A}})$. \end{proposition} \begin{proof} It suffices to prove, for any composite adjunction \eqref{relativecompositeadj} with $\ensuremath{\mathcal{A}}$ semi-abelian, $\ensuremath{\mathcal{E}}$ a class of morphisms in $\ensuremath{\mathcal{A}}$ satisfying Conditions \ref{extension}, $\ensuremath{\mathcal{B}}$ a strongly $\ensuremath{\mathcal{E}}$-Birkhoff subcategory of $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$, and $\ensuremath{\mathcal{C}}$ an $\ensuremath{\mathcal{E}}$-reflective subcategory of $\ensuremath{\mathcal{B}}$ which is torsion-free and whose reflector is protoadditive, that $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$ is a torsion-free subcategory of $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$. 
But this follows easily from the construction of the reflector $J_1\colon \ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})\rightarrow \ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$ given in Lemma \ref{doublecentralisation}, which shows us that the associated radical $[-]_{1,\ensuremath{\mathcal{C}}}\colon \ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})\rightarrow \ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$ sends a $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}$-normal extension $f\colon A\rightarrow B$ to the unique morphism $[K[f]]_{\ensuremath{\mathcal{C}}}\rightarrow 0$, so that we clearly have that this radical is idempotent. By Theorem \ref{torsiontheorem} the proof is then complete. \end{proof} From now on, let us assume that the category $\ensuremath{\mathcal{C}}$ in the composite adjunction \eqref{compositeadj} is a Birkhoff subcategory of $\ensuremath{\mathcal{B}}$. Since double $\ensuremath{\mathcal{N}}$-extensions are stable under composition, $\ensuremath{\mathcal{C}}$ is then also a Birkhoff subcategory of $\ensuremath{\mathcal{A}}$. Theorem \ref{highercomposite} gives us a characterisation of the higher normal extensions with respect to $\Gamma_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{N}})}$, and Lemma~\ref{doublecentralisation} shows us how the functors $(J\circ I)_n$ are constructed. Using the following lemma, we shall be able to simplify this construction, by giving a description of the functors $[-]_{n,\ensuremath{\mathcal{C}}}$ in terms of $[-]_{n,\ensuremath{\mathcal{B}}}$ and $[-]_{\ensuremath{\mathcal{C}}}$ ($n\geq1$). Note that, in a semi-abelian category $\ensuremath{\mathcal{A}}$, any two normal subobjects $M\rightarrow A$ and $N\rightarrow A$ admit a supremum (in the lattice of normal subobjects of $A$) which can be obtained as the kernel of the ``diagonal'' $A\rightarrow P\cong A/(M\vee N)$ in the pushout diagram \[ \xymatrix{ A \ar[r] \ar[d] & A/N \ar[d]_>>{\copy\pushoutbox} \\ A/M \ar[r] & P.} \] Any subobject $S\rightarrow A$ admits a \emph{normal closure} $\overline{S}^A\rightarrow A$ obtained as the kernel of the cokernel of $S\rightarrow A$. \begin{lemma} Let $\ensuremath{\mathcal{E}}$ be a class of morphisms in a semi-abelian category $\ensuremath{\mathcal{A}}$ satisfying Conditions \ref{extension}, and $\ensuremath{\mathcal{B}}$ and $\ensuremath{\mathcal{C}}$ strongly $\ensuremath{\mathcal{E}}$-Birkhoff subcategories of $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$ such that $\ensuremath{\mathcal{C}}\subseteq \ensuremath{\mathcal{B}}$. If the comparison reflector $\ensuremath{\mathcal{B}}\rightarrow\ensuremath{\mathcal{C}}$ is protoadditive, then we have, for any $\ensuremath{\mathcal{E}}$-extension $f\colon A\rightarrow B$, the identity \[ [f]_{1,\ensuremath{\mathcal{C}}}=[f]_{1,\ensuremath{\mathcal{B}}}\vee \overline{[K[f]]}^{A}_{\ensuremath{\mathcal{C}}}. 
\] \end{lemma} \begin{proof} We know from the proof of Lemma \ref{doublecentralisation} that the reflection in $\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{C}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$ of an $\ensuremath{\mathcal{E}}$-extension $f\colon A\rightarrow B$ is given by the induced $\ensuremath{\mathcal{E}}$-extension \begin{equation}\label{complexreflection} J_1\circ I_1(f)=J_1(A/[f]_{1,\ensuremath{\mathcal{B}}}\rightarrow B)=\frac{A/[f]_{1,\ensuremath{\mathcal{B}}}}{[K[f]/[f]_{1,\ensuremath{\mathcal{B}}}]_{\ensuremath{\mathcal{C}}}}\rightarrow B \end{equation} (Notice that $K[f]/[f]_{1,\ensuremath{\mathcal{B}}}$ is indeed the kernel of $A/[f]_{1,\ensuremath{\mathcal{B}}}\rightarrow B$ by the ``double quotient" isomorphism theorem.) Now consider the following commutative diagram: \[ \xymatrix{ [K[f]]_{\ensuremath{\mathcal{C}}} \ar[r] \ar[rd] & \overline{[K[f]]}^{A}_{\ensuremath{\mathcal{C}}} \ar[r] \ar[d] & A \ar[r] \ar[d] & A/\overline{[K[f]]}^A_{\ensuremath{\mathcal{C}}} \ar[d]_>>{\copy\pushoutbox}\\ &[K[f]/[f]_{1,\ensuremath{\mathcal{B}}}]_{\ensuremath{\mathcal{C}}}\ar[r] & A/[f]_{1,\ensuremath{\mathcal{B}}} \ar[r] & A/([f]_{1,\ensuremath{\mathcal{B}}}\vee \overline{[K[f]]}^A_{\ensuremath{\mathcal{C}}})} \] Since the canonical morphism $K[f]\rightarrow K[f]/[f]_{1,\ensuremath{\mathcal{B}}}$ is an $\ensuremath{\mathcal{E}}$-extension, and $\ensuremath{\mathcal{C}}$ is a strongly $\ensuremath{\mathcal{E}}$-Birkhoff subcategory of $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$, we have that the skew morphism in the diagram is an $\ensuremath{\mathcal{E}}$-extension as well, which implies that $\overline{[K[f]]}^{A}_{\ensuremath{\mathcal{C}}}\rightarrow [K[f]/[f]_{1,\ensuremath{\mathcal{B}}}]_{\ensuremath{\mathcal{C}}}$ is an epimorphism. Since the right hand square is a pushout, this shows us that, in the bottom row, the right hand morphism is the cokernel of the left hand one, which is a normal monomorphism by Lemma \ref{Mathieu}. Together with \eqref{complexreflection}, this yields the needed identity. \end{proof} By repeatedly applying the previous lemma, we find, for any $n$-fold $\ensuremath{\mathcal{N}}$-extension $A$ with ``top'' object $A_{\textrm{top}}$ and ``initial ribs'' $a_i$ ($1\leq i\leq n$), that \begin{eqnarray*} [A]_{n,\ensuremath{\mathcal{C}}} & = & [A]_{n,\ensuremath{\mathcal{B}}}\vee \overline{[K[A]]}^{A_{\textrm{top}}}_{n-1,\ensuremath{\mathcal{C}}}\\ &=& [A]_{n,\ensuremath{\mathcal{B}}}\vee \overline{[K[A]]}^{A_{\textrm{top}}}_{n-1,\ensuremath{\mathcal{B}}}\vee \overline{[K[K[A]]]}^{A_{\textrm{top}}}_{n-2,\ensuremath{\mathcal{C}}}\\ &=& [A]_{n,\ensuremath{\mathcal{B}}}\vee \overline{[K[K[A]]]}^{A_{\textrm{top}}}_{n-2,\ensuremath{\mathcal{C}}}\\ &=& \cdots\\ &=& [A]_{n,\ensuremath{\mathcal{B}}} \vee \overline{\big[\bigcap_{1\leq i\leq n}K[a_i]\big]_{\ensuremath{\mathcal{C}}}}^{A_{\textrm{top}}} \end{eqnarray*} Here we used the fact that taking joins commutes with taking normal closures, and that $[K[A]]_{n-1,\ensuremath{\mathcal{B}}}\subseteq [A]_{n,\ensuremath{\mathcal{B}}}$ (as well as $[K[K[A]]]_{n-2,\ensuremath{\mathcal{B}}}\subseteq [K[A]]_{n-1,\ensuremath{\mathcal{B}}}$ , and so on), which follows easily, for instance from the previous lemma, by taking the two reflective subcategories $\ensuremath{\mathcal{B}}$ and $\ensuremath{\mathcal{C}}$ to be the same. 
Thus we have proved the following theorem: \begin{theorem}\label{compositecommutator} Consider the composite adjunction \eqref{compositeadj} where $\ensuremath{\mathcal{A}}$ is a semi-abelian category, $\ensuremath{\mathcal{B}}$ a Birkhoff subcategory of $\ensuremath{\mathcal{A}}$ and $\ensuremath{\mathcal{C}}$ a Birkhoff subcategory of $\ensuremath{\mathcal{B}}$ (hence, also of $\ensuremath{\mathcal{A}}$) such that the reflector $J\colon \ensuremath{\mathcal{B}}\rightarrow\ensuremath{\mathcal{C}}$ is protoadditive. Then, for any $n\geq 1$ and $n$-fold $\ensuremath{\mathcal{N}}$-extension $A$ in $\ensuremath{\mathcal{A}}$ with ``initial ribs $a_i$'' ($1\leq i\leq n$), we have the identity \[ [A]_{n,\ensuremath{\mathcal{C}}}= [A]_{n,\ensuremath{\mathcal{B}}} \vee \overline{\big[\bigcap_{1\leq i\leq n}K[a_i]\big]_{\ensuremath{\mathcal{C}}}}^{A_{\textrm{top}}} \] \end{theorem} The formula given by the above theorem can be further simplified in the following situation. Let $\ensuremath{\mathcal{B}}$ and $\ensuremath{\mathcal{B}}'$ be Birkhoff subcategories of a semi-abelian category $\ensuremath{\mathcal{A}}$ such that at least one of the reflectors is protoadditive---say the reflector $I'\colon \ensuremath{\mathcal{A}}\rightarrow \ensuremath{\mathcal{B}}'$. In this case, the restriction of this reflector to a functor $\ensuremath{\mathcal{B}}\rightarrow \ensuremath{\mathcal{B}}\cap\ensuremath{\mathcal{B}}'$ is protoadditive as well, so that if we put $\ensuremath{\mathcal{C}}=\ensuremath{\mathcal{B}}\cap\ensuremath{\mathcal{B}}'$, we are indeed in the situation of Theorem \ref{compositecommutator}. We will obtain a simplified description of the functors $[-]_{n,\ensuremath{\mathcal{B}}\cap\ensuremath{\mathcal{B}}'}\colon \ensuremath{\mathsf{Ext}}^n_{\ensuremath{\mathcal{N}}}(\ensuremath{\mathcal{A}})\rightarrow\ensuremath{\mathsf{Ext}}^n_{\ensuremath{\mathcal{N}}}(\ensuremath{\mathcal{A}})$ ($n\geq 0$) from the next lemma together with Theorem \ref{characterisationbyextensionshigher}. The latter tells us that $[A]_{n,\ensuremath{\mathcal{B}}'}=[\bigcap_{1\leq i\leq n}K[a_i]]_{\ensuremath{\mathcal{B}}'}$ for any $n\geq 1$ and any $n$-fold $\ensuremath{\mathcal{N}}$-extension $A$ in $\ensuremath{\mathcal{A}}$ with ``initial ribs $a_i$'' ($1\leq i\leq n$). \begin{lemma}\label{intersectionlemma} Let $\ensuremath{\mathcal{E}}$ be a class of morphisms in a semi-abelian category $\ensuremath{\mathcal{A}}$ satisfying Conditions \ref{extension}, and $\ensuremath{\mathcal{B}}$ and $\ensuremath{\mathcal{B}}'$ strongly $\ensuremath{\mathcal{E}}$-Birkhoff subcategories of $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$. Then \[ \ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}}\cap\ensuremath{\mathcal{B}}',\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})=\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})\cap\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}}',\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}}) \] and \[ [A]_{\ensuremath{\mathcal{B}}\cap\ensuremath{\mathcal{B}}'}=[A]_{\ensuremath{\mathcal{B}}}\vee [A]_{\ensuremath{\mathcal{B}}'}. 
\] \end{lemma} \begin{proof} For any $\ensuremath{\mathcal{E}}$-extension $f\colon A\rightarrow B$, consider the following commutative cube, where $\pi_1$ is the first projection of the kernel pair of $f$: \[ \xymatrix@=20pt{ & I'(R[f]) \ar@{}[rrdd] \ar@{.>}[dd] \ar[rr] && I'(I(R[f])) \ar[dd] \\ R[f] \ar[ur] \ar[rr] \ar[dd]_{\pi_1} && I(R[f]) \ar[ur] \ar[dd] & \\ & I'(A) \ar@{.>}[rr] && I'(I(A)) \\ A \ar[ur] \ar[rr] && I(A) \ar[ur] &} \] When $f\in\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}}\cap\ensuremath{\mathcal{B}}',\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$, the composite of the front and the right side squares is a pullback. Since the front square is a double $\ensuremath{\mathcal{E}}$-extension by the strong $\ensuremath{\mathcal{E}}$-Birkhoff property of $\ensuremath{\mathcal{B}}$, this implies that it is a pullback, hence $f\in \ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$. Similarly, one shows that $f\in \ensuremath{\mathsf{NExt}}_{({\ensuremath{\mathcal{B}}'},\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$. Conversely, if $f$ is both $\Gamma_{(\ensuremath{\mathcal{B}},\ensuremath{\mathcal{E}})}$-normal and $\Gamma_{({\ensuremath{\mathcal{B}}'},{\ensuremath{\mathcal{E}}})}$-normal, then the left hand and the front squares are pullbacks, and then also the right hand and back ones, since both $I$ and $I'$ preserve pullbacks of split epimorphisms along $\ensuremath{\mathcal{E}}$-extensions by the ``relative version'' of Lemma \ref{Marino}. Hence, $f\in\ensuremath{\mathsf{NExt}}_{(\ensuremath{\mathcal{B}}\cap\ensuremath{\mathcal{B}}',\ensuremath{\mathcal{E}})}(\ensuremath{\mathcal{A}})$. The second part of the statement follows from the fact that, for any $A$ in $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$, the following square is a pushout in $\ensuremath{\mathcal{A}}$, since $\ensuremath{\mathcal{B}}$ and $\ensuremath{\mathcal{B}}'$ are Birkhoff subcategories of $\ensuremath{\mathcal{A}}_{\ensuremath{\mathcal{E}}}$, \[ \xymatrix{ A \ar@{}[rd]|>>{\copy\pushoutbox}\ar[r] \ar[d] & I'(A) \ar[d]\\ I(A) \ar[r] & I'(I(A)),} \] which implies that $I'(I(A))=A/([A]_{\ensuremath{\mathcal{B}}}\vee [A]_{\ensuremath{\mathcal{B}}'})$. \end{proof} \begin{theorem}\label{compositeintersection} Let $\ensuremath{\mathcal{B}}$ and $\ensuremath{\mathcal{B}}'$ be Birkhoff subcategories of a semi-abelian category $\ensuremath{\mathcal{A}}$ such that the reflector $I'\colon \ensuremath{\mathcal{A}}\rightarrow \ensuremath{\mathcal{B}}'$ is protoadditive. Then, for any $n\geq 1$ and any $n$-fold $\ensuremath{\mathcal{N}}$-extension $A$ in $\ensuremath{\mathcal{A}}$ with ``initial ribs $a_i$" ($1\leq i\leq n$), there is an identity \[ [A]_{n,\ensuremath{\mathcal{B}}\cap \ensuremath{\mathcal{B}}'}=[A]_{n,\ensuremath{\mathcal{B}}}\vee [\bigcap_{1\leq i\leq n}K[a_i]]_{\ensuremath{\mathcal{B}}'}. \] \end{theorem} The functors $[-]_{n,\ensuremath{\mathcal{C}}}$ were used in \cite{EGV} to define the Hopf formulae for homology. Hence, the previous two theorems give us a simple description of the Hopf formulae: we now recall their definition. As before, we consider a semi-abelian category $\ensuremath{\mathcal{A}}$. 
By a \emph{projective presentation} of an object $A\in\ensuremath{\mathcal{A}}$ we mean a normal epimorphism $p\colon P\rightarrow A$ such that $P$ is projective with respect to normal epimorphisms: for any normal epimorphism $f\colon B\rightarrow C$ the map $\ensuremath{\mathrm{Hom}}_{\ensuremath{\mathcal{A}}}(P,f)\colon \ensuremath{\mathrm{Hom}}_{\ensuremath{\mathcal{A}}}(P,B)\rightarrow \ensuremath{\mathrm{Hom}}_{\ensuremath{\mathcal{A}}}(P,C)$ obtained by postcomposing with $f$ is surjective. We shall assume, from now on, that $\ensuremath{\mathcal{A}}$ has \emph{enough projectives}, i.e. that there exists a projective presentation of any object $A\in\ensuremath{\mathcal{A}}$. As before, we consider a Birkhoff subcategory $\ensuremath{\mathcal{B}}$ of $\ensuremath{\mathcal{A}}$ with reflector $I\colon \ensuremath{\mathcal{A}}\rightarrow\ensuremath{\mathcal{B}}$. Then for an object $A\in\ensuremath{\mathcal{A}}$ with projective presentation $p\colon P\rightarrow A$ the \emph{Hopf formula} for the second homology was defined in \cite{EverVdL1} as the quotient \begin{equation}\label{Hopfformula} \frac{[P]_{\ensuremath{\mathcal{B}}}\cap K[p]}{[p]_{1,\ensuremath{\mathcal{B}}}} \end{equation} As was shown in \cite{EverVdL1,EverVdL2} this object is independent, up to isomorphism, of the chosen projective presentation of $A$ and, when $\ensuremath{\mathcal{A}}$ is monadic over $\ensuremath{\mathsf{Set}}$, is isomorphic to the first Barr-Beck derived functor \cite{Barr-Beck} of $I$ in $A$ for the associated comonad on $\ensuremath{\mathcal{A}}$. We shall denote the quotient \eqref{Hopfformula} by $H_2(A,\ensuremath{\mathcal{B}})$. \begin{example} When $\ensuremath{\mathcal{A}}$ is the variety of groups and $\ensuremath{\mathcal{B}}$ the subvariety of abelian groups, then the above defined ``Hopf formula'' coincides with the classical Hopf formula for the second (integral) homology of a group $A$. \end{example} For $n\geq 1$, an \emph{$n$-fold projective presentation} of an object $A\in\ensuremath{\mathcal{A}}$ is an $n$-fold $\ensuremath{\mathcal{N}}$-extension $P$ such that the ``bottom vertex'' in the diagram of $P$ (an $n$-dimensional cube in $\ensuremath{\mathcal{A}}$) is $A$ and all other ``vertices'' are projective objects. It is easily seen (see \cite{Ev}) that such an $n$-fold projective presentation exists for every object $A$ as soon as $\ensuremath{\mathcal{A}}$ has enough projectives. One defines (see \cite{Ev,EGV}) the \emph{Hopf formula for the $(n+1)$st homology} of $A$ as the quotient \[ \frac{[P_{\textrm{top}}]_{\ensuremath{\mathcal{B}}}\cap \bigcap_{1\leq i\leq n}K[p_i]}{[P]_{n,\ensuremath{\mathcal{B}}}} \] where $P_{\textrm{top}}$ denotes the projective object that appears as the ``top vertex'' in the diagram of $P$, and the $p_i$ ($1\leq i\leq n$) denote the ``initial ribs'': the $n$ morphisms starting from $P_{\textrm{top}}$. Once again, this quotient is independent, up to isomorphism, of the choice of $n$-fold presentation $P$ of $A$ (see \cite{Ev}), and when $\ensuremath{\mathcal{A}}$ is monadic over $\ensuremath{\mathsf{Set}}$, it is isomorphic to the $n$-th Barr-Beck derived functor \cite{Barr-Beck} of $I$ in $A$ for the associated comonad on $\ensuremath{\mathcal{A}}$ (see \cite{EGV}). It will be denoted by $H_{n+1}(A,\ensuremath{\mathcal{B}})$. 
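As a quick illustration of formula \eqref{Hopfformula} (this simple computation is not taken from the cited references; it only uses the classical identifications $[P]_{\ensuremath{\mathsf{Ab}}}=[P,P]$ and $[p]_{1,\ensuremath{\mathsf{Ab}}}=[K[p],P]$ for the abelianisation of groups), take $\ensuremath{\mathcal{A}}=\ensuremath{\mathsf{Gp}}$, $\ensuremath{\mathcal{B}}=\ensuremath{\mathsf{Ab}}$ and the projective presentation $p\colon \mathbb{Z}\rightarrow\mathbb{Z}/m$ of a cyclic group, so that $K[p]=m\mathbb{Z}$. Since $\mathbb{Z}$ is abelian, all the commutators involved vanish, and
\[
H_2(\mathbb{Z}/m,\ensuremath{\mathsf{Ab}})=\frac{[\mathbb{Z},\mathbb{Z}]\cap m\mathbb{Z}}{[m\mathbb{Z},\mathbb{Z}]}=0,
\]
in accordance with the well-known fact that the Schur multiplier of a cyclic group is trivial.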
\begin{corollary}\label{compositehopf} With the same notations and assumptions as in Theorem \ref{compositecommutator}, and with the extra assumption that $\ensuremath{\mathcal{A}}$ has enough projectives, we have, for any object $A\in\ensuremath{\mathcal{A}}$ and $n\geq 1$, the identity \[ H_{n+1}(A,\ensuremath{\mathcal{C}})=\frac{[P_{\textrm{top}}]_{\ensuremath{\mathcal{C}}}\cap \bigcap_{1\leq i\leq n}K[p_i]}{[P]_{n,\ensuremath{\mathcal{B}}}\vee \overline{[\bigcap_{1\leq i\leq n}K[p_i]]}^{P_{\textrm{top}}}_{\ensuremath{\mathcal{C}}}} \] where $P$ is an arbitrary $n$-fold presentation of $A$, with ``top'' object $P_{\textrm{top}}$ and ``initial ribs'' $p_i$ ($1\leq i\leq n$). \end{corollary} \begin{corollary}\label{compositehopf2} With the same notations and assumptions as in Theorem \ref{compositeintersection}, and with the extra assumption that $\ensuremath{\mathcal{A}}$ has enough projectives, we have, for any object $A\in\ensuremath{\mathcal{A}}$ and $n\geq 1$, the identity \[ H_{n+1}(A,\ensuremath{\mathcal{B}}\cap\ensuremath{\mathcal{B}}')=\frac{([P_{\textrm{top}}]_{\ensuremath{\mathcal{B}}}\vee [P_{\textrm{top}}]_{\ensuremath{\mathcal{B}}'}) \cap \bigcap_{1\leq i\leq n}K[p_i]}{[P]_{n,\ensuremath{\mathcal{B}}}\vee [\bigcap_{1\leq i\leq n}K[p_i]]_{\ensuremath{\mathcal{B}}'}} \] where $P$ is an arbitrary $n$-fold presentation of $A$, with ``top'' object $P_{\textrm{top}}$ and ``initial ribs'' $p_i$ ($1\leq i\leq n$). \end{corollary} We conclude this section with some examples of situations where Theorems \ref{compositecommutator} and \ref{compositeintersection} and Corollaries \ref{compositehopf} and \ref{compositehopf2} apply. \noindent {\bf Groups with coefficients in abelian Burnside groups.} An example of the situation of Corollary \ref{compositehopf} is provided by any abelian Birkhoff subcategory $\ensuremath{\mathcal{C}}$ of a semi-abelian category $\ensuremath{\mathcal{A}}$, by taking for $\ensuremath{\mathcal{B}}$ the category of abelian objects in $\ensuremath{\mathcal{A}}$ \cite{BG1}. Indeed, in this case the reflector $\ensuremath{\mathcal{B}}\rightarrow \ensuremath{\mathcal{C}}$ is necessarily additive, hence protoadditive. For instance, $\ensuremath{\mathcal{A}}$ could be the variety $\ensuremath{\mathsf{Gp}}$ of groups and $\ensuremath{\mathcal{C}}$ the Burnside subvariety $B_k$ of abelian groups of exponent $k$ ($k\geq 1$), which consists of all abelian groups $A$ such that $ka=a+\dots +a=0$ for every element $a\in A$: \[ \xymatrix@=30pt{ {\ensuremath{\mathsf{Gp}} \, } \ar@<1ex>[r]_-{^{\perp}}^-{ab} & {\, \ensuremath{\mathsf{Ab}} \, } \ar@<1ex>[l]^H \ar@<1ex>[r]_-{^{\perp}}^-{J} & B_k \ar@<1ex>[l]^G } \] Let us denote, for any group $A$, the (normal) subgroup $\{ ka | a\in A\}$ by $kA$. Then from Lemma \ref{intersectionlemma} (with $\ensuremath{\mathcal{B}}=\ensuremath{\mathsf{Ab}}$ and $\ensuremath{\mathcal{B}}'$ the Burnside variety of arbitrary groups of exponent $k$, not necessarily abelian) we infer that $[A]_{B_k}$ is the (internal) product $kA\cdot [A,A]$ of $kA$ with the (ordinary) commutator subgroup $[A,A]$ of $A$. Since we have, for any $n\geq 1$, a description of the radical $[-]_{n,\ensuremath{\mathsf{Ab}}}$ in terms of group commutators (see \cite{EGV}), Corollary \ref{compositehopf} provides us with a description of the Hopf formulae. 
For instance, for $n=1$, we obtain, for any group $A$ and projective presentation $p\colon P\rightarrow A$ of $A$: \[ H_2(A,B_k)=\frac{(kP\cdot [P,P])\cap K[p]}{[K[p],P]\cdot kK[p]}, \] where the symbol $\cdot$ denotes the usual product of subgroups. Note that $kK[p]$ is a normal subgroup of $P$, and that the product of normal subgroups gives the \emph{supremum} as normal subgroups, in this situation. \noindent {\bf Semi-abelian compact algebras with coefficients in totally disconnected compact algebras.} Let $\mathbb T$ be a semi-abelian theory. By considering the abelian objects in the semi-abelian category $\mathsf{HComp}^{\mathbb{T}}$ of compact (Hausdorff) algebras we get the Birkhoff subcategory ${\ensuremath{\mathsf{Ab}} (\mathsf{HComp}^{\mathbb{T}}) }$ of $\mathsf{HComp}^{\mathbb{T}}$, called the category of \emph{abelian compact algebras} \cite{BC}. The abelianisation functor $\ensuremath{\mathsf{ab}} \colon \mathsf{HComp}^{\mathbb{T}} \rightarrow \ensuremath{\mathsf{Ab}}(\mathsf{HComp}^{\mathbb{T}})$ sends an algebra $A$ to its quotient $A/\overline{[A,A]}$ by the (topological) closure $\overline{[A,A]}$ in $A$ of the ``algebraic'' commutator $[A,A]$ computed in the semi-abelian variety $\mathsf{Set}^{\mathbb T}$. We then have the following Birkhoff reflection \begin{equation}\label{{abel}} \xymatrix{ {\mathsf{HComp}^{\mathbb{T}} }\,\, \ar@<1ex>[r]^-{\ensuremath{\mathsf{ab}}} & {\ensuremath{\mathsf{Ab}} (\mathsf{HComp}^{\mathbb{T}}) } \ar@<1ex>[l]^-{V}_-{_{\perp}}} \end{equation} where $V$ is the inclusion functor. In general, the categorical commutator (in the sense of Huq \cite{Huq}, see also \cite{BB}) of two normal closed subalgebras is simply given by the (topological) closure of the ``algebraic'' commutator in the corresponding category $\mathsf{Set}^{\mathbb T}$ of algebras: \begin{lemma}\label{closurecommutator} Let $h\colon H\rightarrow A$ and $k\colon K\rightarrow A$ be two normal closed subalgebras of a compact algebra $A$. Then the commutator of $H$ and $K$ is given by $$[H,K]_{\mathsf{HComp}^{\mathbb{T}}}=\overline{[H,K]}_{\mathsf{Set}^{\mathbb{T}}}.$$ \end{lemma} \begin{proof} By using the fact that the canonical morphism $H+K\rightarrow H\times K$ is an open surjection, it is easy to see that any morphism $\varphi\colon H\times K\rightarrow A$ in the category $\mathsf{Set}^{\mathbb{T}}$ such that $\varphi\circ (1_H,0)=h$ and $\varphi\circ (0,1_K)=k$ is also a morphism in the category $\mathsf{HComp}^{\mathbb{T}}$. We now show that the quotient $q\colon A\rightarrow A/\overline{[H,K]}_{\mathsf{Set}^{\mathbb{T}}}$ is universal in making $H$ and $K$ commute. On the one hand, since $[H,K]_{\mathsf{Set}^{\mathbb T}}\subseteq \overline{[H,K]}_{\mathsf{Set}^{\mathbb T}}$, one certainly has that $q(H)$ and $q(K)$ commute in $\mathsf{Set}^{\mathbb{T}}$, hence in $\mathsf{HComp}^{\mathbb{T}}$. On the other hand, given any other quotient $f\colon A\rightarrow B$ in $\mathsf{HComp}^{\mathbb{T}}$ such that $f(H)$ and $f(K)$ commute, we have that $$f(\overline{[H,K]}_{\mathsf{Set}^{\mathbb T}}) \subseteq \overline{f[H,K]}_{\mathsf{Set}^{\mathbb T}}= \overline{[f(H),f(K)]}_{\mathsf{Set}^{\mathbb T}} =\overline{0}=0,$$ from which it follows that there is a unique $a\colon A/\overline{[H,K]}_{\mathsf{Set}^{\mathbb T}}\rightarrow B$ such that $a\circ q=f$. 
\end{proof} We obtain an instance of the situation of Corollary \ref{compositehopf2} by choosing $\ensuremath{\mathcal{A}}$ to be the category $\mathsf{HComp}^{\mathbb{T}}$, $\ensuremath{\mathcal{B}}$ the category $\ensuremath{\mathsf{Ab}}(\mathsf{HComp}^{\mathbb{T}})$ of abelian compact algebras, and $\ensuremath{\mathcal{B}}'$ the category $\mathsf{TotDis}^{\mathbb T}$ of compact totally disconnected algebras. The intersection $\ensuremath{\mathcal{B}}\cap\ensuremath{\mathcal{B}}'$ in this case is the category $\ensuremath{\mathsf{Ab}} (\mathsf{TotDis}^{\mathbb{T}})$ of abelian totally disconnected algebras. We know from Example \ref{exproto}.\ref{exdisc} that the reflector $\mathsf{HComp}^{\mathbb{T}}\rightarrow\mathsf{TotDis}^{\mathbb T}$ is protoadditive, and the category $\mathsf{HComp}^{\mathbb{T}}$ has enough regular projectives, since it is monadic over the category of sets (see \cite{Man, BC}). Hence, we can indeed apply Corollary \ref{compositehopf2}: for instance, given a projective presentation $p \colon P \rightarrow A$ of a compact algebra $A$, the second homology algebra $H_2(A, \ensuremath{\mathsf{Ab}} (\mathsf{TotDis}^{\mathbb{T}}))$ of $A$ is given by: $$H_2(A, \ensuremath{\mathsf{Ab}} (\mathsf{TotDis}^{\mathbb{T}}) )= \frac{(\overline{[P,P]} \vee \Gamma_0 (P) )\cap K[p]}{\overline{[K[p] , P]} \vee \Gamma_0 (K[p]) } ,$$ where we have used Lemma \ref{closurecommutator} to compute the denominator. For some specific algebraic theories we can give a description of higher dimensional homology objects via Hopf formulae. For instance, let $\mathbb{T}$ be the theory of groups, so that $\ensuremath{\mathcal{A}}={\mathsf{Grp(HComp)}}$ is the category of compact groups, $\ensuremath{\mathcal{B}}={\mathsf{Ab(HComp)}}$ the category of abelian compact groups, $\ensuremath{\mathcal{B}}'$ the category of profinite groups (since a topological group is totally disconnected and compact if and only if it is profinite) and $\ensuremath{\mathcal{B}}\cap \ensuremath{\mathcal{B}}'=\mathsf{Ab(Prof)}$ the category of abelian profinite groups. We can then consider a double projective presentation $$ \label{doublext} \xymatrix{F \ar[r]^{} \ar[d] & F/K_1 \ar[d] \\ F/K_2 \ar[r] & G} $$ of a semi-abelian compact group $G$, so that $K_1$ and $K_2$ are closed normal subgroups of a free compact group $F$ with the property that both $F/K_1$ and $F/K_2$ are free. Then the third homology group of $G$ with coefficients in $\mathsf{Ab(Prof)}$ is given by $$H_3 (G, {\mathsf{Ab(Prof)} } ) = \frac{ (\overline{[F,F]}\cdot \Gamma_0(F))\cap K_1 \cap K_2}{\overline{[K_1,K_2]}\cdot \overline{[K_1 \cap K_2, F]}\cdot \Gamma_0(K_1 \cap K_2)},$$ where the symbol $\cdot$ denotes the product of normal subgroups, and the closure is the topological closure. \noindent {\bf Internal groupoids with coefficients in abelian objects.} Let $\ensuremath{\mathcal{A}}$ be a semi-abelian category with enough regular projectives and $\ensuremath{\mathsf{Gpd}}(\ensuremath{\mathcal{A}})$ the category of internal groupoids in $\ensuremath{\mathcal{A}}$. We obtain another instance of Corollary \ref{compositehopf2}, by taking for Birkhoff subcategories of $\ensuremath{\mathsf{Gpd}}(\ensuremath{\mathcal{A}})$ the category $\mathsf{Ab}(\ensuremath{\mathsf{Gpd}}(\ensuremath{\mathcal{A}}))$ of abelian objects in the category of groupoids in $\ensuremath{\mathcal{A}}$ and $\ensuremath{\mathcal{A}}$ (via the discrete functor $D\colon \ensuremath{\mathcal{A}}\rightarrow \ensuremath{\mathsf{Gpd}}(\ensuremath{\mathcal{A}})$). 
Their intersection is the category $\ensuremath{\mathsf{Ab}}(\ensuremath{\mathcal{A}})$ of abelian objects of $\ensuremath{\mathcal{A}}$. We know from Example \ref{exproto}.\ref{exgroupoids} that the connected components functor $\pi_0\colon \ensuremath{\mathsf{Gpd}}(\ensuremath{\mathcal{A}})\rightarrow \ensuremath{\mathcal{A}}$ is protoadditive, and it was shown in \cite{EG} that the category ${\ensuremath{\mathsf{Gpd}}(\ensuremath{\mathcal{A}}) }$ has enough regular projectives whenever $\ensuremath{\mathcal{A}}$ has enough regular projectives. Hence, we can apply Corollary \ref{compositehopf2} in this situation. For instance, let us consider a projective presentation $p=(p_0,p_1) \colon P \rightarrow A$ $$ \xymatrix{P_1 \ar[r]^{p_1} \ar@<-1.2 ex>[d]_{d} \ar@<+1.2 ex>[d]^{c}& A_1 \ar@<-1.2 ex>[d]_{d} \ar@<+1.2 ex>[d]^{c} \\ P_0 \ar[r]_{p_0} \ar[u] & A_0 \ar[u] } $$ of an internal groupoid $A= (A_1, A_0, m,d,c,i)$. We write $\sem{P,P}$ for the internal groupoid $([P_1, P_1], [P_0,P_0], \overline{m}, \overline{d}, \overline{c}, \overline{i})$, where the arrows $\overline{d}, \overline{c}$ and $\overline{i}$ are the restrictions of $d,c$ and $i$ to the largest commutators $[P_1,P_1]$ and $[P_0,P_0]$ of $P_1$ and $P_0$, respectively, and $\overline{m}$ the induced groupoid composition. In other words, $\sem{P,P}$ is the kernel, in the category ${\ensuremath{\mathsf{Gpd}}(\ensuremath{\mathcal{A}})}$, of the quotient sending the groupoid $P$ to a reflexive graph in $\mathsf{Ab}(\ensuremath{\mathcal{A}})$, universally (recall from \cite{Jon} that the category $\mathsf{Ab}(\ensuremath{\mathsf{Gpd}}(\ensuremath{\mathcal{A}}))$ is isomorphic to the category of reflexive graphs in $\mathsf{Ab}(\ensuremath{\mathcal{A}})$). Similar notation is used for the groupoid $\sem{K[p],P}$. If we write $\Gamma_0 (P)$ and $\Gamma_0 (K[p])$ for the full subgroupoids of the connected components of $0$ in $P$ and in $K[p]$, respectively, then we can express the second homology groupoid of $A$ with coefficients in $\ensuremath{\mathsf{Ab}}(\ensuremath{\mathcal{A}})$ as the quotient \[ H_2(A,\mathsf{Ab}(\ensuremath{\mathcal{A}}))=\frac{K[p] \cap (\Gamma_0(P) \vee \sem{P,P}) }{\sem{K[p],P} \vee \Gamma_0(K[p])}. \] Note that $\vee$ indicates the supremum, as normal subobjects, in the category $\mathsf{Gpd} (\ensuremath{\mathcal{A}})$. Now, let $\mathsf{Gpd}^k (\ensuremath{\mathcal{A}})$ denote the category of $k$-fold internal groupoids in $\ensuremath{\mathcal{A}}$, defined inductively by $\mathsf{Gpd}^k (\ensuremath{\mathcal{A}}) = \mathsf{Gpd} ( \mathsf{Gpd}^{k-1} (\ensuremath{\mathcal{A}}))$. It is clear that, also for $k\geq 2$, $\ensuremath{\mathcal{A}}$ is a Birkhoff subcategory of $\ensuremath{\mathsf{Gpd}}^k(\ensuremath{\mathcal{A}})$ with protoadditive reflector $\pi_0\circ \dots \circ \pi_0^k$, \[ \xymatrix{ \mathsf{Gpd}^k({\ensuremath{\mathcal{A}} }) \ar@<1ex>[r]^-{\pi_0^k} & {\mathsf{Gpd}^{k-1}({\ensuremath{\mathcal{A}} })\quad } \ar@<1ex>[l]^-{D^k}_-{_{\perp}} {\cdots \quad } \mathsf{Gpd}^2({\ensuremath{\mathcal{A}} }) \ar@<1ex>[r]^-{\pi_0^2} & {\ensuremath{\mathsf{Gpd}}(\ensuremath{\mathcal{A}}) }\,\, \ar@<1ex>[r]^-{\pi_0} \ar@<1ex>[l]^-{D^2}_-{_{\perp}} & {\ensuremath{\mathcal{A}}, \, } \ar@<1ex>[l]^-{D}_-{_{\perp}} } \] and that ${\ensuremath{\mathsf{Gpd}}^k(\ensuremath{\mathcal{A}})}$ has enough regular projectives. 
Hence, Corollary \ref{compositehopf2} provides us, for any $k\geq 1$, with a description of the homology objects of $k$-fold internal groupoids with coefficients in $\mathsf{Ab}(\ensuremath{\mathcal{A}})=\ensuremath{\mathcal{A}}\cap\ensuremath{\mathsf{Ab}}(\ensuremath{\mathsf{Gpd}}^k(\ensuremath{\mathcal{A}}))$, similar to the one above. \end{document}
arXiv
\begin{document} \title{The Matrix Chain Algorithm to Compile Linear Algebra Expressions} \section{Introduction} The need to translate linear algebra operations into efficient code arises in a multitude of applications. For instance, expressions such as $$b = S^H H^H \left(\sigma HH^H + Q \right)^{-1}r$$ and $$x = \left( \Sigma^T \Sigma + D^2 \right)^{-1} \Sigma^T b$$ occur in information theory \cite{albataineh2014} and regularization \cite{noschese2016}, respectively. Given such expressions, we are interested in the automatic generation of code that is at least as fast and as numerically stable as what an expert would produce. Conceptually, the problem is similar to how compilers cast scalar expressions in terms of the available instruction set. The corresponding problem for linear algebra expressions (involving matrices) is much more challenging, and requires expertise in both numerical linear algebra and high-performance computing. On the one hand, one wants to take advantage of highly optimized building blocks for matrix operations, such as those provided by the BLAS~\cite{dongarra1990} and LAPACK~\cite{anderson1999} libraries. On the other hand, transformations based on associativity, commutativity and distributivity play an essential role. A further complication comes from the fact that matrices frequently have structures and properties that can be exploited both to transform---and thus simplify---expressions, and to evaluate them more efficiently. The application of this kind of knowledge affects not only the computational cost, but also the necessary amount of storage space, and numerical accuracy. At the moment, there are two options for dealing with complex matrix expressions. One either has to map the expressions to kernels manually, or use high-level programming languages and environments such as Matlab and R. The first option involves a lengthy, error-prone process that usually requires a numerical linear algebra expert. The second option, using high-level programming languages, is a very convenient alternative in terms of productivity, but rarely leads to the same performance levels as code produced by an expert. As a simple example, consider an expression containing the inverse operator: in Matlab, this is directly mapped to an explicit matrix inversion, even though a solution that relies on linear systems is usually both faster and numerically more stable; in this case, it is up to the user to rewrite the inverse in terms of the slash ({\tt/}) or backslash ({\tt\textbackslash}) operators, which solve linear systems. Products are another example: Let $M_1, M_2 \in \mathbb{R}^{n \times n}$, $x \in \mathbb{R}^{n}$. Depending on whether $M_1 M_2 x$ is computed from the left, that is, parenthesized as $(M_1 M_2) x$, or from the right ($M_1 (M_2 x)$), the calculation requires either $\mathcal{O}(n^3)$ or $\mathcal{O}(n^2)$ scalar operations. In Matlab, products are always evaluated from left to right~\cite{matlabdoc:short}. In other high-level languages such as Mathematica~\cite{mathematicadoc:short} and Julia~\cite{bezanson2012}, the situation is analogous. \begin{figure} \caption{Grammar describing the expressions we are concerned with. } \label{eq:rule1} \label{eq:rule2} \label{eq:rule3} \label{eq:rule4} \label{grammar} \end{figure} Our end goal is a compiler that takes a mathematical description of a linear algebra problem and finds an efficient mapping onto high-performance routines offered by libraries. 
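To make the impact of the evaluation order discussed above concrete, the following small snippet (Python with NumPy; purely illustrative and not part of any of the systems or algorithms mentioned in this document, with arbitrary sizes) times both parenthesizations of $M_1 M_2 x$ and verifies that they agree up to round-off:
\begin{verbatim}
import time
import numpy as np

n = 3000
M1 = np.random.rand(n, n)
M2 = np.random.rand(n, n)
x = np.random.rand(n)

t0 = time.perf_counter()
left = (M1 @ M2) @ x    # left to right: the O(n^3) matrix-matrix product dominates
t1 = time.perf_counter()
right = M1 @ (M2 @ x)   # right to left: only O(n^2) matrix-vector products
t2 = time.perf_counter()

print("left-to-right:  %.3f s" % (t1 - t0))
print("right-to-left:  %.3f s" % (t2 - t1))
print("max abs difference:", np.max(np.abs(left - right)))
\end{verbatim}
On a typical machine the right-to-left evaluation is faster by orders of magnitude; automating exactly this kind of decision is the purpose of the approach described next.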
In this document, we are concerned with the mapping of expressions consisting of products, as described by the grammar in Figure \ref{grammar} (e.g., $X := A B^T C$ and $x := A^{-1} B y$, where $A, B, C, X$ are matrices, and $x$ and $y$ are vectors), onto a set $K$ of computational kernels (e.g.: \texttt{C:=A*B}, \texttt{C:= A$^\text{\texttt -1}$*B}, \texttt{B:= A$^\text{\texttt -1}$}, \dots). For a given performance metric, we are interested in the optimal mapping. This problem can be seen as a generalization of the matrix chain problem: Given a \emph{matrix chain}, a product $M_1 \cdots M_k$ of matrices with different sizes, the question is how to parenthesize it so that the result can be computed with the minimal number of scalar operations. Our approach uses an extended version of the $\mathcal{O}(n^3)$ dynamic programming matrix chain algorithm presented in \cite{cormen1990}. We refer to the problem as the ``Generalized Matrix Chain Problem'' (GMCP) and call the presented algorithm ``GMC algorithm''. \section{Generalizations} We extend the original matrix chain algorithm in four ways: \paragraph{Operations} The GMC algorithm is able to deal with the transpose and inverse as additional operators. The combination of those operators with the multiplication leads to a rich set of different expressions, for example $A B^T$, $A^{-1} B$, and $A^{-1} B^{-T}$. While mathematically all those expressions can be computed as a composition of explicit unary operations ($X:= A^{-1}$ and $X:= A^{T}$) and a plain multiplication ($X:=AB$), this is in many cases not advisable for performance and stability reasons. The selection of the best sequence of kernels is done by a search-based approach inspired by the linear algebra compiler CLAK~\cite{fabregat-traver2013a}. \paragraph{Properties} Many linear algebra operations can be sped up by taking advantage of the properties of the involved matrices. For example, the multiplication of two lower triangular matrices requires $n^3/3$ scalar operations, as opposed to $2n^3$ operations for the multiplication of two full matrices \cite{higham2008}. Furthermore, properties propagate with the application of kernels. Take the product $A B^T$ as an example. If $A$ is lower triangular and $B$ is upper triangular, it is possible to infer that the entire product is lower triangular as well. The GMC algorithm symbolically infers the properties of intermediate operands and uses those properties to select the most suitable kernels. \paragraph{Cost Function} The original matrix chain algorithm minimizes the number of scalar operations (FLOPs) necessary to compute the matrix chain. In the GMC algorithm, we allow the use of an arbitrary metric, which could be performance (FLOPS/sec), numerical accuracy, memory consumption, or a combination of multiple objectives. \paragraph{Indices} The grammar (Figure \ref{grammar}) allows matrices to be annotated with indices. Consider the assignment $X_{ij}:= A_i B C d_j$ as an example. Instead of one single chain, a two-dimensional grid of chains has to be computed. Clearly, some segments are common to multiple chains; for performance reasons it is therefore beneficial to reuse them. The GMC algorithm is able to find the optimal solution for indexed chains like this one. \section{The Algorithm} \begin{figure} \caption{The GMC algorithm.} \label{pseudocode} \end{figure} Figure \ref{pseudocode} shows the full algorithm. 
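For reference, a minimal sketch of the classical dynamic programming core that the GMC algorithm extends is given below (plain Python, FLOP-count metric only; this is the textbook algorithm of \cite{cormen1990}, not the GMC algorithm itself, and the variable names are our own):
\begin{verbatim}
def matrix_chain_order(dims):
    # dims[i-1] x dims[i] is the size of matrix M_i, for i = 1..n
    n = len(dims) - 1
    cost = [[0] * (n + 1) for _ in range(n + 1)]   # minimal #multiplications for M_i..M_j
    split = [[0] * (n + 1) for _ in range(n + 1)]  # optimal position of the outermost split
    for length in range(2, n + 1):                 # length of the sub-chain
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = float("inf")
            for k in range(i, j):
                q = cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if q < cost[i][j]:
                    cost[i][j], split[i][j] = q, k
    return cost, split

# Example: M1 (10x100), M2 (100x5), M3 (5x50); optimal cost 7500, i.e. (M1 M2) M3
cost, split = matrix_chain_order([10, 100, 5, 50])
print(cost[1][3], split[1][3])
\end{verbatim}
The generalizations listed above replace the single scalar cost \texttt{dims[i-1]*dims[k]*dims[j]} by a minimization over the applicable kernels in $K$, guided by the inferred operand properties and the chosen cost metric.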
The complexity of the GMC algorithm is $$\mathcal{O}(n^3(k^3 + \gamma + p))\text{,}$$ where $n$ is the length of the matrix chain, $k$ is the number of kernels, $\gamma$ is the number of indices occurring in the chain and $p$ is the number of properties that are considered. We stress that the $k^3$ term is an upper bound that will not be reached in practice. \section{Conclusion and Future work} We consider the GMC algorithm to be an important step towards the development of a compiler for linear algebra problems that finds optimized mappings to kernels by applying domain-specific knowledge. In the future, we will address problems such as common subexpression elimination and memory allocation. \addcontentsline{toc}{section}{References} \end{document}
arXiv
\begin{definition}[Definition:Bounded Below Mapping/Real-Valued/Unbounded] Let $f: S \to \R$ be a real-valued function. Then $f$ is '''unbounded below on $S$''' {{iff}} it is not bounded below on $S$: :$\neg \exists L \in \R: \forall x \in S: L \le \map f x$ Category:Definitions/Boundedness \end{definition}
ProofWiki
\begin{definition}[Definition:Norm/Bounded Linear Functional/Inner Product Space/Definition 4] Let $\mathbb F$ be a subfield of $\C$. Let $\struct {V, \innerprod \cdot \cdot}$ be an inner product space over $\mathbb F$ with $V \ne \set 0$. Let $L : V \to \mathbb F$ be a bounded linear functional. Let $\norm \cdot$ be the inner product norm for $\struct {V, \innerprod \cdot \cdot}$. The '''norm''' of $L$ is the infimum: :$\norm L = \inf \set {c > 0: \forall v \in V : \size {L v} \le c \norm v}$ As $L$ is bounded, it is assured that $\norm L < \infty$. \end{definition}
ProofWiki
\begin{definition}[Definition:P-Seminorm] Let $\struct {X, \Sigma, \mu}$ be a measure space. Let $p \in \R$, $p \ge 1$. Let $\map {\LL^p} \mu$ be Lebesgue $p$-space for $\mu$. The '''$p$-seminorm''' on $\map {\LL^p} \mu$ is the mapping $\norm \cdot_p : \map {\LL^p} \mu \to \R_{\ge 0}$ defined by: :$\ds \forall f \in \map {\LL^p} \mu: \norm f_p := \paren {\int \size f^p \rd \mu}^{1/p}$ That the '''$p$-seminorm''' is in fact a seminorm is proved on $p$-Seminorm is Seminorm. \end{definition}
ProofWiki
\begin{document} \title{Heterogeneously Coupled Maps: hub dynamics and emergence across connectivity layers. } \begin{abstract} The aim of this paper is to rigorously study the dynamics of Heterogeneously Coupled Maps (HCM). Such systems are determined by a network with heterogeneous degrees. Some nodes, called hubs, are very well connected while most nodes interact with few others. The local dynamics on each node is chaotic, coupled with other nodes according to the network structure. Such high-dimensional systems are hard to understand in full; nevertheless, we are able to describe the system over exponentially large time scales. In particular, we show that the dynamics of hub nodes can be very well approximated by a low-dimensional system. This allows us to establish the emergence of macroscopic behaviour such as coherence of dynamics among hubs of the same connectivity layer (i.e. with the same number of connections), and chaotic behaviour of the poorly connected nodes. The HCM we study provide a paradigm to explain why and how the dynamics of the network can change across layers. \end{abstract} {\bf Keywords: Coupled maps, ergodic theory, heterogeneous networks} \let\thefootnote\relax \footnote{\emph{Emails:} [email protected], [email protected], [email protected]} \footnote{\emph{Mathematics Subject Classification (2010):} Primary 37A30, 37C30, 37C40, 37D20, 37Nxx; Secondary 05C80} \tableofcontents \section{Introduction} Natural and artificial complex systems are often modelled as distinct units interacting on a network. Typically such networks have a heterogeneous structure characterised by different scales of connectivity \cite{BA}. Some nodes, called {\em hubs}, are highly connected while the remaining nodes have only a small number of connections (see Figure \ref{Fig1a} for an illustration). Hubs provide short pathways between nodes, making the network well connected and resilient, and they play a crucial role in the description and understanding of complex networks. In the brain, for example, hub neurons are able to synchronize while other neurons remain out of synchrony. This particular behaviour shapes the network dynamics towards a healthy state \cite{bonifazi2009gabaergic}. Surprisingly, disrupting synchronization between hubs can lead to malfunction of the brain. The fundamental dynamical role of hub nodes is not restricted to neuroscience, but is found in the study of epidemics \cite{Epidemic}, power grids \cite{Motter}, and many other fields. Large-scale simulations of networks suggest that the mere presence of hubs hinders global collective properties. That is, when the heterogeneity in the degrees of the network is strong, complete synchronization is observed to be unstable \cite{Nishikawa}. However, in certain situations hubs can undergo a transition to collective dynamics \cite{Paths,Hubs,CAS}. Despite the large amount of recent work, a mathematical understanding of dynamical properties of such networks remains elusive. In this paper, we introduce the concept of Heterogeneously Coupled Maps (referred to as HCM for short), where the heterogeneity comes from the network structure modelling the interaction. HCM describes the class of problems discussed above, incorporating the non-linear and extremely high-dimensional behaviour observed in these networks. High-dimensional systems are notoriously difficult to understand. HCM is no exception. 
Here, our approach is to describe the dynamics, at the expense of an arbitrarily small but fixed fluctuation, over exponentially large time scales. In summary, we obtain (i) \emph{Dimensional reduction for hubs for finite time.} Fixing a given accuracy, we can describe the dynamics of the hubs by a low dimensional model for a finite time $T$. The true dynamics of a hub and its low dimensional approximation are the same up to the given accuracy. The time $T$ for which the reduction is valid is exponentially large in the network size. For example, in the case of a star network (see Section \ref{Sec:StarNetExamp}), we can describe the hubs with $1$\% accuracy in networks with $10^6$ nodes for a time up to roughly $T = e^{30}$ for a set of initial conditions of measure roughly $1-e^{-10}$. This is arguably the only behaviour one will ever see in practice. (ii) \emph{Emergent dynamics changes across connectivity levels}. The dynamics of hubs can drastically change depending on the degree, and synchronization (or, more generally, phase locking) naturally emerges between hub nodes. This synchronization is not due to a direct mutual interaction between hubs (as in the usual \say{Huygens} synchronization) but results from the common environment that the hub nodes experience. Before presenting the general setting and precise statements in Section \ref{Sec:SettRes}, we informally discuss these results and illustrate the rich dynamics that emerges in HCM due to heterogeneity. \subsection{Emergent Dynamics on Heterogeneously Coupled Maps (HCM).} Figure \ref{Fig1a} is a schematic representation of a heterogeneous network with three different types of nodes: massively connected hubs (on top), moderately connected hubs having half as many connections as the previous ones (in the middle), and low degree nodes (at the bottom). Each of these three types constitutes a connectivity layer, meaning a subset of the nodes in the network having approximately the same degree. When uncoupled, each node is identical and supports chaotic dynamics. When the coupling is added, different behaviour can emerge for the three types of nodes. In fact, we will show examples where the dynamics of the hub at the top approximately follows a periodic motion, the hub in the middle stays near a fixed point, and the nodes at the bottom remain chaotic. Moreover, this behaviour persists for a time exponentially large in the size of the network, and it is robust under small perturbations. \begin{figure} \caption{The dynamics across connectivity layers change depending on the connectivity of the hubs. We will exhibit an example where the hubs with the highest number of connections (in red, at the top) have periodic dynamics. In the second connectivity layer, where hubs have half the number of connections (in blue, in the middle), the dynamics sits around a fixed point. In the bottom layer of poorly connected nodes the dynamics is chaotic. (Only one hub has been drawn on the top two layers for clarity of the picture.) } \label{Fig1a} \end{figure}\\ \noindent {\bf Synchronization because of common environment}. Our theory uncovers the mechanism responsible for the high correlations among the hub states which are observed experimentally and numerically. The mechanism turns out to be different from synchronization (or phase locking) due to mutual interaction, i.e. different from \say{Huygens} synchronization. In HCM, hubs display highly correlated behaviour even in the absence of direct connections between themselves.
The poorly connected layer, consisting of a huge number of weakly connected nodes, plays the role of a kind of {\em \say{heat bath}} providing a common forcing to the hubs, which is responsible for the emergence of coherence. \noindent \subsection{Hub Synchronization and Informal Statement of Theorem~\ref{Thm:Main}} {\bf The Model.} {\it A network of coupled dynamical systems} is the datum $(G,f,h,\alpha)$, where $G$ is a labelled graph on the set of nodes $\mathcal N=\{1,...,N\}$, $f : \mathbb T \rightarrow \mathbb T$ is the local dynamics at each node of the graph, $h\colon \mathbb T\times\mathbb T \to \mb R$ is a coupling function that describes pairwise interaction between nodes, and $\alpha\in \mb R$ is the coupling strength. We take $f$ to be a Bernoulli map, $z \mapsto \sigma z \, \mbox{mod 1}$, for some integer $\sigma>1$. This is in agreement with the observation that the local dynamics is chaotic in many applications \cite{Izhikevich2007dynamical,weiss1988,shil2001}. The graph $G$ can be represented by its adjacency matrix $A = (A_{in})$ which determines the connections among nodes of the graph. If $A_{in} = 1$, then there is a directed edge of the graph going from $n$ to $i$; otherwise $A_{in}=0$. The degree $d_i:=\sum_{n=1}^N A_{in}$ is the number of incoming edges at $i$. For the sake of simplicity, in this introductory section we consider undirected graphs ($A$ is symmetric), unless otherwise specified, but our results hold in greater generality (see Section~\ref{Sec:SettRes}). The dynamics on the network is described by \begin{equation} {z}_i(t+1) = {f}({ z}_i(t)) + \frac{\alpha}{\Delta} \sum_{n=1}^N A_{in} h(z_i(t), z_n(t))\mod 1,\quad\mbox{ for } i=1,\dots,N. \label{md1} \end{equation} In the above equations, $\Delta$ is a structural parameter of the network equal to the maximum degree. Rescaling the coupling strength in (\ref{md1}) by $\Delta$ allows us to probe the parameter regime for which interactions contribute with an order one term to the evolution of the hubs. For the type of graphs we will be considering, the degrees $d_i$ of the nodes $1,\dots,L$ are much smaller than the incoming degrees of the nodes $L+1,\dots,N$. A prototypical sequence of heterogeneous degrees is \begin{equation}\label{eq:layeredER} {\bf d}(N) = ( \underbrace{d,\dots, d}_{L}, \underbrace{\kappa_m\Delta,\dots,\kappa_m\Delta}_{M_m}, \dots, \underbrace{\kappa_{2} \Delta ,\dots, \kappa_{2} \Delta }_{M_{2}}, \underbrace{ \Delta,\dots, \Delta }_{M_1} ) \end{equation} with $\kappa_m<\dots < \kappa_2 <1$ fixed and $d/\Delta$ small when $N$ is large. We will refer to the blocks of nodes corresponding to $(\kappa_{i} \Delta ,\dots, \kappa_{i} \Delta)$ as the {\em $i$-th connectivity layer} of the network, and to a graph $G$ having sequence of degrees prescribed by Eq. \eqref{eq:layeredER} as a \emph{layered heterogeneous graph}. (We will make all this more precise below.) It is a consequence of the stochastic stability of uniformly expanding maps that, for very small coupling strengths, the network dynamics will remain chaotic. That is, there is an $\alpha_0>0$ such that for all $0\le \alpha < \alpha_0$ and any large $N$, the system will preserve an ergodic absolutely continuous invariant measure \cite{Keller2}. When $\alpha$ increases, one reaches a regime where the less connected nodes still feel a small contribution coming from interactions, while the hub nodes receive an order one perturbation.
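To fix ideas about the scales involved, the following back-of-the-envelope sketch (added purely for illustration; the numbers are arbitrary, and the bound $|h|\le 2$ refers to a coupling such as the one used later in Section \ref{subsec:predictions+experiments}) evaluates the crude bound $\frac{\alpha}{\Delta}\, d_i \max|h|$ on the size of the coupling term of Eq. \eqref{md1} for nodes in different layers of a layered heterogeneous graph.
\begin{verbatim}
# Back-of-the-envelope check (illustrative only; parameter values are arbitrary
# and not taken from the paper): the coupling term in Eq. (md1) is bounded by
# (alpha/Delta) * d_i * max|h|, so it is O(d/Delta) for low-degree nodes and
# O(kappa_i) for hubs in a layered heterogeneous graph.
alpha, h_max, Delta = 0.6, 2.0, 500        # h_max: e.g. |h| <= 2 for a sine coupling
for label, d_i in [("low-degree node, d = 20 ", 20),
                   ("hub, kappa_2 = 1/2      ", 250),
                   ("hub, kappa_1 = 1        ", 500)]:
    print(label, "-> coupling term at most", alpha * d_i / Delta * h_max)
\end{verbatim}
The printed bounds ($\approx 0.05$ for the low-degree nodes versus $0.6$ and $1.2$ for the hubs) illustrate the two regimes described above.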
In this situation, uniform hyperbolicity and the absolutely continuous invariant measure do not persist in general. \noindent {\bf The Low-Dimensional Approximation for the Hubs.} Given a hub $i_j\in\mathcal N$ in the $i$-th connectivity layer, our result gives a one-dimensional approximation of its dynamics in terms of $f$, $h$, $\alpha$ and the connectivity $\kappa_i$ of the layer. The idea is the following. Let $z_1,\dots,z_N\in \mathbb T$ be the state of each node, and assume that this collection of $N$ points are spatially distributed in $\mathbb T$ approximately according to the invariant measure $m$ of the local map $f$ (in this case the Lebesgue measure on $\mathbb T$). Then the coupling term in (\ref{md1}) is a \emph{mean field} (Monte-Carlo) approximation of the corresponding integral: \begin{equation} \frac{\alpha}{\Delta} \sum_{n=1}^N A_{i_jn} h(z_{i_j}, z_n) \approx \alpha \kappa_i\int h(z_{i_j},y) dm(y) \label{Eq:MeanField} \end{equation} where $d_{i_j}$ is the incoming degree at $i_j$ and $ \kappa_{i}:=d_{i_j} / \Delta$ is its normalized incoming degree. The parameter $\kappa_i$ determines the effective coupling strength. Hence, the right hand side of expression (\ref{md1}) at the node $i_j$ is approximately equal to the {\em reduced} map \begin{eqnarray}\label{Eq:RedEqInt} g_{i_j}(z_{i_j}):=f(z_{i_j})+\alpha \kappa_i\int h(z_{i_j},y) dm(y), \end{eqnarray} Equations (\ref{Eq:MeanField}) and (\ref{Eq:RedEqInt}) clearly show the \say{heat bath} effect that the common environment has on the highly connected nodes. \noindent {\it Ergodicity ensures the persistence of the heat bath role of the low degree nodes.} It turns out that the joint behaviour at poorly connected nodes is essentially ergodic. This will imply that at each moment of time the cumulative average effect on hub nodes is predictable and far from negligible. In this way, the low degree nodes play the role of a heat bath providing a sustained forcing to the hubs. Theorem~\ref{Thm:Main} below makes this idea rigorous for a suitable class of networks. We state the result precisely and in full generality in Section~\ref{Sec:SettRes}. For the moment assume that the number of hubs is small, does not depend on the total number $N$ of nodes, and that the degree of the poorly connected nodes is relatively small, namely only a logarithmic function of $N$. For these networks our theorem implies the following \\ \begin{minipage}[c]{14cm} {\it \emph{\textbf{Theorem A (Informal Statement in Special Case).}} Consider the dynamics \eqref{md1} on a layered heterogeneous graph. If the degrees of the hubs are sufficiently large, i.e. $\Delta = O(N^{1/2+\varepsilon})$, and the reduced dynamics $g_j$ are hyperbolic, then for any hub $j$ \[ z_{j}(t+1)=g_j(z_j(t))+\xi_j(t), \] where the size of fluctuations $\xi_j(t)$ is below any fixed threshold for $0\leq t\leq T$, with $T$ exponentially large in $\Delta$, and any initial condition outside a subset of measure exponentially small in $\Delta$. } \\ \end{minipage}\\ \noindent {\it Hub Synchronization Mechanism.} When $\xi_j(t)$ is small and $g_j$ has an attracting periodic orbit, then $z_j(t)$ will be close to this attracting orbit after a short time and it will remain close to the orbit for an exponentially large time $T$. 
As a consequence, if two hubs have approximately the same degree $d_j$, even if they share no common neighbour, they feel the same mean effect from the \say{heat bath} and so they appear to be attracted to the same periodic orbit (modulo small fluctuations) exhibiting highly coherent behaviour. The dimensional reduction provided in Theorem A is robust persisting under small perturbation of the dynamics $f$, of the coupling function $h$ and under addition of small independent noise. Our results show that the fluctuations $\xi(t)$, as functions of the initial condition, are small in the $C^0$ norm on most of the phase space, but notice that they can be very large with respect to the $C^1$ norm. Moreover, they are correlated, and with probability one, $\xi(t)$ will be large for some $t>T$. \noindent {\bf Idea of the Proof.} The proof of this theorem consists of two steps. Redefining {\it ad hoc} the system in the region of phase space where fluctuations are above a chosen small threshold, we obtain a system which exhibits good hyperbolic properties that we state in terms of invariant cone-fields of expanding and contracting directions. We then show that the set of initial conditions for which the fluctuations remain below this small threshold up to time $T$ is large, where $T$ is estimated as in the above informal statement of the theorem. \subsection{Dynamics Across Connectivity Scales: Predictions and Experiments}\label{subsec:predictions+experiments} In the setting above, consider $f(z) =2z $ mod\,1 and the following simple coupling function: \begin{equation}\label{h} h(z_i,z_n) = - \sin 2\pi z_i + \sin 2\pi z_n. \end{equation} Since $\int_{0}^1 \sin (2\pi y)\, dy=0$, the reduced equation, see Eq. (\ref{Eq:RedEqInt}), becomes \begin{eqnarray}\label{g} g_j(z_j) = T_{\alpha \kappa_j}(z_j) \mbox{ where }T_\beta(z)= 2 z - \beta \sin (2\pi z) \mod 1 . \end{eqnarray} A bifurcation analysis shows that for $\beta\in I_{E}:=[0,1/2\pi)$ the map is globally expanding, while for $\beta \in I_{F}:=(1/2\pi,3/2\pi)$ it has an attracting fixed point at $y=0$. Moreover, for $\beta \in I_{p}:=(3/2\pi,4/2\pi]$ it has an attracting periodic orbit of period two. In fact, it follows from a recent result in \cite{MR3336841} that the set of parameters $\beta$ for which $T_\beta$ is hyperbolic, as specified by Definition~\ref{Def:AxiomA} below, is open and dense. (See Proposition~\ref{Prop:AppTbeta1} and \ref{Prop:AppTbeta2} in the Appendix for a rigorous treatment). Figure~\ref{Fig2} shows the graphs and bifurcation diagram of $T_\beta$ varying $\beta$. \begin{figure} \caption{On the left the graphs of $T_\beta$ for $\beta=0,0.2,0.4,0.6$. On the right the bifurcation diagram for the reduced dynamics of hubs. We considered the identification $\mathbb{T} = [-1/2,1/2]/\!\! \sim$. We obtained the diagram numerically. To build the bifurcation diagram we reported a segment of a typical orbit of length $10^3$, for a collection of values of the parameter $\beta$.} \label{Fig2} \end{figure} \subsubsection{Predicted Impact of the Network Structure}\label{Sec:PredImpNet} To illustrate the impact of the structure, we fix the coupling strength $\alpha = 0.6$ and consider a heterogeneous network with four levels of connectivity including three types of hubs and poorly connected nodes. The first highly connected hubs have $\kappa_1=1$. In the second layer, hubs have half of the number of connections of the first layer $\kappa_2=1/2$. 
And finally, in the last layer, hubs have one fourth of the connections of the main hubs, $\kappa_3=1/4$. The parameter $\beta_j=\alpha\kappa_j$ determines the effective coupling, and so for the three levels $j=1,2,3$ we predict different types of dynamics by looking at the bifurcation diagram. The predictions are summarised in Table \ref{Dyn}. \begin{table}[htbp] \centering \begin{tabular}{lcl} Connectivity Layer & Effective Coupling $\beta$ & Dynamics \\ \hline \hline hubs with $\kappa_1 =1$ & 0.6& Periodic \\ hubs with $\kappa_2 =1/2$ & 0.3& Fixed Point \\ hubs with $\kappa_3 = 1/4$ & 0.15 & Uniformly Expanding \\ \hline \end{tabular} \caption{Dynamics across connectivity scales } \label{Dyn} \end{table} \subsubsection{Impact of the Network Structure in Numerical Simulations of Large-Scale Layered Random Networks} We have considered the above situation in numerical simulations where we took a layered random network, described in equation (\ref{eq:layeredER}) above, with $N=10^5$, $\Delta = 500$, $w = 20$, $m=2$, $M_1=M_2=20$, $\kappa_1=1$ and $\kappa_2=1/2$. The layer with highest connectivity is made of $20$ hubs connected to $500$ nodes, and the second layer is made of $20$ hubs connected to $250$ nodes. The local dynamics is again given by $f(z)= 2z $ $\mod1$, and the coupling is as in Eq. (\ref{h}). We fixed the coupling strength at $\alpha=0.6$ as in Section \ref{Sec:PredImpNet}, so that Table \ref{Dyn} summarises the theoretically predicted dynamical behaviour for the two layers. We choose initial conditions for each of the $N$ nodes independently according to the Lebesgue measure. Then we evolve this $10^5$-dimensional system, discarding the first $10^6$ iterations as transients and plotting the next $300$ iterations. The result is shown in Figure \ref{Simulation}. In fact, we found essentially the same picture when we only plotted the first $300$ iterations, with the difference that the first $10$ iterates or so are not yet in the immediate basin of the periodic attractors. The simulated dynamics in Figure \ref{Simulation} is in excellent agreement with the predictions of Table \ref{Dyn}. \begin{figure} \caption{Simulation results of the dynamics of a layered graph with two layers of hubs. We plot the return maps $z_i(t) \times z_i (t+1)$. The solid line is the low dimensional approximation of the hub dynamics given by Eq. (\ref{g}). The red circles are points taken from the hub time-series. In the first layer of hubs ($\kappa=1$) we observe dynamics very close to the periodic orbit predicted by $g_1$, in the second layer ($\kappa=1/2$) the dynamics of the hubs stays near a fixed point, and in the third layer ($\kappa=1/4$) the dynamics is still uniformly expanding. } \label{Simulation} \end{figure} \subsection{Impact of Network Structure on Dynamics: Theorems~\ref{MTheo:B} and \ref{MTheo:C} } \noindent The importance of network structure in shaping the dynamics has been highlighted by many studies \cite{Gol-Stewart2006, Field-etal-2011, nijholt2016graph} where network topology and its symmetries shape bifurcation patterns and synchronization spaces. Here we continue with this philosophy and show the dynamical features that are to be expected in HCM. In particular, fixing the local dynamics and the coupling, the network structure dictates the resulting dynamics. In fact we show that \\ \begin{minipage}[c]{14cm} {\it there is an open set of coupling functions such that homogeneous networks globally synchronize but heterogeneous networks do not.
However, in heterogeneous networks, hubs can undergo a transition to coherent behaviour. } \end{minipage} \noindent In Section~\ref{Sec:SettRes} the content of this claim is given a rigorous formulation in Theorems~\ref{MTheo:B} and \ref{MTheo:C}. \subsubsection{Informal Statement of Theorem~\ref{MTheo:B} on Coherence of Hub Dynamics} Consider a graph $G$ with sequence of degrees given by Eq. (\ref{eq:layeredER}) with $M:=\sum_{k=1}^m M_k $, each $M_i$ being the number of nodes in the $i$-th connectivity layer. Assume \begin{equation} \Delta = \mathcal O(N^{1/2 + \varepsilon})\mbox{, }M = \mathcal O(\log N) \mbox{~and~} d =\mathcal O(\log N) \label{eq:layeredER2} \end{equation} which implies that $L\approx N$ when $N$ is large. Suppose that $f(x)=2x\mod 1$ and that $h(z_i,z_n)$ is as in Eq. \eqref{h}. \ \begin{minipage}[c]{14cm} {\it \emph{\textbf{Theorem B (Informal Statement in Special Case).}} For every connectivity layer $i$ and hub node ${i_j}$ in this layer, there exists an interval $I \subset \mathbb R$ of coupling strengths so that for any $\alpha \in I$, the reduced dynamics $T_{\alpha\kappa_i}$ (Eq. \eqref{g}) has at most two periodic attractors $\{\overline z(t)\}_{t=1}^p$ and $\{-\overline z(t)\}_{t=1}^p$, and there are $s\in\{\pm 1\}$ and $t_0\in[p-1]$ such that \[ \mbox{dist}(z_{i_j}(t+t_0), s \overline z(t\mbox{ mod } p))\le \xi \] for $1/\xi\le t\le T$, with $T$ exponentially large in $\Delta$, and for any initial condition outside a set of small measure. } \\ \end{minipage}\\ \noindent Note that in order to have $1/\xi \ll T$ one needs $\Delta$ to be large. Theorem~\ref{MTheo:B} proves that one can generically tune the coupling strength or the hub connectivity so that the hub dynamics follow, after an initial transient, a periodic orbit. \subsubsection{Informal Statement of Theorem C Comparing Dynamics on Homogeneous and Heterogeneous Networks} {\bf Erd\"os-R\'enyi model for homogeneous graphs.} In contrast to layered graphs, which are prototypes of heterogeneous networks, the classical Erd\"os-R\'enyi model is a prototype of a homogeneous random graph. By homogeneous, we mean that the expected degrees of the nodes are the same. This model defines an undirected random graph where each link in the graph is a Bernoulli random variable with the same success probability $p$ (see Definition~\ref{Def:ErdRen} for more details). We choose $p > \log N / N$ so that in the limit $N\rightarrow\infty$ almost every random graph is connected (see \cite{Bollobas}). \ \noindent {\bf Diffusive Coupling Functions.} The coupling functions satisfying \[ h (z_i,z_j) = - h(z_j,z_i) \mbox{~and~} h(z,z)=0 \] are called {\em diffusive}. The function $h$ is sometimes required to satisfy $\partial_1h(z,z)>0$ to ensure that the coupling has an \say{attractive} nature. Although this is not necessary for our computations, the examples in the following and in the appendix satisfy this assumption. For each network $G$, we consider the corresponding system of coupled maps defined by (\ref{md1}). In this case the subspace \begin{equation}\label{Eq:SyncManif} \mathcal{S} := \{ (z_1,...,z_N) \in \mathbb{T^N} \, : \, z_1 = z_2 =\cdots =z_N \} \end{equation} is invariant. $\mathcal S$ is called the {\em synchronization manifold}; on it all nodes of the network follow the same orbit. Fixing the local dynamics $f$ and the coupling function $h$, we obtain the following dichotomy of stability and instability of synchronization depending on whether the graph is homogeneous or heterogeneous.
\begin{minipage}[c]{14cm} {\it \emph{\textbf{Theorem C (Informal Statement).}} \begin{itemize} \item[a)] Take a diffusive coupling function $h(z_i,z_j)=\varphi(z_j-z_i)$ with $\frac{d\varphi}{dx}(0)\neq 0$. Then for almost every asymptotically large Erd\"os-R\'enyi graph and any diffusive coupling function in a sufficiently small neighbourhood of $h$ there is an interval $I\subset \mb R$ of coupling strengths for which $\mathcal S$ is stable (normally attracting). \item[b)] For any diffusive coupling function $h(x,y)$, and for any sufficiently large heterogeneous layered graph $G$ with sequence of degrees satisfying \eqref{eq:layeredER} and \eqref{eq:layeredER2}, $\mathcal S$ is unstable. \end{itemize} } \end{minipage}\\ \ \begin{example} Take $f(z)=2z\mod1$ and \[ h(z_i,z_j)=\sin(2\pi z_j-2\pi z_i)+\sin(2\pi z_j)-\sin(2\pi z_i). \] It follows from the proof of Theorem~\ref{MTheo:C} a) that almost every asymptotically large Erd\"os-R\'enyi graph has a stable synchronization manifold for some values of the coupling strength ($\alpha\sim 0.3$), while any sufficiently large layered heterogeneous graph does not have any stable synchronized orbit. However, in a layered graph $G$ the reduced dynamics for a hub node in the $i$-th layer is \begin{align*} g_{i_j}(z_{i_j})&=2z_{i_j}+\alpha \kappa_i\int \left[\sin(2\pi y-2\pi z_{i_j})+\sin(2\pi y)-\sin(2\pi z_{i_j})\right]dm(y)\mod 1\\ &=2z_{i_j}-{\alpha\kappa_i}\sin(2\pi z_{i_j})\mod 1\\ &=T_{{\alpha\kappa_i}}(z_{i_j}). \end{align*} By Theorem~\ref{MTheo:B} there is an interval for the coupling strength ($\alpha\kappa_i\sim 0.3$) for which $g_{i_j}$ has an attracting periodic orbit, and the orbits of the hubs in the layer follow this orbit (modulo small fluctuations), exhibiting coherent behaviour. \end{example} \noindent {\bf Acknowledgements:} The authors would like to thank Mike Field, Gerhard Keller, Carlangelo Liverani and Lai-Sang Young for fruitful conversations. We would also like to acknowledge the anonymous referee for finding many typos and providing useful comments. The authors also acknowledge funding by the European Union ERC AdG grant no 339523 RGDD, the Imperial College Scholarship Scheme and the FAPESP CEPID grant no 213/07375-0. \section{Setting and Statement of the Main Theorems } \label{Sec:SettRes} Let us consider a directed graph $G$ whose set of nodes is $\mathcal N=\{1,\dots,N\}$ and whose set of directed edges is $\mathcal E\subset \mathcal N\times \mathcal N$. In this paper we will only be concerned with the in-degree of a node, namely the number of edges that point to that node (which counts the contributions to the interaction felt by that node). Furthermore we suppose, in a sense that will be specified later, that the in-degrees $d_1,\dots,d_L$ of the nodes $\{1,\dots,L\}$ are low compared to the size of the network, while the in-degrees $d_{L+1},\dots,d_{N}$ of the remaining nodes are comparable to the size of the network. For this reason, the first $L$ nodes will be called {\em low degree nodes} and the remaining $M=N-L$ nodes will be called {\em hubs}. Let $A$ be the adjacency matrix of $G$ \[ A = (A_{in})_{1\le i,n\le N} \] whose entry $A_{ij}$ is equal to one if the edge going from node $j$ to node $i$ is present, and zero otherwise. So $ d_{i}=\sum_{j=1}^{N} A_{ij}$. The important \emph{structural parameters} of the network are: \begin{itemize} \item $L, M$ the number of low degree nodes, resp.
hubs; $N=L+M$, the total number of nodes; \item $\Delta:=\max_{i}d_{i}$, the maximum in-degree of the hubs; \item $\delta:=\max_{1\le i\le L} d_i$, the maximum in-degree of the low degree nodes. \end{itemize} The building blocks of the dynamics are: \begin{itemize} \item the \emph{local dynamics}, $f:\mathbb T\rightarrow\mathbb T$, $f(x)=\sigma x \mod1$, for some integer $\sigma\ge 2$; \item the \emph{coupling function}, $h:\mathbb T \times\mathbb T\rightarrow \mb R$, which we assume to be $C^{10}$; \item the \emph{coupling strength}, $\alpha\in\mb R$. \end{itemize} We require the coupling to be $C^{10}$ to ensure sufficiently fast decay of the Fourier coefficients of $h$. This will be useful in Appendix \ref{App:TruncSyst}. Expressing the coordinates as $z=(z_1,...,z_{N})\in\mathbb T^N$, the discrete-time evolution is given by a map $F:\mathbb T^{N}\rightarrow\mathbb T^{N}$ defined by $z':=F(z)$ with \begin{equation} z_i'= f(z_i)+\frac{\alpha}{\Delta}\sum_{n=1}^N A_{in}h(z_i,z_n)\mod 1 \quad , \quad i=1,...,N. \label{Eq:CoupDyn} \end{equation} Our main result shows that low and high degree nodes will develop different dynamics when $\alpha$ is not too small. To simplify the formulation of our main theorem, we write $z=(x,y)$, with $x=(x_1,...,x_L):=(z_1,\dots,z_L) \in\mathbb T^L$ and $y=(y_1,...,y_M):=(z_{L+1},\dots,z_N)\in\mathbb T^M$. Moreover, decompose \[ A=\left( \begin{array}{cc} A^{ll} & A^{lh} \\ A^{hl} & A^{hh} \end{array} \right) \] where $A^{ll}$ is an $L\times L$ matrix, etc. Also write $A^{l}=(A^{ll} \, A^{lh} )$ and $A^{h}=(A^{hl} \, A^{hh})$. In this notation we can write the map as: \begin{align} x_i'&= f(x_i)+\frac{\alpha}{\Delta}\sum_{n=1}^N A_{in}h(x_i,z_n) \mod 1& i=1,...,L\label{Eq:CoupDyn1}\\ y_j'&=g_j (y_j)+\xi_j(z) \quad \,\, \mod 1& j=1,...,M\label{Eq:CoupDyn2} \end{align} where, denoting the Lebesgue measure on $\mathbb T$ as $m_1$, \begin{equation} g_j(y):=f(y)+\alpha\kappa_j \int h(y,x)dm_1(x)\mod 1, \quad\mbox{ }\quad \kappa_j:=\frac{d_{j+L}}{\Delta}, \label{Eq:MeanFieldMaps} \end{equation} and \begin{equation} \xi_j(z):=\alpha \left[ \frac{1}{\Delta}\sum_{n=1}^N A^{h} _{jn}h(y_j,z_n) - \kappa_j \int h(y_j,x)dm_1(x) \right].\label{Eq:average}\end{equation} Before stating our theorem, let us give an intuitive argument why we write $F$ in the form (\ref{Eq:CoupDyn1}) and (\ref{Eq:CoupDyn2}), and why for a very long time-horizon one can model the resulting dynamics quite well by \[ x_i'\approx f(x_i) \quad\mbox{ and }\quad y_j'\approx g_j(y_j). \] To see this, note that for a heterogeneous network, the number of nonzero terms in the sum in \eqref{Eq:CoupDyn1} is an order of magnitude smaller than $\Delta$. Hence when $N$ is large, the interaction felt by the low degree nodes becomes very small and therefore we have approximately $x_i'\approx f(x_i)$. So the low degree nodes are \say{essentially} uncorrelated with each other. Since the Lebesgue measure on $\mathbb T$, $m_1$, is $f$-invariant and since this measure is exact for the system, one can expect $x_i$, $i=1,\dots,L$, to behave as independent uniform random variables on $\mathbb T$, at least for \say{most of the time}. Most of the $d_j=\kappa_j \Delta$ incoming connections of hub $j$ are with low degree nodes. It follows that the sum in (\ref{Eq:average}) should converge to \[ \kappa_j \int h(y_j,x)dm_1(x) \] when $N$ is large, and so $\xi_j(z)$ should be close to zero. Theorem~\ref{Thm:Main} of this paper is a result which makes this intuition precise.
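The following minimal simulation sketch (added for illustration) probes this intuition numerically. It is essentially the star network of Section \ref{Sec:StarNetExamp}: the low degree nodes are left uncoupled, a single hub receives $\Delta$ incoming edges, the coupling is $h(z_i,z_n)=-\sin(2\pi z_i)+\sin(2\pi z_n)$, and we record the fluctuation $\xi_j$ of Eq. \eqref{Eq:average}. All parameter values are arbitrary, and a tiny amount of noise is injected to counteract the finite-precision collapse of the doubling map in floating-point arithmetic (the phenomena described here are robust to such small perturbations).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Sketch of Eq. (Eq:CoupDyn1)-(Eq:CoupDyn2) with f(z) = 2z mod 1 and
# h(z_i, z_n) = -sin(2*pi*z_i) + sin(2*pi*z_n); one hub, uncoupled low nodes.
L, Delta, alpha, steps = 5000, 2500, 0.6, 2000       # arbitrary sizes
N = L + 1                                            # node N-1 is the hub
in_edges = np.zeros(N, dtype=bool)
in_edges[rng.choice(L, size=Delta, replace=False)] = True
kappa = Delta / Delta                                # normalized in-degree of the hub

z = rng.random(N)                                    # Lebesgue-distributed initial condition
fluct = []
for _ in range(steps):
    y = z[-1]
    coupling = (alpha / Delta) * np.sum(-np.sin(2*np.pi*y) + np.sin(2*np.pi*z[in_edges]))
    mean_field = -alpha * kappa * np.sin(2*np.pi*y)  # alpha*kappa*int h(y,x) dm_1(x)
    fluct.append(coupling - mean_field)              # this is xi_j(z(t)) of Eq. (Eq:average)
    z = (2*z + 1e-12*rng.random(N)) % 1.0            # doubling map + tiny noise (see lead-in)
    z[-1] = (2*y + coupling) % 1.0                   # hub: reduced map plus the fluctuation

print("max |xi(t)| along the orbit:", np.max(np.abs(fluct)))   # small, of order Delta^(-1/2)
\end{verbatim}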
In the following, we let $N_r(\Lambda)$ be the $r$-neighborhood of a set $\Lambda$ and we define what it means for one-dimensional maps $g_j:\mathbb T\rightarrow\mathbb T$, $j=1,\dots,M$, to be hyperbolic in a uniform sense. \begin{definition}[A Hyperbolic Collection of 1-Dimensional Maps, see e.g. \cite{MS}]\label{Def:AxiomA} Given $\lambda\in (0,1)$, $r>0$ and $m,n\in\mb N$, we say that $g :\mathbb T\rightarrow\mathbb T$ is $(n,m,\lambda,r)$-hyperbolic if there exists an attracting set $\Lambda\subset \mathbb T$, with \begin{enumerate} \item $g(\Lambda)=\Lambda$, \item $|D_xg^n|<\lambda$ for all $x\in N_r(\Lambda)$, \item $|D_xg^n|>\lambda^{-1}$ for all $x\in N_r(\Upsilon)$ where $\Upsilon:=\mathbb T\backslash W^s(\Lambda)$, \item for each $x\notin N_r(\Upsilon)$, we have $g^k(x)\in N_r(\Lambda)$ for all $k\ge m$, \end{enumerate} where $W^s(\Lambda)$ is the union of the stable manifolds of the attractor \[ W^s(\Lambda):=\{x\in\mathbb T\mbox{ s.t. }\lim_{k\rightarrow\infty}d(g^k(x), \Lambda)=0\}. \] \end{definition} It is well known, see e.g. \cite[Theorem IV.B]{MS}, that for each $C^2$ map $g\colon \mathbb T \to \mathbb T$ (with non-degenerate critical points), the attracting sets are periodic and have uniformly bounded period. If we assume that $g$ is also hyperbolic, we obtain a bound on the number of periodic attractors. A globally expanding map is hyperbolic since it corresponds to the case where $\Lambda=\emptyset$. We now give a precise definition of what we mean by a heterogeneous network. \begin{definition} We say that a network with parameters $L,M,\Delta,\delta$ is \emph{$\eta$-heterogeneous} with $\eta>0$ if there are $p,q\in[1,\infty)$ with $1=1/p+1/q$, such that the following conditions are met: \begin{align} \Delta^{-1}L^{1/p}\delta^{1/q}&<\eta\tag{H1}\label{Eq:ThmCond1}\\ \Delta^{-1/p}M^{2/p}&<\eta \tag{H2}\label{Eq:ThmCond2'}\\ \Delta^{-1}ML^{1/p}&<\eta \tag{H3}\label{Eq:ThmCond2}\\ \Delta^{-2}L^{1+2/p}\delta&<\eta\tag{H4}\label{Eq:ThmCond3} \end{align} \end{definition} \begin{remark} Conditions \eqref{Eq:ThmCond1}-\eqref{Eq:ThmCond3} arise as sufficient conditions for requiring that the coupled system $F$ is \say{close} to the product system $f\times\dots \times f\times g_1\times \dots\times g_M:\mathbb T^{L+M}\rightarrow \mathbb T^{L+M}$ and preserves good hyperbolic properties on most of the phase space. They are verified in many common settings, as is shown in Appendix~\ref{Sec:ApRandGrap}. An easy example to have in mind, where those conditions are asymptotically satisfied as $N\to \infty$ for every $\eta>0$, is the case where $M$ is constant (so $L\sim N$), $\delta\sim L^{\tau}$, and $\Delta\sim L^\gamma$ with $0\leq \tau<1/2$ and $(\tau+1)/2<\gamma<1$. In particular the layered heterogeneous graphs satisfying \eqref{eq:layeredER2} in the introduction to the paper have these properties. \end{remark} \setcounter{mtheorem}{0} \begin{mtheorem}\label{Thm:Main} Fix $\sigma$, $h$ and an interval $[\alpha_1,\alpha_2]\subset\mb R$ for the parameter $\alpha$. Suppose that for all $\alpha\in[\alpha_1,\alpha_2]$, each of the maps $g_j$, $j=1,\dots,M$, is $(n, m,\lambda,r)$-hyperbolic.
Then there exist $\xi_0,\eta, C>0$ such that if the network is $\eta$-heterogeneous, for every $0<\xi<\xi_0$ and for every $1\leq T\leq T_1$ with \[ T_1=\exp[C\Delta\xi^2], \] there is a set of initial conditions $\Omega_T\subset \mathbb T^{N}$ with \[ m_{N}(\Omega_T)\geq 1-\frac{(T+1)}{T_1}, \] such that for all $(x(0),y(0))\in\Omega_T$ \[ \left |\xi_j(z(t))\right|<\xi,\quad\forall 1\leq j\leq M\mbox{ and }1\leq t\leq T. \] \end{mtheorem} \begin{remark} The result holds under conditions \eqref{Eq:ThmCond1}-\eqref{Eq:ThmCond3} with $\eta$ sufficiently small, but uniform in the local dynamical parameters. Notice that $p$ plays a different role in \eqref{Eq:ThmCond1}, \eqref{Eq:ThmCond2}, \eqref{Eq:ThmCond3} and in \eqref{Eq:ThmCond2'}, so that a large $p$ helps the former but hinders the latter, and vice versa for a small $p$. \end{remark} The proof of Theorem \ref{Thm:Main} will be presented separately in the case where $g_j$ is an expanding map of the circle for all the hubs (Section \ref{Sec:ExpRedMapsGlob}), and in the case where at least one of the $g_j$ has an attracting point (Section \ref{Sec:RedMapNonAtt}). The next theorem is a consequence of results on density of hyperbolicity in dimension one and Theorem \ref{Thm:Main}. It shows that the hypothesis on hyperbolicity of the reduced maps $g_j$ is generically satisfied, and that generically one can tune the coupling strength to obtain reduced maps with attracting periodic orbits, resulting in regular behaviour for the hub nodes. \setcounter{mtheorem}{1} \begin{mtheorem}[Coherent behaviour for hub nodes] \label{MTheo:B} For each $\sigma\in\mb N$, $\alpha\in \mb R,\kappa_j\in (0,1]$, there is an open and dense set $\Gamma\subset C^{10}(\mathbb T^2;\mb R)$ such that, for all coupling functions $h\in \Gamma$, the map $g_j\in C^{10}(\mathbb T,\mathbb T)$ defined by Eq. \eqref{Eq:MeanFieldMaps} is hyperbolic (as in Definition \ref{Def:AxiomA}). There is an open and dense set $\Gamma'\subset C^{10}(\mathbb T^2;\mb R)$ such that for all $h\in\Gamma'$ there exists an interval $I\subset\mb R$ for which if $\alpha\kappa_j\in I$ then $g_j$ has a nonempty and finite periodic attractor. Furthermore, suppose that $h\in\Gamma'$, the graph $G$ satisfies the assumptions of Theorem \ref{Thm:Main} for some $\xi>0$ sufficiently small, and that for the hub $j\in\mathcal N$, $\alpha \kappa_j \in I$. Then there exist $C>0$ and $\chi\in (0,1)$ so that the following holds. Let $T_1:=\exp[C\Delta\xi^2]$. There is a set of initial conditions $\Omega_{T}\subset \mathbb T^{N}$ with \[ m_{N}(\Omega_T)\geq 1-\frac{(T+1)}{T_1} - \xi^{1-\chi} \] so that for all $z(0)\in\Omega_T$ there is a periodic orbit of $g_{j}$, $O=\{\overline z(k)\}_{k=1}^p$, for which \[ \mbox{dist}(z_{j}(t), \overline z(t \mbox{ mod } p))\le \xi \] for each $1/\xi\le t\le T\le T_1$. \end{mtheorem} \begin{proof} See Appendix \ref{appendix:thmc}. \end{proof} \begin{remark} In the setting of the theorem above, consider the case where two hubs $j_1,j_2\in\mathcal N$ have the same connectivity $\kappa$, and their reduced dynamics $g_{j_i}$ have a unique attracting periodic orbit. In this situation their orbits closely follow this unique orbit (as prescribed by the theorem) and, apart from a phase shift $\tau\in\mb N$, they will be close to one another, resulting in highly coherent behaviour: \[ \mbox{dist}(z_{j_1}(t),z_{j_2}(t+\tau))\le2\xi \] under the same conditions as in Theorem \ref{MTheo:B}.
In general, the attractor of $g_{j_i}$ is the union of a finite number of attracting periodic orbits. Choosing initial conditions for the hubs' coordinates in the same connected component of the basin of attraction of one of the periodic orbits yields the same coherent behaviour as above. \end{remark} In the next theorem we show that for large heterogeneous networks, in contrast with the case of homogeneous networks, coherent behaviour of the hubs is the most one can hope for, and global synchronisation is unstable. \begin{definition}[Erd\"os-R\'enyi Random Graphs \cite{Bollobas}] \label{Def:ErdRen} For every $N$ and $p$, an Erd\"os-R\'enyi random graph is a discrete probability measure on the set $\mathcal G(N)$ of undirected graphs on $N$ vertices which assigns, independently, probability $p\in (0,1)$ to the presence of each edge. Calling $\mathbb P_p$ such a probability measure and $(A_{ij})$ the symmetric adjacency matrix of a graph randomly chosen according to $\mathbb P_p$, the $\{A_{ij}\}_{j\ge i}$ are i.i.d. random variables equal to 1 with probability $p$, and to 0 with probability $1-p$. \end{definition} {\it \begin{mtheorem}[Stability and instability of synchrony] \label{MTheo:C} \ \begin{itemize} \item[a)] Take a diffusive coupling function $h(x,y)=\varphi(y-x)$ for some $\varphi:\mathbb T\rightarrow \mb R$ with $\frac{d\varphi}{dx}(0)\neq 0$. For any coupling function $h'$ in a sufficiently small neighbourhood of $h$, there is an interval $I\subset \mb R$ of coupling strengths such that for any $p$~$\in \left(\frac{\log N}{N},1\right]$ there exists a subset of undirected homogeneous graphs ${\mathcal G}_{Hom}(N) \subset \mathcal G(N)$, with $\mathbb P_p({\mathcal G}_{Hom}(N))\to 1$ as $N\to \infty$, so that for any $\alpha \in I$ the synchronization manifold $\mathcal{S}$, defined in Eq. \eqref{Eq:SyncManif}, is locally exponentially stable (normally attracting) for each network coupled on $G\in {\mathcal G}_{Hom}(N)$. \item[b)] Take any sequence of graphs $\{G(N)\}_{N\in\mb N}$ where $G(N)$ has $N$ nodes and non-decreasing sequence of degrees $\boldsymbol d(N)=(d_{1,N},...,d_{N,N})$. Then, if $d_{N,N}/d_{1,N}\rightarrow\infty$ for $N\rightarrow\infty$, for any diffusive coupling $h$ and coupling strength $\alpha\in\mb R$ there is $N_0\in \mb N$ such that the synchronization manifold $\mathcal{S}$ is unstable for the network coupled on $G(N)$ with $N>N_0$. \end{itemize} \end{mtheorem} } \begin{proof} See Appendix \ref{App:RandGrap}. \end{proof} \setcounter{mtheorem}{2} \subsection{Literature Review and the Necessity of a New Approach for HCM} \label{subsec:literature} We briefly recall the main lines of research on dynamical systems coupled in networks to highlight the need for a new perspective that meaningfully describes HCM. For more complete surveys see \cite{porter2014dynamical,Fern}. \begin{itemize} \item {\bf Bifurcation Theory} \cite{Gol-Stewart98,Gol-Stewart2006,Koiller-Young,Field-etal-2011,rink2015coupled}. In this approach typically there exists a low dimensional invariant set where the interesting behaviour happens. Often the equivariant group structure is used to obtain a center manifold reduction. In our case the networks are not assumed to have symmetries (e.g. random networks) and the relevant invariant sets are fractal-like, containing unstable manifolds of very high dimension (see Figure \ref{Fig:Attractor}). For these reasons it is difficult to frame HCM in this setting or use perturbative arguments.
\item The study of {\bf Global Synchronization} \cite{Kuramoto84,Barahona2002,Mirollo2014,Pereira2014} deals with the convergence of orbits to a low-dimensional invariant manifold where all the nodes evolve coherently. HCM do not exhibit global synchronization: the synchronization manifold in Eq. \eqref{Eq:SyncManif} is unstable (see Theorem \ref{MTheo:C}). Furthermore, many works \cite{Balint,strogatz2000kuramoto} deal with global synchronization when the network is fully connected (all-to-all coupling) by studying the uniform mean field in the thermodynamic limit. On the other hand, we are interested in the case of a finite size system and when the mean field is not uniform across connectivity layers. \item The statistical description of {\bf Coupled Map Lattices} \cite{Kaneko1992,BunSinai,Baladi1,Baladi2,Keller1,Keller2,Keller3,chazottes2005dynamics,selley2016symmetry} deals with maps coupled on homogeneous graphs and considers the persistence and ergodic properties of invariant measures when the magnitude of the coupling strength goes to zero. In our case the coupling regime is such that hub nodes are subject to an order one perturbation coming from the dynamics. Low degree nodes still feel a small contribution from the rest of the network; however, its magnitude depends on the system size, and to make it arbitrarily small the dimensionality of the system must increase as well. \end{itemize} It is worth mentioning that the dynamics of coupled systems with different subsystems also appears in \emph{slow-fast systems} \cite{MR3064670,MR3556527,MR2316999}. Here, loosely speaking, some (slow) coordinates evolve as \say{$id +\varepsilon h$} and the others have good ergodic properties. In this case one can apply ergodic averaging and obtain a good approximation of the slow coordinates for times up to $T\sim\varepsilon^{-1}$. In our case, spatial rather than temporal ergodic averaging takes place, and there is no dichotomy of time scales at different nodes. Furthermore, the role of the perturbation parameter is played by $\Delta^{-1}$ and we obtain $T=\exp(C\Delta)$, rather than the polynomial estimate obtained in slow-fast systems. \section{Sketch of the Proof and the Use of a {\lq}Truncated{\rq} System} \label{sec:sketch} \subsection{A Trivial Example Exhibiting Main Features of HCM} \label{Sec:StarNetExamp} We now present a more or less trivial example which already exhibits all the main features of heterogeneously coupled maps, namely \begin{itemize} \item existence of a set of \emph{\say{bad} states} with large fluctuations of the mean field, \item control on the \emph{hitting time} to the bad set, \item \emph{finite time} exponentially large in the size of the network.\end{itemize} Consider the evolution of $N=L+1$ doubling maps on the circle $\mathbb T$ interacting on a {\em star network} with nodes $\{1,...,L+1\}$ and set of directed edges $\mathcal E=\{(i,L+1): 1\leq i\leq L\}$ (see Figure \ref{Fig:Star}). The hub node $\{L+1\}$ has an incoming directed edge from every other node of the network, while the other nodes have just the outgoing edge. Take as interaction function the diffusive coupling $h(x,y):=\sin(2\pi y)-\sin (2\pi x)$.
Equations \eqref{Eq:CoupDyn1} and \eqref{Eq:CoupDyn2} then become \begin{figure} \caption{Star network with only incoming arrows.} \label{Fig:Star} \end{figure} \begin{align} x_i(t+1)&= 2x_i(t)&\mod 1\quad& 1\leq i\leq L\label{Eq:StarNetEq1}\\ y(t+1)\phantom{l}&= 2y(t)+ \frac{\alpha}{L}\sum_{i=1}^{L}\left[\sin(2\pi x_i(t))-\sin(2\pi y(t))\right]&\phantom{....}\mod 1.\quad&\label{Eq:StarNetEq2} \end{align} The low degree nodes evolve as an uncoupled doubling map making the above a \emph{skew-product} system on the base $\mathbb T^{L}$ akin to the one extensively studied in \cite{MR1862809}. One can rewrite the dynamics of the forced system (the hub) as \begin{equation}\label{Eq:StarHub} y(t+1)=2y(t)-\alpha\sin(2\pi y(t))+\frac{\alpha}{L}\sum_{i=1}^L\sin(2\pi x_i(t)) \end{equation} and notice that defining $g(y):=2y-\alpha\sin(2\pi y)\mod 1$, the evolution of $y(t)$ is given by the application of $g$ plus a noise term \begin{equation}\label{Eq:Fluct} \xi(t)=\frac{\alpha}{L}\sum_{i=1}^L\sin(2\pi x_i(t)) \end{equation} depending on the low degree nodes coordinates. The Lebesgue measure on $\mb T^L$ is invariant and mixing for the dynamics restricted to first $L$ uncoupled coordinates. The set of bad states where fluctuations \eqref{Eq:Fluct} are above a fixed threshold $\varepsilon>0$ is \begin{align*} \mathcal B_\varepsilon&:=\left\{x\in\mathbb T^L: \left|\frac{1}{L}\sum_{i=1}^L\sin(2\pi x_i)-\mathbb E_m[\sin(2\pi x)]\right|>\varepsilon\right\}\\ &=\left\{x\in\mathbb T^L: \left|\frac{1}{L}\sum_{i=1}^L\sin(2\pi x_i)\right|>\varepsilon\right\} \end{align*} Using large deviation results one can upper bound the measure of the set above as \[ m_L(\mathcal B_{\varepsilon})\leq \exp(-C\varepsilon^2 L). \] ($C>0$ is a constant uniform on $L$ and $\varepsilon$, see the Hoeffding Inequality in Appendix \ref{App:TruncSyst} for details). Since we know that the dynamics of the low degree nodes is ergodic with respect the measure $m_L$ we have the following information regarding the time evolution of the hub. \begin{itemize} \item The set $\mathcal B_\varepsilon$ has positive measure. Ergodicity of the invariant measure implies that a generic initial condition will visit $\mathcal B_\varepsilon$ in finite time, making any mean-field approximation result for infinite time hopeless. \item As a consequence of Kac Lemma, the average hitting time to the set $\mathcal B_\varepsilon$ is $m_L(\mathcal B_{\varepsilon})^{-1}\ge\exp(C\varepsilon^2 L)$, thus exponentially large in the dimension. \item From the invariance of the measure $m_L$, for every $1\leq T\leq \exp(C\varepsilon^2 L)$ there is $\Omega_T\subset \mathbb T^{L+1}$ with measure $m_{L+1}(\Omega_T)>1-T\exp(-C\varepsilon^2 L)$ such that $\forall x\in\Omega_T$ and for every $1\leq t\leq T$ \[ \left| \frac{1}{L}\sum_{i=1}^L\sin(2\pi x_i(t))\right|\leq\varepsilon. \] \end{itemize} \subsection{Truncated System} We obtain a description of the coupled system by restricting our attention to a subset of phase space where the evolution prescribed by equations \eqref{Eq:CoupDyn1} and \eqref{Eq:CoupDyn2} resembles the evolution of the uncoupled mean-field maps, and we redefine the evolution outside this subset in a convenient way. This leads to the definition of a {\em truncated } map $F_\varepsilon:\mathbb T^{N}\rightarrow\mathbb T^{N}$, for which the fluctuations of the mean field averages are artificially cut-off at the level $\varepsilon>0$, resulting in a well behaved hyperbolic dynamical system. 
In the following sections we will then establish the existence of, and bounds on, the invariant measure for this system, and prove that the portion of phase space where the original system and the truncated one coincide has almost full measure, with a remainder exponentially small in the parameter $\Delta$. Note that since $h\in C^{10}(\mathbb T^2;\mb R)$, its Fourier series \[ h(x,y)=\sum_{s=(s_1,s_2)\in \mathbb Z^2}c_{s}\theta_{s_1}(x)\upsilon_{s_2}(y), \] where $c_s\in\mb R$ and $\theta_{i}:\mathbb T\rightarrow [0,1]$ form a basis of trigonometric functions, converges uniformly and absolutely on $\mathbb T^2$. Furthermore, for all $s\in\mathbb Z^2$ \begin{equation} |c_s|\leq \frac{\|h\|_{C^{10}}}{|s_1|^5|s_2|^5}. \end{equation} Taking $\overline \theta_{s_1}=\int \theta_{s_1}(x) dm_1(x) $ we get \begin{equation} \xi_j(z):=\alpha \sum_{s\in\mathbb Z^2}c_s \left[ \frac{1}{\Delta}\sum_{n=1}^L A^h_{jn} \theta_{s_1}(z_n) - \kappa_j \overline \theta_{s_1} \right]\upsilon_{s_2}(y_j)+\frac{\alpha}{\Delta}\sum_{n=1}^M A^{hh}_{jn}h(y_j,y_n). \label{Eq:E'} \end{equation} For every $\varepsilon>0$ choose a $C^{\infty}$ map $\zeta_\varepsilon:\mb R\rightarrow\mb R$ with $\zeta_\varepsilon(t)=t$ for $|t|<\varepsilon$, $\zeta_\varepsilon(t)=2 \varepsilon$ for $|t|>2 \varepsilon$. So for each $\varepsilon>0$, the function $t\mapsto |D_t\zeta_\varepsilon|$ is uniformly bounded in $t$ and $\varepsilon$. We define the evolution for the truncated dynamics $F_\varepsilon\colon \mathbb T^{L+M}\rightarrow\mathbb T^{L+M}$ by the following modification of equations (\ref{Eq:CoupDyn1}) and (\ref{Eq:CoupDyn2}): \begin{align} x_i'&= f(x_i)+\frac{\alpha}{\Delta}\sum_{n=1}^N A_{in}h(x_i,z_n) \mod 1& i=1,...,L \label{Eq:CoupDyn1'} \\ y_j'&=g_j(y_j) +\xi_{j,\varepsilon}(z) \quad \,\, \mod 1& j=1,...,M\label{Eq:CoupDyn2'} \end{align} where the expression of $\xi_{j,\varepsilon}(z)$ modifies that of $\xi_{j}(z)$ in \eqref{Eq:E'}: \begin{equation} \xi_{j,\varepsilon}(z):= \alpha\sum_{s\in\mathbb Z^2}c_s \zeta_{\varepsilon |s_1|}\left(\frac{1}{\Delta}\sum_{i=1}^L A^h_{ji}\theta_{s_1}(x_i) - \kappa_j \overline \theta_{s_1} \right)\upsilon_{s_2}(y_j)+ \frac{\alpha}{\Delta}\sum_{n=1}^M A^{hh}_{jn}h(y_j,y_n). \label{eq:xijeps} \end{equation} So the only difference between $F$ and $F_\varepsilon$ is the presence of the cut-off functions $\zeta_{\varepsilon|s_1|}$ appearing in (\ref{eq:xijeps}). For every $\varepsilon>0$, $j\in\{1,...,M\}$ and $s_1\in\mb Z$ define \begin{equation}\label{Eq:DefBadSetComp} \mathcal B_{\varepsilon}^{(s_1,j)}:=\left\{x\in\mathbb T^{L}:\left|\frac{1}{\Delta}\sum_{i=1}^L A^h_{ji}\theta_{s_1}(x_i)-\kappa_j\overline\theta_{s_1}\right|>\varepsilon|s_1|\right\}. \end{equation} The set where $F$ and $F_\varepsilon$ coincide is $\mathcal Q_\varepsilon\times\mathbb T^M$, with \begin{equation}\label{Eq:DefQdelta} \mathcal Q_\varepsilon:=\bigcap_{j=1}^M\bigcap_{s_1\in\mathbb Z}\mathbb T^L\backslash \mathcal B_\varepsilon^{(s_1,j)} \end{equation} the subset of $\mathbb T^L$ where all the fluctuations of the mean field averages of the terms of the coupling are below the imposed threshold. The set $\mathcal B_\varepsilon:=\mathcal Q_\varepsilon^c$ is the portion of phase space for the low degree nodes where the fluctuations exceed the threshold and where the systems $F$ and $F_\varepsilon$ differ.
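For concreteness, here is one admissible choice of the cut-off $\zeta_\varepsilon$ (an added illustration: any $C^\infty$ function with the stated properties works, and the symmetric clamp used below is one possible reading of the requirement $\zeta_\varepsilon(t)=2\varepsilon$ for $|t|>2\varepsilon$).
\begin{verbatim}
import numpy as np

def smooth_step(s):
    """C-infinity monotone interpolation from 0 (at s <= 0) to 1 (at s >= 1)."""
    s = np.clip(s, 0.0, 1.0)
    a = np.where(s > 0, np.exp(-1.0 / np.maximum(s, 1e-300)), 0.0)
    b = np.where(s < 1, np.exp(-1.0 / np.maximum(1.0 - s, 1e-300)), 0.0)
    return a / (a + b)

def zeta(t, eps):
    """One possible cutoff zeta_eps: identity on [-eps, eps], clamped at
    magnitude 2*eps outside [-2*eps, 2*eps], smoothly blended in between
    (the clamp is taken symmetric in t); its derivative stays bounded
    uniformly in eps, as required."""
    t = np.asarray(t, dtype=float)
    w = smooth_step((np.abs(t) - eps) / eps)   # 0 for |t| <= eps, 1 for |t| >= 2*eps
    return (1.0 - w) * t + w * 2.0 * eps * np.sign(t)

eps = 0.05
ts = np.linspace(-0.2, 0.2, 9)
print(np.round(zeta(ts, eps), 4))              # equals ts near 0, saturates at +/- 0.1
\end{verbatim}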
Furthermore we can control the perturbation introduced by the term $\xi_{j,\varepsilon}$ in equation \eqref{Eq:CoupDyn2} so that $F_\varepsilon$ is close to the hyperbolic uncoupled product map $\boldsymbol f:\mathbb T^N\rightarrow\mathbb T^N$ \begin{equation}\label{Eq:UncSystMF} \boldsymbol f(x_1,\dots,x_L,y_1,\dots,y_M):=(f(x_1),...,f(x_L),g_1(y_1),...,g_M(y_M)). \end{equation} All the bounds on relevant norms of $\xi_{j,\varepsilon}$ are reported in Appendix \ref{App:TruncSyst}. To upper bound the Lebesgue measure $m_L(\mathcal B_\varepsilon)$ we use Hoeffding's inequality (reported in Appendix \ref{App:TruncSyst}) on the concentration of the average of independent bounded random variables. \begin{proposition}\label{Prop:UppBndBLeb} \begin{equation}\label{Eq:UppBndBLeb} m_L(\mathcal B_\varepsilon)\leq\frac{\exp\left[-\frac{\Delta\varepsilon^2}{2}+\mathcal O(\log M)\right]}{1-\exp\left[-\frac{\Delta\varepsilon^2}{2}\right]}. \end{equation} \end{proposition} \begin{proof} See Appendix \ref{App:TruncSyst}. \end{proof} This gives an estimate of the measure of the bad set with respect to the reference measure invariant for the uncoupled maps. In the next section we use this estimate to upper bound the measure of this set with respect to SRB measures for $F_\varepsilon$, which are the measures giving statistical information on the orbits of $F_\varepsilon$. \begin{remark} Notice that in \eqref{Eq:UppBndBLeb} we expressed the upper bound only in terms of orders of functions of the network parameters, but all the constants could be rigorously estimated in terms of the coupling function and the other dynamical parameters of the system. In particular, when the expression of the coupling function is known one could obtain better estimates on the concentration via large deviation results (see for example Cram\'er-type inequalities in \cite{dembo2009large}) which take into account more than just the upper and lower bounds of $\theta_s$. In what follows, however, we will only be interested in the orders of magnitude with respect to the aforementioned parameters of the network ($\Delta$, $\delta$, $L$, $M$). \end{remark} \subsection{Steps of the Proof and Challenges} The basic steps of the proof are the following: \begin{itemize} \item[(i)] First of all we restrict our attention to the case where the maps $g_j$ satisfy Definition~\ref{Def:AxiomA} with $n=1$. \item[(ii)] Secondly, hyperbolicity of the map $F_\varepsilon$ is established for an $\eta$-heterogeneous network with $\varepsilon,\eta>0$ small. This is achieved by constructing forward and backward invariant cone-fields made of expanding and contracting directions, respectively, for the cocycle defined by application of $D_z F_\varepsilon$ \eqref{Eq:DiffAuxMap}. \item[(iii)] Then we estimate the distortion of the maps along the unstable directions, keeping all dependencies on the structural parameters of the network explicit. \item[(iv)] We then use a geometric approach employing what are sometimes called \emph{standard pairs} \cite{climenhaga2016geometric} to estimate the regularity properties of the SRB measures for the endomorphism $F_\varepsilon$, and the hitting time to the set $\mathcal B_\varepsilon$. \item[(v)] Finally we show that Mather's trick allows us to generalise the proofs to the case in which $g_j$ satisfy Definition~\ref{Def:AxiomA} with $n\neq 1$.
\end{itemize} We consider separately the cases where all the reduced maps $g_j$ are expanding and where some of them have a non-empty attractor (Section \ref{Sec:ExpRedMapsGlob} and Section \ref{Sec:RedMapNonAtt}). At the end of Section \ref{Sec:RedMapNonAtt} we put the results together to obtain the proof of Theorem \ref{Thm:Main}. In the above points we treat $F_\varepsilon$ as a perturbation of a product map where the magnitude of the perturbation depends on the network size. In particular, we want to show that $F_\varepsilon$ is close to the uncoupled product map $\boldsymbol f$. To obtain this, the dimensionality of the system needs to increase, changing the underlying phase space. This leads to two main challenges. First of all, increasing the size of the system propagates nonlinearities of the maps and reduces the global regularity of the invariant measures. Secondly, the situation is inherently different from usual perturbation theory, where one considers a parametric family of dynamical systems on the same phase space. Here, the parameters depend on the system's dimension. As a consequence one needs to make all estimates explicit in the system size. For these reasons we find the geometric approach advantageous with respect to the functional analytic approach \cite{keller1999stability}, where the explicit dependence of most constants on the dimension is hidden in the functional analytic machinery. \paragraph{Notation} As usual, we write $\mathcal O(N)$ and $\mathcal O(\varepsilon)$ for an expression so that $\mathcal O(N)/N$ resp. $\mathcal O(\varepsilon)/\varepsilon$ is bounded as $N\to \infty$ and $\varepsilon\downarrow 0$. We use the short-hand notation $[n]:=\{1,...,n\}$ for the natural numbers up to $n$. Throughout the paper $m$, $m_{n}$ stand for the Lebesgue measure on $\mathbb T$ and $\mathbb T^n$ respectively. Given an embedded manifold $W\subset \mathbb T^{N}$, $m_W$ stands for the Lebesgue measure induced on $W$. We indicate with $D_xG$ the differential of the function $G$ evaluated at the point $x$ in its domain. \section{Proof of Theorem~\ref{Thm:Main} when all Reduced Maps are Uniformly Expanding}\label{Sec:ExpRedMapsGlob} In this section we assume that the collection of reduced maps $g_{j}$, $j=1,\dots,M$, from equation \eqref{Eq:MeanFieldMaps} is uniformly expanding. As shown in Lemma~\ref{lem:n=1}, this means that we can assume that there exists $\lambda\in (0,1)$ so that $|D_xg_j |\ge \lambda^{-1}$ for all $x\in \mathbb T$ and all $j=1,\dots,M$. First of all, pick $1\le p\le \infty$, let $1\le q\le \infty$ be such that $1/p+1/q=1$, and consider the norm defined as \[ \|\cdot\|_p:=\|\cdot\|_{p,\mb R^L}+\|\cdot\|_{p,\mb R^M} \] where $\|\cdot \|_{p,\mb R^k}$ is the usual $p$-norm on $\mb R^k$. $\|\cdot\|_p$ induces the operator norm of any linear map $\mathcal L\colon \mb R^{N}\to \mb R^{N}$, namely \[ \|\mathcal L\|_p:=\sup_{\substack{v\in\mb R^{N} \\ \|v\|_p=1}}\frac{\|\mathcal Lv\|_p}{\|v\|_p}, \] and the distance $d_p:\mathbb T^{N}\times\mathbb T^{N}\rightarrow \mb R^+$ on $\mathbb T^{N}$. \begin{theorem}\label{Thm:InvMeasDenExpand} There are $\eta_0,\varepsilon_0>0$ such that under \eqref{Eq:ThmCond1}-\eqref{Eq:ThmCond3} with $\eta<\eta_0$ and for all $\varepsilon<\varepsilon_0$ there exists an absolutely continuous invariant probability measure $\nu$ for $F_\varepsilon$.
The density $\rho=d\nu/dm_N$ satisfies for all $z,\overline z\in\mathbb T^N$ \begin{equation}\label{Eq:InvDensProp} \frac{\rho(z)}{\rho(\overline z)}\leq\exp\left\{ad_p(z,\overline z)\right\},\quad a=\mathcal O(\Delta^{-1}\delta L)+\mathcal O(M). \end{equation} \end{theorem} In Section \ref{ConExp} we obtain conditions on the heterogenous structure of the network which ensure that the truncated system $F_\varepsilon$ is sufficiently close, in the $C^1$ topology, to the uncoupled system $\boldsymbol f$, in Eq. \eqref{Eq:UncSystMF}, with the hubs evolving according to the low-dimensional approximation $g_j$, for it to preserve expansivity when the network is large enough. In this setting $F_\varepsilon$ is a uniformly expanding endomorphism and therefore has an absolutely continuous invariant measure $\nu$ whose density $\rho=\rho_\varepsilon$ is a fixed point of the transfer operator of $F_\varepsilon$ \[ P_\varepsilon:L^1\left(\mathbb T^{N},m_{N}\right)\rightarrow L^1\left(\mathbb T^{N},m_{N}\right). \] (See Appendix \ref{Ap:TranOp} for a quick review on the theory of transfer operators). For our purposes we will also require bounds on $\rho$ which are explicit on the structural parameters of the network (for suitable $\varepsilon$). In Section \ref{InvConeSec} we obtain bounds on the distortion of the Jacobian of $F_\varepsilon$ (Proposition \ref{Prop:DistJac}), which in turn allow us to prove the existence of a cone of functions with controlled regularity which is invariant under the action of $P_\varepsilon$ (Proposition \ref{Prop:ConInv}) and to which $\rho$ belongs. To obtain the conclusion of Theorem \ref{Thm:Main}, we need that the $\nu$-measure of the bad set is small which will be obtained from an upper bound for the supremum of the functions in the invariant cone. This is what is shown in Section \ref{Sec:ProofTHM1} under some additional conditions on the network. \subsection{Global Expansion of $F_\varepsilon$}\label{ConExp} \begin{proposition}\label{Prop:ExpAuxSys} Suppose that for every $j\in[M]$ the reduced map $g_j$ is uniformly expanding, i.e. there exists $\lambda\in (0,1)$ so that $|D_yg_j|>\lambda^{-1}$ for all $y\in\mathbb T$. Then \begin{itemize} \item[(i)] there exists $C_\#$ (depending on $\sigma$, $h$ and $\alpha$ only) such that for every $1\leq p\leq \infty$, $z\in\mathbb T^{N}$, and $w\in\mb R^{N}\backslash\{0\}$ \[ \frac{\|(D_{z}F_\varepsilon) w\|_p}{\|w\|_p}\geq \left[\min\{\sigma,\lambda^{-1} -\varepsilon C_\#\}-\mathcal O(\Delta^{-1}\delta)-\mathcal O(\Delta^{-1/p}M^{1/p})-\mathcal O(\Delta^{-1}N^{1/p}\delta^{1/q})\right]; \] \item[(ii)] there exists $\eta>0$ such that if \eqref{Eq:ThmCond1} and \eqref{Eq:ThmCond2} are satisfied together with \begin{align}\label{Eq:EpsilonCond} \varepsilon<\frac{\lambda^{-1}-1}{C_\#} \end{align} then there exists $\overline \sigma>1$ (not depending on the parameters of the network or on $p$), so that \[ \frac{\|(D_{z}F_\varepsilon)w\|_p}{\|w\|_p}\geq\overline\sigma>1,\quad \forall z\in\mathbb T^{N},\mbox{ }\forall w\in\mb R^{N}\backslash\{0\}. \] \end{itemize} \end{proposition} \begin{proof} To prove (i), let $z=(x,y)\in \mathbb T^{L+M}$ and $w=\binom{u}{v}\in \mb R^{L+M}$ and $$ \binom{u'}{v'}=D_zF_\varepsilon\binom{u}{v},\quad u'\in\mb R^{L},v'\in\mb R^M. 
$$ \def\j^*} \def\hm{m^*{\j^*} \def\hm{m^*} Using \eqref{Eq:CoupDyn1'}-\eqref{Eq:CoupDyn2'}, or \eqref{Eq:DiffAuxMap}, we obtain that for every $1\leq i\leq L$ and $1\leq j\leq M$, \begin{align*} u'_i&=\left[D_{x_i}f+\frac{\alpha}{\Delta}\sum_{n=1}^N A_{in}h_1(x_i,z_n)\right]u_i+\frac{\alpha}{\Delta}\sum _{n=1}^N A_{in}h_1(x_i,z_n)w_n \\ v'_j&=\sum_{\ell=1}^L\partial_{x_\ell}\xi_{j,\varepsilon}u_\ell+\frac{\alpha}{\Delta}\sum_{m=1}^M A^{hh}_{jm}h_2(y_j,y_m)v_m+\left[D_{y_j}g_j+\partial_{y_j}\xi_{j,\varepsilon}\right]v_{j}. \end{align*} where $h_1$ and $h_2$ denote the partial derivatives with respect to the first and second variable. Hence \[ \|u'\|_{p,\mb R^L}\geq \left(\sigma-\mathcal O(\delta\Delta^{-1})\right)\|u\|_p-\mathcal O(\Delta^{-1}L^{1/p})\max_{i=1,\dots,L} \left[\sum_{n=1}^NA_{in}|w_n|\right]. \] Recall that, for any $k\in\mb N$, if $w\in\mb R^k$ then \begin{equation}\label{Eq:Ineq1pspaces} \|w\|_{1,\mb R^k}\leq k^{1/q}\|w\|_{p,\mb R^k}, \mbox{ with }\frac{1}{p}+\frac{1}{q}=1 \end{equation} for every $1\leq p\leq \infty$. Thus \[ \sum_{n=1}^N A_{in} |w_n|\leq \delta^{1/q}\left(\sum_{n=1}^NA_{in}|w_n|^p \right)^{1/p}\leq \delta^{1/q}\|w\|_p \] since at most $\delta$ terms are non-vanishing in the sum $\left(\sum_{n=1}^NA_{in}|w_n|^p \right)$, we can view as a vector in $\mb R^\delta$, which implies \[ \|u'\|_{p,\mb R^L}\geq\left(\sigma-\mathcal O(\delta\Delta^{-1})\right)\|u\|_p-\mathcal O(\Delta^{-1}L^{1/p}\delta^{1/q})\|w\|_p \] Analogously using the estimates in Lemma~\ref{Lem:XiProp} \begin{align} \|v'\|_{p,\mb R^M}&\geq \left(\lambda^{-1}-\varepsilon C_\# \right)\|v\|_p-\mathcal O(\Delta^{-1}M^{1/p})\max_{j=1,\dots,M} \left[\sum_{n}A_{jn}|w_n|\right]\label{Eq:pnormExpUppBnd2}\\ &\geq \left(\lambda^{-1}-\varepsilon C_\#-\mathcal O(\Delta^{-1}M) \right)\|v\|_p-\mathcal O(\Delta^{-1/p}M^{1/p})\|w\|_p\nonumber \end{align} since in the sum $\sum_{n}A_{jn}|w_n|$ in \eqref{Eq:pnormExpUppBnd2}, at most $\Delta$ terms are different from zero and since $\Delta^{-1}\Delta^{1/q}=\Delta^{-1/p}$. This implies \begin{align*} \frac{\|(u',v')\|_p}{\|(u,v)\|_p}&=\frac{\|u'\|_{p,\mb R^L}+\|v'\|_{p,\mb R^M}}{\|(u,v)\|_p}\geq\\ &\geq \left[\min\{\sigma-\mathcal O(\Delta^{-1}\delta),\lambda^{-1} -\varepsilon C_\#-\mathcal O(\Delta^{-1}M)\}-\mathcal O(\Delta^{-1/p}M^{1/p})-\mathcal O(\Delta^{-1}L^{1/p}\delta^{1/q})\right] \end{align*} For the proof of (ii), notice that condition \eqref{Eq:EpsilonCond} implies that $\min\{\sigma,\lambda^{-1}-\varepsilon C_\#\}>1$ and conditions \eqref{Eq:ThmCond1}-\eqref{Eq:ThmCond2} imply that the $\mathcal O$ are bounded by $\eta$ and so \[ \frac{\|D_{z}F_\varepsilon w\|_p}{\|w\|_p}\geq \min\{\sigma,\lambda^{-1}-\varepsilon C_\#\}-\mathcal O(\eta),\quad \forall w\in\mb R^{N}\backslash\{0\} \] and choosing $\eta>0$ sufficiently small one obtains the proposition. \end{proof} Now that we have proved that $F_\varepsilon$ is expanding, we know from the ergodic theory of expanding maps, that it also has an invariant measure we call $\nu$, with density $\rho=d\nu/dm_{N}$. The rest of the section is dedicated to upper bound $\nu(\mathcal Q_{\varepsilon})$. 
\subsection{Distortion of $F_\varepsilon$}\label{InvConeSec} \begin{proposition}\label{Prop:DistJac} If conditions \eqref{Eq:ThmCond1}-\eqref{Eq:ThmCond2} are satisfied then there exists $\varepsilon_0$ (depending only on $\sigma$, $|\alpha|$ and the coupling function $h$) such that if $\varepsilon<\varepsilon_0$ then for every $z,\overline z\in\mathbb T^{N}$ \[ \frac{|D_{z}F_\varepsilon|}{|D_{\overline z}F_\varepsilon|}\leq \exp\left\{\left[\mathcal O(\Delta^{-1}\delta L)+\mathcal O(M)\right]d_\infty(z,\overline z)\right\}. \] \end{proposition} \begin{proof} To estimate the ratios consider the matrix $\mathcal D(z)$ obtained from $D_{z} F_\varepsilon$ factoring $D_{x_i}f=\sigma$ out of the $i$-th column ($i\in[N]$), and $D_{y_j}g_j$ out of the $(j+L)$-th column ($j\in[M]$). Thus \begin{equation}\label{Eq:DExpression} [\mathcal D(z)]_{k,\ell}:=\left\{\begin{array}{lr} 1+\frac{\alpha}{\Delta}\sum_{n=1}^LA_{kn}\frac{h_1(x_k,z_n)}{\sigma}& k=\ell \leq L,\\ \frac{\alpha}{\Delta}A_{k\ell}\frac{h_2(x_k,x_\ell)}{\sigma}& k\neq \ell \leq L,\\ \frac{\partial_{x_\ell}\xi_{k-L,\varepsilon}}{\sigma}& k>L, \ell\leq L,\\ \frac{\alpha}{\Delta}A_{k\ell}\frac{h_2(y_{k-L},y_{\ell-L})}{D_{y_{\ell-L}}g_{\ell-L}}&k\neq \ell > L,\\ 1+\frac{\partial_{y_{k-L}}\xi_{k-L,\varepsilon}}{D_{y_{\ell-L}}g_{\ell-L}}&k= \ell>L. \end{array}\right. \end{equation} and $$ \frac{|D_{z}F_\varepsilon|}{|D_{\overline z}F_\varepsilon|}=\frac{\prod_{j=1}^MD_{y_j}g_j}{\prod_{j=1}^MD_{\overline y_j}g_j}\cdot\frac{|\mathcal D(z)|}{|\mathcal D(\overline z)|}. $$ For the first ratio: \begin{align} \prod_{j=1}^M\frac{D_{y_j}g_j}{D_{\overline y_j}g_j}&=\prod_{j=1}^M\left(1+\frac{D_{y_j}g_j-D_{\overline y_j}g_j}{D_{\overline y_j}g_j}\right)\leq\prod_{j=1}^M\left(1+\mathcal O(1)|y_j-\overline y_j|\right)\leq\exp\left[\mathcal O(M)d_\infty(y,\overline y)\right].\label{Eq:Ineqtildef} \end{align} To estimate the ratio $\frac{|\mathcal D(z)|}{|\mathcal D(\overline z)|}$ we will apply Proposition \ref{Prop:EstTool} in Appendix~\ref{Ap:TechComp}. To this end define the matrix \[ B(z):=\mathcal D(z)-\Id. \] First of all we will prove that for every $1\leq p< \infty$ and $z\in\mathbb T^{N}$, $B(z)$ has operator norm bounded by \begin{equation}\label{Eq:bOperatorNormBound} \|B(z)\|_p\leq\max\{\mathcal O(\Delta^{-1}M), C_\#\varepsilon\}+\mathcal O(\Delta^{-1/p}M^{1/p})+\mathcal O(\Delta^{-1}N^{1/p}\delta^{1/q}) \end{equation} where $C_\#$ is a constant uniform on the parameters of the network and $1/p+1/q=1$. Indeed, consider $\binom{u}{v}\in\mb R^{L+M}$ and $\binom{u'}{v'}:=B(z)\binom{u}{v}$. Then \begin{align*} u'_i&=\left[\frac{\alpha}{\Delta}\sum_{n=1}^LA_{in}\frac{h_1(x_i,z_n)}{\sigma}\right]u_i+\frac{\alpha}{\Delta}\sum _{\ell=1}^LA^{ll}_{i\ell}\frac{h_1(x_i,x_\ell)}{\sigma}u_\ell+\frac{\alpha}{\Delta}\sum_{m=1}^MA^{lh}_{im}\frac{h_2(x_i,y_m)}{D_{y_m}g_m}v_{m}\\ v'_j&=\sum_{\ell=1}^{L}\frac{\partial_{x_\ell}\xi_{j,\varepsilon}}{\sigma}u_\ell+\frac{\alpha}{\Delta}\sum_{m=1}^MA^{hh}_{jm}\frac{h_2(y_j,y_m)}{D_{y_m}g_m}v_m+ \frac{\partial_{y_{j}}\xi_{j,\varepsilon}}{D_{y_{j}}g_{j}}v_{j}. 
\end{align*} Using estimates analogous the ones used in the proof of Proposition \ref{Prop:ExpAuxSys} \begin{align*} \|u'\|_{p,\mb R^L}&\leq \mathcal O(\Delta^{-1}\delta)\|u\|_p+\mathcal O(\Delta^{-1})\max_i\left[\sum_{\ell=1}^LA^{ll}_{i\ell}|u_\ell|+\sum_{m=1}^MA^{lh}_{im}|v_m|\right]\\ &\leq\mathcal O(\Delta^{-1}\delta)\|u\|_p+\mathcal O(\Delta^{-1}N^{1/p}\delta^{1/q})\|(u,v)\|_p\\ \|v'\|_{p,\mb R^M}&\leq C_\#\varepsilon\|v\|_p+\mathcal O(\Delta^{-1}N^{1/p})\max_i\left[\sum_{\ell}A^{ll}_{i\ell}|u_\ell|+\sum_mA^{lh}_{im}|v_m|\right]\\ &\leq C_\#\varepsilon\|v\|_p+\mathcal O(\Delta^{-1/p}M^{1/p})\|(u,v)\|_p \end{align*} so using conditions \eqref{Eq:ThmCond1}, \eqref{Eq:ThmCond2'}, we obtain \eqref{Eq:bOperatorNormBound}: \begin{align*} \frac{\|(u',v')\|_p}{\|(u,v)\|_p}&\leq\max\{\mathcal O(\Delta^{-1}\delta), C_\#\varepsilon\}+\mathcal O(\Delta^{-1/p}M^{1/p})+\mathcal O(\Delta^{-1}L^{1/p}\delta^{1/q})\\ &\leq C_\#\varepsilon+\mathcal O(\eta). \end{align*} Taking $C_\# \varepsilon<1$ and $\eta>0$ sufficiently small, ensures that $\|B(z)\|_p\leq\lambda<1$ for all $z\in\mathbb T^{N}$. Now we want to estimate the norm $\|\cdot\|_p$ of columns of $B-\overline B$ where \[ B:=B(z)\quad\mbox{and}\quad\overline B:=B(\overline z). \] For $1\leq i\leq L$, looking at the entries of $\mathcal D(z)$, \eqref{Eq:DExpression}, it is clear that the non-vanishing entries $[B(z)]_{ik}$ for $k\neq i$ are Lipschitz functions with Lipschitz constants of the order $\mathcal O(\Delta^{-1})$: $$ |B_{ik}-\overline B_{ik}|\leq A_{ik}\mathcal O(\Delta^{-1})d_\infty(z,\overline z) $$ Instead, for $k=i$, \begin{align*} \left|B_{ii}-\overline B_{ii}\right|&=\frac{\alpha}{\Delta}\left|\sum_\ell A^{ll}_{in}(h_1(x_i,x_\ell)-h_1(\overline x_i,\overline x_\ell))+\sum_m A^{lh}_{im} (h_1(x_i,x_m)-h_1(\overline x_i,\overline x_m))\right|\\ &\leq \frac{\alpha}{\Delta}\sum_\ell A^{ll}_{i\ell}|h_1(x_i,x_\ell)-h_1(\overline x_i,\overline x_\ell)|+\frac{\alpha}{\Delta}\sum_mA^{lh}_{im}|h_1(x_i,y_m)-h_1(\overline x_i,\overline y_m)|\\ &\leq\mathcal O(\Delta^{-1}\delta)d_\infty(z,\overline z). \end{align*} which implies \begin{align*} \|\Col^i[B-\overline B]\|_p&= \left(\sum_{k\in[L]}|B_{ik}-\overline B_{ik}|^p\right)^{\frac{1}{p}}+\left(\sum_{k\in[L+1,N]}|B_{ik}-\overline B_{ik}|^p\right)^{\frac{1}{p}}\\ &\leq \left(\sum_{k\in [L]\backslash\{i\}}|B_{ik}-\overline B_{ik}|^p\right)^{\frac{1}{p}}+\left(\sum_{k\in[L+1,N]}|B_{ik}-\overline B_{ik}|^p\right)^{\frac{1}{p}}+\mathcal O(\Delta^{-1}\delta)d_\infty(z,\overline z)\\ &\leq 2\left(\sum_{k\neq i}A_{ik}\right)^{\frac{1}{p}}\mathcal O(\Delta^{-1})d_\infty(z,\overline z)+\mathcal O(\Delta^{-1}\delta)d_\infty(z,\overline z)\\ &\leq \mathcal O(\Delta^{-1}\delta)d_\infty(z,\overline z). 
\end{align*} For $1\leq j\leq M$, looking again at \eqref{Eq:DExpression} the non-vanishing entries of $[B(z)]_{(j+L)k}$ for $k\neq j+N$ are Lipschitz functions with Lipschitz constants of the order $\mathcal O(\Delta^{-1})$, while $[B(z)]_{(j+L)(j+L)}$ has Lipschitz constant of order $\mathcal O(1)$, thus \begin{align*} \|\Col^{j+L}[B-\overline B]\|_p&=\left(\sum_{k\in[L]}|B_{(j+L)k}-\overline B_{(j+L)k}|^p\right)^{\frac{1}{p}}+\left(\sum_{k\in[L+1,N]}|B_{(j+L)k}-\overline B_{(j+L)k}|^p\right)^{\frac{1}{p}}\\ &\leq \left(\sum_{k\in[L]}|B_{(j+L)k}-\overline B_{(j+L)k}|^p\right)^{\frac{1}{p}}+\left(\sum_{k\in[L+1,N]\backslash \{j+L\}}|B_{(j+L)k}-\overline B_{(j+L)k}|^p\right)^{\frac{1}{p}}+\\ & \quad \quad \quad \quad + \mathcal O(1)d_\infty(z,\overline z)\\ &\leq 2\left(\sum_{k\neq j+L}A_{(j+L)k}\right)^{\frac{1}{p}}\mathcal O(\Delta^{-1})d_\infty(z,\overline z)+\mathcal O(1)d_\infty(z,\overline z)\\ &\leq \mathcal O(1)d_\infty(z,\overline z). \end{align*} Proposition \ref{Prop:EstTool} from Appendix~\ref{Ap:TechComp} now implies that \begin{equation}\label{Eq:RatDestimate} \frac{|\mathcal D(z)|}{|\mathcal D(\overline z)|}\leq\exp\left\{\sum_{k=1}^{N}\|\Col^k[B-\overline B]\|_p\right\}\leq \exp\left\{(\mathcal O(\Delta^{-1}\delta L)+\mathcal O(M))d_\infty(z,\overline z)\right\}. \end{equation} \end{proof} \subsection{Invariant Cone of Functions} Define the cone of functions \[ C_{a,p}:=\left\{\varphi:\mathbb T^{N}\rightarrow \mb R^+:\quad \frac{\varphi(z)}{\varphi(\overline z)}\leq\exp[ad_p(z,\overline z)],\mbox{ }\forall z,\overline z\in\mathbb T^{N}\right\}. \] This is convex and has finite diameter (see for example \cite{MR0087058, MR0336473} or \cite{ViaSdds}). We now use the result on distortion from the previous section to determine the parameters $a>0$ such that $C_{a,p}$ is invariant under the action of the transfer operator $P_\varepsilon$. Since $C_{a,p}$ has finite diameter with respect to the Hilbert metric on the cone, see \cite{ViaSdds}, $P_\varepsilon$ is a contraction restricted to this set and its unique fixed point is the only invariant density which thus belongs to $C_{a,p}$. In the next subsection, we will use this observation to conclude the proof of Theorem \ref{Thm:Main} in the expanding case. \begin{proposition}\label{Prop:ConInv} Under conditions \eqref{Eq:ThmCond1}-\eqref{Eq:ThmCond2}, for every $a>a_c$, where $a_c$ is of the form \begin{equation}\label{Eq:ConditionOna} a_c=\frac{\mathcal O(\Delta^{-1}\delta L)+\mathcal O(M)}{1-\overline \sigma}, \end{equation} $C_{a,p}$ is invariant under the action of the transfer operator $P_\varepsilon$ of $F_\varepsilon$, i.e. $P_\varepsilon(C_{a,p})\subset C_{a,p}$. \end{proposition} \begin{proof} Since $F_\varepsilon$ is a local expanding diffeomorphism, its transfer operator, $P_\varepsilon$, has expression $$ (P_\varepsilon\varphi)(z)=\sum_{i}\varphi(F_{\varepsilon,i}^{-1}(z))\left|D_{F_{\varepsilon,i}^{-1}(z)}F_\varepsilon\right|^{-1} $$ where $\{F_{\varepsilon,i}\}_i$ are surjective invertible branches of $F_\varepsilon$. Suppose $\varphi\in C_{a,p}$. 
Then \begin{align*} \frac{\varphi(F_{\varepsilon,i}^{-1}(z))}{\varphi(F_{\varepsilon,i}^{-1}(\overline z))}\frac{\left|D_{F_{\varepsilon,i}^{-1}(\overline z)}F_\varepsilon\right| }{\left|D_{F_{\varepsilon,i}^{-1}(z)}F_\varepsilon\right|}&\leq \exp\left\{ad_p(F_{\varepsilon,i}^{-1}(z),F_{\varepsilon,i}^{-1}(\overline z))\right\}\exp\left\{\left[\mathcal O(\Delta^{-1}\delta L)+\mathcal O(M)\right] d_\infty(F_{\varepsilon,i}^{-1}(z),F^{-1}_{\varepsilon,i}(\overline z))\right\} \\ &\leq\exp\left\{\left[a+\mathcal O(\Delta^{-1}\delta L)+\mathcal O(M)\right] d_p(F_{\varepsilon,i}^{-1}(z),F^{-1}_{\varepsilon,i}(\overline z))\right\} \\ &\leq\exp\left\{\left[\overline \sigma^{-1}a+\mathcal O(\Delta^{-1}\delta L)+\mathcal O(M)\right] d_p(z,\overline z)\right\}. \end{align*} Here we used that $d_\infty(z,\overline z)\leq d_p(z,\overline z)$ for every $1\leq p<\infty$ Hence \begin{align*} \frac{(P_\varepsilon\varphi)(z)}{(P_\varepsilon\varphi)(\overline z)}&=\frac{\sum_{i}\varphi(F_{\varepsilon,i}^{-1}(z))|D_{F_{\varepsilon,i}^{-1}(z)}F_\varepsilon|^{-1}}{\sum_{i}\varphi(F_{\varepsilon,i}^{-1}(\overline z))|D_{F_{\varepsilon,i}^{-1}(w)}F_\varepsilon|^{-1}}\\ &\leq\exp\left[\left(\overline \sigma ^{-1}a+\mathcal O(\Delta^{-1}\delta L)+\mathcal O(M)\right)d_p(z,\overline z)\right]. \end{align*} It follows that if $a>a_c$ then $C_{a,p}$ is invariant under $P_\varepsilon$. \end{proof} \begin{proof}[Proof of Theorem~\ref{Thm:InvMeasDenExpand}] The existence of the absolutely continuous invariant probability measure is standard from the expansivity of $F_\varepsilon$. The regularity bound on the density immediately follows from Proposition~\ref{Prop:ConInv} and from the observation (that can be found in \cite{ViaSdds}) that the cone $\mathcal C_{a,p}$ has finite dimeter with respect to the projective Hilbert metric. This in particularly means that $P_\varepsilon$ is a contraction with respect to this metric and has a fixed point. \end{proof} \subsection{Proof of Theorem \ref{Thm:Main} in the Expanding Case}\label{Sec:ProofTHM1} Property \eqref{Eq:InvDensProp} of the invariant density provides an upper bound for its supremum which depends on the parameters of the network and proves the statement of Theorem~\ref{Thm:Main} in the expanding case. \begin{proof}[Proof of Theorem \ref{Thm:Main}] Since under conditions \eqref{Eq:ThmCond1}-\eqref{Eq:ThmCond2} in Theorem \ref{Thm:Main}, Proposition \ref{Prop:ConInv} holds, the invariant density for $F_\varepsilon$ belongs to the cone $C_{a,p}$, $\rho\in C_{a,p}$, for $a>a_c$. Since $\rho$ is a continuous density, it has to take value one at some point in its domain. This together with the regularity condition given by the cone implies that $$ \sup_{z\in\mathbb T^{N}}\rho(z)\leq \exp\left\{{\mathcal O(\Delta^{-1}\delta L^{1+1/p})+\mathcal O(M L^{1/p})}\right\}. $$ Using the upper bound \eqref{Eq:UppBndBLeb}, \begin{align*} \nu(\mathcal B_\varepsilon\times \mathbb T^M)&=\int_{\mathcal B_\varepsilon\times \mathbb T^M}\rho(z)dm_{N}(z)\\ &\leqm_{N}(\mathcal B_\varepsilon\times \mathbb T^M)\sup_z\rho(z)\\ &\leq\exp\left\{-\Delta\varepsilon^2/2+\mathcal O(\Delta^{-1}\delta L^{1+1/p})+\mathcal O(ML^{1/p})\right\}. \end{align*} From the invariance of $\rho$ and thus of $\nu$, for any $T\in\mb N$ \[ \nu\left(\bigcup_{t=0}^{T} F_\varepsilon^{-t}(\mathcal B_\varepsilon\times \mathbb T^M)\right)\leq (T+1)\nu(\mathcal B_\varepsilon\times\mathbb T^M)\leq (T+1)\exp\left\{-\Delta\varepsilon^2/2+\mathcal O(\Delta^{-1}\delta L^{1+1/p})+\mathcal O(ML^{1/p})\right\}. 
\] Using again that $\rho\in C_{a,p}$, and \eqref{Eq:ThmCond1} and \eqref{Eq:ThmCond2}, \begin{align*} m_N\left(\bigcup_{t=0}^{T}F_\varepsilon ^{-t}(\mathcal B_\varepsilon\times \mathbb T^M)\right)&= \int_{\bigcup_{t=0}^{T}F_\varepsilon^{-t}(\mathcal B_\varepsilon\times \mathbb T^M)}\rho^{-1}d\nu\\ &\leq \nu\left(\bigcup_{t=0}^{T}F_\varepsilon^{-t}(\mathcal B_\varepsilon\times \mathbb T^M)\right)\exp\left\{\mathcal O(\Delta^{-1}\delta L^{1+1/p})+\mathcal O(ML^{1/p})\right\}\\ &\leq (T+1)\exp\left\{-\Delta\varepsilon^2/2+\mathcal O(\Delta^{-1}\delta L^{1+1/p})+\mathcal O(ML^{1/p})\right\}\\ & \leq (T+1) \exp \left\{ -\Delta\varepsilon^2/2 + \mathcal O(\eta) \Delta \right\} . \end{align*} Where we used \eqref{Eq:ThmCond3} to obtain the last inequality. Hence, the set \[ \Omega_T=\mathbb T^{N}\backslash\bigcup_{t=0}^{T} F_\varepsilon^{-t}(\mathcal B_\varepsilon\times \mathbb T^M) \] for $\eta>0$ sufficiently small satisfies the assertion of the theorem. \end{proof} \section{Proof of Theorem~\ref{Thm:Main} when some Reduced Maps have Hyperbolic Attractors}\label{Sec:RedMapNonAtt} In this section, we allow for the situation where some (or possibly all) reduced maps have periodic attractors. For this reason, we introduce the new structural parameter $M_u\in\mb N_0$ such that, after renaming the hub nodes, the reduced dynamics $g_j$ is expanding for $1\leq j \leq M_u$, while for $M_u<j\leq M$, $g_j$ has a hyperbolic periodic attractor $\Lambda_j$. Let us also define $M_s=M-M_u$. We also assume that $g_j$ are $(n,m,\lambda,r)$-hyperbolic with $n=1$. We will show how to drop this assumption in Lemma~\ref{lem:n=1}. \textcolor{blue}{} As in the previous section, the goal is to prove the existence of a set of large measure whose points take a long time to enter the set $\mathcal B_\varepsilon$ where fluctuations are above the threshold. To achieve this, we study the ergodic properties of $F_\varepsilon$ restricted to a certain forward invariant set $\mathcal S$ and prove that the statement of Theorem \ref{Thm:Main} holds true for initial conditions taken in this set. Then in Section \ref{Sec:AttFullStatProof} we extend the reasoning to the remainder of the phase space and prove the full statement of the theorem. For simplicity we will sometimes write $(z_u,z_s)$ for a point in $\mathbb T^{L+M_u}\times \mathbb T^{M_s}=\mathbb T^N$ and $z_u=(x,y_u)\in \mathbb T^{L}\times\mathbb T^{M_u}$. Let \[ \pi_u \colon \mathbb T^N \to \mathbb T^{L+M_u}\mbox{ and }\Pi_u:\mb R^{N}\rightarrow\mb R^{L+M_u} \] be respectively the (canonical) projection on the first $L+M_u$ coordinates and its differential. We begin by pointing out the existence of the invariant set. \begin{lemma}\label{Lem:InvSetStrip} As before, for $j\in\{M_u+1,...,M\}$, let $\Lambda_j$ be the attracting sets of $g_j$ and $\Upsilon=\mathbb T\setminus W_s(\Lambda_j)$. There exist $\lambda\in (0,1)$, ${\varepsilon_\Lambda}>0$, $r_0>0$ so that for each $j\in\{M_u+1,...,M\}$ and each $|r|<r_0$, (i) $|Dg_j(y)|<\lambda<1$ for every $y\in U_j$ and $g_j(x)+r \in U_j,\quad\forall x\in U_j$, where $U_j$ is the ${\varepsilon_\Lambda}$-neighborhood of $\Lambda_j$. (ii) $|Dg_j|>\lambda^{-1}$ on the ${\varepsilon_\Lambda}$-neighborhood of $\Upsilon_j$, $\forall j\in[M_u+1,M]$. \end{lemma} \begin{proof} The first assertion in (i) and (ii) follow from continuity of $Dg_j$. Fix $x\in U_j$ and $r\in(-r_0,r_0)$. From the definition of $U_j$, there exists $y\in\Lambda_j$ such that $d(x,y)<{\varepsilon_\Lambda}$. 
From the contraction property $d(g_j(x),g_j(y))<\lambda d(x,y)<\lambda{\varepsilon_\Lambda}$ and choosing $r_0<(1-\lambda){\varepsilon_\Lambda}$, \[ d(g_j(x)+r,g_j(y))<\lambda {\varepsilon_\Lambda}+r_0<{\varepsilon_\Lambda}. \] From the invariance of $\Lambda_j$, $g_j(y)\in\Lambda_j$, the lemma follows. \end{proof} Let \begin{equation}\label{Eq:DefRProdFixINt} \mathcal R:=U_{M_u+1}\times...\times U_M\quad\mbox{and}\quad \mathcal S:=\mathbb T^{L+M_u}\times \mathcal R \subset \mathbb T^N . \end{equation} Lemma \ref{Lem:InvSetStrip} implies that provided the $\varepsilon$ from the truncated system is below $r_0/2$, the set $\mathcal S$ is forward invariant under $F_\varepsilon$. It follows that for each attracting periodic orbit $O(z_s)$ of $g_{M_u+1}\times \dots \times g_{M}\colon \mathbb T^{M_s}\to \mathbb T^{M_s}$, the endomorphism $F_\varepsilon$ has a fat solenoidal invariant set. Indeed, take the union $U$ of the connected components of $\mathcal R$ containing $O(z_s)$. Then by the previous lemma, $F_\varepsilon(\mathbb T^{L+M_u}\times U)\subset \mathbb T^{L+M_u} \times U$. The set $\cap_{n\ge 0} F^n_\varepsilon(\mathbb T^{L+M_u} \times U)$ is the analogue of the usual solenoid but with self-intersections, see Figure~\ref{Fig:Attractor}. An analogous situation, but where the map is a skew product is studied in \cite{MR1862809}. The set $\bigcap_{n\ge 0} F^n_\varepsilon(\mathbb T^{L+M_u} \times U)$ will support an invariant measure: \begin{theorem}\label{Thm:PhysMeasFep} Under conditions \eqref{Eq:ThmCond1}-\eqref{Eq:ThmCond3} of Theorem \ref{Thm:Main} with $\eta>0$ sufficiently small \begin{itemize} \item for every attracting periodic orbit of $g_{M_u+1}\times \dots \times g_{M}$, $F_\varepsilon$ has an ergodic physical measure, \item for each such measure $\nu$, the marginal $(\pi_u)_* \nu$ on $\mathbb T^{L+M_u}\times \{0\}$ has a density $\rho$ satisfying $\forall z_u,\overline z_u\in \mathbb T^{L+M_u}\times\{0\}$, \[ \frac{\rho(z_u)}{\rho(\overline z_u)}\leq \exp\left\{a d_p(z_u,\overline z_u)\right\}, \quad\quad a={\mathcal O(\Delta^{-1}L^{1+1/p}\delta^{1/q})+\mathcal O(M)}, \] \item these are the only physical measures for $F_\varepsilon$. \end{itemize} \end{theorem} \begin{figure} \caption{Approximate 2D and $3$D representations of one component of the attractor of $F_\varepsilon$.} \label{Fig:Attractor} \end{figure} This theorem will be proved in Subsection~\ref{subsec:invariantcones}. \subsection{Strategy of the proof of Theorem~\ref{Thm:Main} in Presence of Hyperbolic Attractors}\label{Sec:InvStrip} For the time being, we restrict our attention to the case where the threshold of the fluctuations is below $r_0$ as defined in Lemma \ref{Lem:InvSetStrip} and consider the map ${F_\varepsilon}|_{\mathcal S}:\mathcal S\rightarrow \mathcal S$ that we will still call ${F_\varepsilon}$ with an abuse of notation. The expression for ${F_\varepsilon}$ is the same as in equations \eqref{Eq:CoupDyn1'} and \eqref{Eq:CoupDyn2'}, but now the local phase space for the hubs with a non-empty attractor, $\{L+M_u+1,\dots,L+M=N\}$, is restricted to the open set $\mathcal R$. The proof of Theorem~\ref{Thm:Main} will follow from the following proposition. 
\begin{proposition}\label{Prop:BadSetMeasAtt} For every $s_1\in\mathbb Z$ and $j\in[M]$ \[ \mathcal B^{(s_1,j)}_{\varepsilon, T}:=\bigcup_{t=0}^{T}F_\varepsilon^{-t}\left(\mathcal B^{(s_1,j)}_\varepsilon\times \mathbb T^{M_u}\times \mathcal R\right)\cap \mathcal S \subset \mathbb T^N \] is bounded as \begin{equation}\label{Eq:BadSetBound} m_N\left(\mathcal B^{(s_1,j)}_{\varepsilon,T}\right)\leq T\exp\left[-C \Delta \varepsilon^2+\mathcal O(\Delta^{-1}L^{1+2/p}\delta^{1/q})+\mathcal O(ML^{1/p})\right]. \end{equation} \end{proposition} To prove the above result, we first build families of stable and unstable invariant cones for $F_\varepsilon$ in the tangent bundle of $\mathcal S$ (Proposition \ref{Prop:InvConesForTildeF}) which correspond to contracting and expanding directions for the dynamics, thus proving hyperbolic behaviour of the map. In Section \ref{Sec:AdmManifolds} we define a class of manifolds tangent to the unstable cones whose regularity properties are kept invariant under the dynamics, and we study the evolution of densities supported on them under action of $F_\varepsilon$. Bounding the Jacobian of the map restricted to the manifolds (Proposition \ref{Prop:JacobEst}) one can prove the existence of an invariant cone of densities (Proposition \ref{Prop:DensEvSubm}) which gives the desired regularity properties for the measures. Since the product structure of $\mathcal B^{(s_1,j)}_\varepsilon\times \mathbb T^{M_u}\times \mathcal R$ is not preserved under pre-images of $F_\varepsilon^{t}$, we approximate it with the set which is the union of global stable manifolds (Lemma~\ref{Lem:IncSetUnMan}). This last property is preserved taking pre-images. The bound in \eqref{Eq:BadSetBound} will then be a consequence of estimates on the distortion of the holonomy map along stable leaves of $F_\varepsilon$ (Proposition \ref{Prop:JacEstBnd}). \subsection{Invariant Cone Fields for $F_\varepsilon$}\label{Sec:InvCones} \begin{proposition}\label{Prop:InvConesForTildeF} There exists $\eta_0>0$ such that if conditions \eqref{Eq:ThmCond1}-\eqref{Eq:ThmCond3} are satisfied with $\eta<\eta_0$, then there exists $C_\#>0$ such that for every $\varepsilon>0$ \begin{equation} \varepsilon<\min\left\{\frac{1-\lambda}{C_\#},\frac{\lambda^{-1}-1}{C_\#},\varepsilon_0\right\}\label{AttCond3} \end{equation} \begin{itemize} \item[(i)]the constant cone fields \begin{equation}\label{Eq:UnstConeCond} \mathcal C_p^u:=\left\{(u,w,v)\in\mb R^{L+M_u+M_s}\backslash{\{0\}}:\quad\frac{\|v\|_{p,M_s}}{\|u\|_{p,L}+\|w\|_{p,M_u}}<\beta_{u,p}\right\} \end{equation} and \begin{equation}\label{Eq:StabCon} \mathcal C_p^s:=\left\{(u,v,w)\in\mb R^{L+M_u+M_s}\backslash{\{0\}}:\quad \frac{\|v\|_{p,M_s}}{\|u\|_{p,L}+\|w\|_{p,M_u}} > \frac{1}{\beta_{s,p}} \right\} \end{equation} with \begin{align*} \beta_{u,p}:=\mathcal O(\Delta^{-1/p}M_s^{1/p}),\quad \beta_{s,p}:=\max\{\mathcal O(\Delta^{-1}L^{1/p}\delta^{1/q}), \mathcal O(\Delta^{-1/p}M_u^{1/p})\} \end{align*} satisfy $\forall z\in\mathbb T^{N}$ $D_{z}{F_\varepsilon}(\mathcal C^u)\subset\mathcal C^u$ and $D_{z}{F_\varepsilon}^{-1}(\mathcal C^s)\subset\mathcal C^s$. 
\item[(ii)] there exists $\overline \sigma$ and $\overline \lambda$ such that, for every $z\in\mathbb T^{N}$ \begin{align} \frac{\|D_{z}{F_\varepsilon}(u,w,v)\|_p }{\|(u,w,v)\|_p}&\geq\overline\sigma>1,&\forall (u,w,v)\in\mathcal C_p^u\label{Eq:ExpResUnstCone}\\ \frac{\|D_{z}{F_\varepsilon}(u,w,v)\|_p }{\|(u,w,v)\|_p}&\leq\overline \lambda<1, &\forall (u,w,v)\in\mathcal C_p^s.\label{Eq:ExpResStabCone} \end{align} \end{itemize} \end{proposition} \begin{remark} We have constructed the map $F_\varepsilon$ in such a way that, when the network is $\eta-$heterogeneous with $\eta$ very small, it results to be \say{close} to the product of uncoupled factors equal to $f$, for the coordinates corresponding to low degree nodes, and equal to $g_j$, for the coordinates of the hubs. This is reflected by the width of the invariant cones which can be chosen to be very small for $\eta$ tending to zero, so that $\mathcal C_p^u$ and $\mathcal C_p^s$ are very narrow around their respective axis $\mb R^{L+M_u}\oplus \{0\}$ and $\{0\}\oplus\mb R^{M_s}$. \end{remark} \begin{corollary}\label{cor2} Under the assumptions of the previous proposition, $\pi_u \circ {F_\varepsilon}^n \colon \mathbb T^{L+M_u} \times \{0\} \to \mathbb T^{L+M_u}$ is a covering map of degree $\sigma^{n(L+M_u)}$ where $\sigma$ is the degree of the local map. \end{corollary} \begin{proof} This follows from the previous proposition, because $\mathbb T^{L+M_u}\times \{0\}$ is tangent to the unstable cone, and thus $\pi_u \circ {F_\varepsilon}^n$ is a local diffeomorphism between compact manifolds. This implies that every point of $\mathbb T^{L+M_u}$ has the same number of preimages, and this number equals the degree of the map. Then observe that there is a homotopy bringing $\pi_u\circ{F_\varepsilon}$ to the $(L+M_u)$-fold uncoupled product of identical copies of the map $f^n$. The homotopy is obtained by continuously deforming the map letting the coupling strength $\alpha$ go to zero. Since the degree is an homotopy invariant and $\pi_u\circ{F_\varepsilon}$ is homotopic to the $(L+M_u)$-fold uncoupled product of identical copies of the map $f^n$, \[ \deg \pi_u\circ F_\varepsilon= \deg \underbrace{f^n\times...\times f^n}_{L+M_u\mbox{ times}}=\sigma^{n(L+\mc M_u)}. \] \end{proof} \begin{proof} (i) The expression for the differential of the map ${F_\varepsilon}$ is the same as in \eqref{Eq:DiffAuxMap}. Take $(u,w,v)\in\mb R^{L}\times\mb R^{M_u}\times\mb R^{M_s}$, and suppose $(u',w',v')^{t}:=D_z{{F_\varepsilon}}(u,v,w)^{t}$. 
Then \begin{align*} &(u')_i=\left[f'(x_i)+\frac{\alpha}{\Delta}\sum_{m=1}^MA^{lh}_{im}h_1+\frac{\alpha}{\Delta}\sum_{\ell=1}^LA^{ll}_{i\ell}h_1\right]u_i+\frac{\alpha}{\Delta}\sum _{\ell=1}^LA^{ll}_{i\ell}h_1u_\ell+\\ & \quad \quad \quad + \frac{\alpha}{\Delta}\sum_{m=1}^{M_u}A^{lh}_{im}h_2w_{m}+\frac{\alpha}{\Delta}\sum_{m=M_u+1}^{M}A^{lh}_{im}h_2v_{m}& 1\leq i\leq L\\ &(w')_j=\sum_{\ell=1}^L\partial_{x_\ell}\xi_{j,\varepsilon}u_\ell+\frac{\alpha}{\Delta}\sum_{m=1}^{M_u}A^{hh}_{jm}h_2w_m+\frac{\alpha}{\Delta}\sum_{m=M_u+1}^{M}A^{hh}_{jm}h_2v^m+\\ & \quad \quad \quad + \left[\partial_{y_j}\xi_{j,\varepsilon} +\frac{\alpha}{\Delta}\sum_{m=1}^MA^{hh}_{jm}h_2\right]w_{j}&1\leq j\leq M_u\\ &(v')_j=\sum_{\ell=1}^L\partial_{x_\ell}\xi_{j,\varepsilon}u_\ell+\frac{\alpha}{\Delta}\sum_{m=1}^{M_u}A^{hh}_{jm}h_2w_m+\frac{\alpha}{\Delta}\sum_{m=M_u+1}^{M}A^{hh}_{jm}h_2v_m+\\ &\quad \quad \quad + \left[\partial_{y_j}\xi_{j,\varepsilon}+\frac{\alpha}{\Delta}\sum_{m=1}^MA^{hh}_{jm}h_2\right]v_{j}&M_u< j\leq M \end{align*} where we suppressed all dependences of those functions for which we use a uniform bound. \begin{align*} \|u'\|_{p,\mb R^L}&\geq \left(\sigma-\mathcal O(\Delta^{-1}\delta)\right)\|u\|_{p,\mb R^L}-\\ &\quad\quad-\mathcal O(\Delta^{-1}L^{1/p})\max_{i\in[L]}\left[\sum_{\ell=1}^LA^{ll}_{i\ell}|u_\ell|+\sum_{m=1}^{M_u}A^{lh}_{im}|w_m|+\sum_{m=M_u+1}^{M}A^{lh}_{im}|v_m|\right]\\ &\geq \left(\sigma-\mathcal O(\Delta^{-1}\delta)\right)\|u\|_{p,\mb R^L}-\mathcal O(\Delta^{-1}L^{1/p}\delta^{1/q})(\|u\|_{p,\mb R^L}+\|w\|_{p,\mb R^{M_u}}+\|v\|_{p,\mb R^{M_s}})\\ \\ \|w'\|_{p,\mb R^{M_u}}&\geq \left(\lambda^{-1}-C_\#\varepsilon-\mathcal O(\Delta^{-1}M)\right)\|w\|_{p,\mb R^{M_u}}-\\ &\quad\quad-\mathcal O(\Delta^{-1}M_u^{1/p})\max_{1\leq j\leq M_u}\left[\sum_{\ell=1}^L A^{hl}_{j\ell}|u_\ell|+\sum_{m=1}^{M_u} A^{hh}_{jm}|w_m|+\sum_{m=M_u+1}^{M} A^{hh}_{jm}|v_m|\right]\\ &\geq \left(\lambda^{-1}-C_\#\delta-\mathcal O(\Delta^{-1}M)\right)\|w\|_{p,\mb R^{M_u}}-\mathcal O(\Delta^{-1/p}M_u^{1/p})(\|u\|_{p,\mb R^L}+\|w\|_{p,\mb R^{M_u}}+\|v\|_{p,\mb R^{M_s}}) \end{align*} and analogously \begin{align*} \|v'\|_{p,\mb R^{M_s}}&\leq (\lambda+C_\#\varepsilon+\mathcal O(\Delta^{-1}M))\|v\|_{p,\mb R^{M_s}}+\mathcal O(\Delta^{-1/p}M_s^{1/p})(\|u\|_{p,\mb R^L}+\|w\|_{p,\mb R^{M_u}}+\|v\|_{p,\mb R^{M_s}}) \end{align*} Suppose that $(u,w,v)$ satisfies the cone condition $\|u\|_{p,\mb R^L}+\|w\|_{p,\mb R^{M_u}}\geq\tau\|v\|_{p,\mb R^{M_s}}$ for some $\tau$. 
Then \begin{align*} \frac{\|u'\|_{p,\mb R^L}+\|w'\|_{p,\mb R^{M_u}}}{\|v'\|_{p,\mb R^{M_s}}}&\geq \frac{\mathcal F_{11}(\|u\|_{p,\mb R^L}+\|w\|_{p,\mb R^{M_u}})-\mathcal F_{12}\|v\|_{p,\mb R^{M_s}}}{\mathcal F_{21}\|v\|_{p,\mb R^{M_s}}+\mathcal F_{22}(\|u\|_{p,\mb R^L}+\|w\|_{p,\mb R^{M_u}})}\\ &\geq \frac{\mathcal F_{11}-\tau^{-1}\mathcal F_{12}}{\tau^{-1}\mathcal F_{21}+\mathcal F_{22}} \end{align*} with \begin{align*} \mathcal F_{11}&:=\min\left\{\sigma-\mathcal O(\Delta^{-1}\delta),\lambda^{-1}-C_\#\varepsilon-\mathcal O(\Delta^{-1}M)\right\}-\max\{\mathcal O(\Delta^{-1}L^{1/p}\delta^{1/q}), \mathcal O(\Delta^{-1/p}M_u^{1/p}) \}\\ & = \min\left\{\sigma,\lambda^{-1}-C_\#\varepsilon\right\}-\mathcal O(\eta), \\ \mathcal F_{12}&:=\max\{\mathcal O(\Delta^{-1}L^{1/p}\delta^{1/q}), \mathcal O(\Delta^{-1/p}M_u^{1/p})\}= \mathcal O(\eta),\\ \mathcal F_{21}&:=\lambda+C_\#\varepsilon+\mathcal O(\Delta^{-1}M)+\mathcal O(\Delta^{-1/p}M_s^{1/p}))= \lambda+C_\#\varepsilon + \mathcal O(\eta),\\ \mathcal F_{22}&:=\mathcal O(\Delta^{-1/p}M_s^{1/p}))= \mathcal O(\eta), \end{align*} where we used \eqref{Eq:ThmCond1}-\eqref{Eq:ThmCond3}. The cone $\mathcal C_p^u$ is forward invariant iff $\|u'\|_{p,\mb R^L}+\|w'\|_{p,\mb R^{M_u}}\geq \tau \|v'\|_{p,\mb R^{M_s}}$ and therefore if \begin{align} \mathcal F_{11}-\tau^{-1}\mathcal F_{12}\geq\mathcal F_{21}+\tau\mathcal F_{22}. \label{Eq:taucond} \end{align} Hence we find $C_*>0$, so that if $\tau=C_*/ \mathcal F_{22}$ the inequality \eqref{Eq:taucond} is satisfied provided \eqref{AttCond3} holds and $\eta>0$ is small enough because then $\mathcal F_{11}>\mathcal F_{21}$. Now let us check when the cone $\mathcal C_p^s$ is backward invariant. Suppose that $\|u'\|_{p,\mb R^L}+\|w'\|_{p,\mb R^{M_u}}\leq\tau\|v'\|_{p,\mb R^{M_s}}$, thus \begin{align*} {\mathcal F_{11}\frac{\|u\|_{p,\mb R^L}+\|w\|_{p,\mb R^{M_u}}}{\|v\|_{p,\mb R^{M_s}}}-\mathcal F_{12}}&\leq\tau\mathcal F_{21}+\tau\mathcal F_{22}\frac{\|u\|_{p,\mb R^L}+\|w\|_{p,\mb R^{M_u}}}{\|v\|_{p,\mb R^{M_s}}}\\ \frac{\|u\|_{p,\mb R^L}+\|w\|_{p,\mb R^{M_u}}}{\|v\|_{p,\mb R^{M_s}}}&\leq \frac{\mathcal F_{12}+\tau\mathcal F_{21}}{\mathcal F_{11}-\tau\mathcal F_{22}} \end{align*} and imposing, yet again, \begin{equation}\label{Eq:BackInvTauCond} \tau^{-1}\mathcal F_{12}+\mathcal F_{21}\leq \mathcal F_{11}-\tau\mathcal F_{22}, \end{equation} implies that $\|u\|_{p,\mb R^L}+\|w\|_{p,\mb R^{M_u}}\leq\tau\|v\|_{p,\mb R^{M_s}}$. Taking $\tau=C_* \mathcal F_{12}$ with $C_*>0$ small, we obtain that $\mathcal C_p^s$ is backward invariant (provided as before that \eqref{AttCond3} holds and $\eta>0$ is small). (ii) Take $(u,w,v) \in\mathcal C_p^u$ such that $\|(u,w,v)\|_{p}=1$. From the above computations, and applying the cone condition \begin{align} \|u'\|_{p,\mb R^L}+\|w'\|_{p,\mb R^{M_u}}+\|v'\|_{p,\mb R^{M_s}}&\geq \|u'\|_{p,\mb R^L}+\|w'\|_{p,\mb R^{M_u}}\nonumber\\ &\geq{\mathcal F_{11}(\|u\|_{p,\mb R^L}+\|w\|_{p,\mb R^{M_u}})-\mathcal F_{12}\|v\|_{p,\mb R^{M_s}}}\nonumber\\ &\geq{\mathcal F_{11}(1-\beta_{u,p})-\mathcal F_{12}\beta_{u,p}}\nonumber\\ &\geq \min\left\{\sigma,\lambda^{-1}-C_\#\varepsilon\right\}-\mathcal O(\eta)-\mathcal O(\eta^2)\label{Eq:ExpConEst1} \end{align} where to obtain \eqref{Eq:ExpConEst1} we kept only the largest order in the parameters of the network, after substituting the expressions for $\mathcal F_{11}$ and $\mathcal F_{12}$. This means that in conditions \eqref{Eq:ThmCond1}-\eqref{Eq:ThmCond3}, if $\eta>0$ is sufficiently small, \eqref{Eq:ExpResUnstCone} will be satisfied. 
Choosing, now, $(u,v,w)\in\mathcal C_p^s$ of unit norm we get \begin{align*} \|u'\|_{p,\mb R^L}+\|w'\|_{p,\mb R^{M_u}}+\|v'\|_{p,\mb R^{M_s}}&\leq (1+\beta_{s,p})\Delta^{-1}\|v'\|_{p,\mb R^{M_s}}\\ &\leq \lambda+C_\#\varepsilon+\mathcal O(\Delta^{-1}M)+\mathcal O(\Delta^{-1/p}M^{1/p})+\beta_{s,p}\\ &\leq \lambda +C_\#\varepsilon +\mathcal O(\eta) \end{align*} and again whenever conditions \eqref{Eq:ThmCond1}-\eqref{Eq:ThmCond3} are satisfied with $\eta>0$ sufficiently small, \eqref{Eq:ExpResStabCone} is verified. \end{proof} \subsection{Admissible Manifolds for $F_\varepsilon$}\label{Sec:AdmManifolds} As in the diffeomorphism case, the existence of the stable and unstable cone fields implies that the the endomorphism ${F_\varepsilon}$ admits a natural measure. To determine the measure of the set $\mathcal B_\varepsilon\times\mathbb T^M$ with respect to one of these measures we need to estimate how much the marginals on the coordinates of the low degree nodes differ from Lebesgue measure. To do this we look at the evolution of densities supported on admissible manifolds, namely manifolds whose tangent space is contained in the unstable cone and whose geometry is controlled. To control the geometry locally, we invoke the Hadamard-Perron graph transform argument (see for example \cite{shub2013global,MR1326374}) (Appendix \ref{Ap:GraphTrans}) which implies that manifolds tangent to the unstable cone which are locally graph of functions in a given regularity class, are mapped by the dynamics into manifolds which are locally graphs of functions in the same regularity class. As before $\mathbb T=\mb R/\sim$ with $x_1\sim x_2$ when $x_1-x_2\in \mb Z$, so each point in $\mathbb T$ can be identified with a point in $[0,1)$. Define $I=(0,1)$. \begin{definition}[Admissible manifolds $\mathcal W_{p,K_0}$] For every $K_0>0$ and $1\leq p\leq\infty$ we say that a manifold $W$ of $\mathcal S$ is {\em admissible} and belongs to the set $\mathcal W_{p,K_0}$ if there exists a differentiable function $E:I^{L+M_u} \rightarrow \mathcal R$ with Lipschitz differential so that \begin{itemize} \item $W$ is the graph $(id, E)(I^{L+M_u})$ of $E$, \item $D_{z_u}E(\mb R^{L+M_u})\subset\mathcal C_p^u$, $\forall z_u\in I^{L+M_u}$, \item and \[ \|DE\|_{\Lip}:=\sup_{z_u\neq\overline z_u}\frac{\|D_{z_u}E-D_{\overline z_u}E\|_p}{d_p(z_u,\overline z_u)}\leq K_0, \] where, with an abuse of notation, we denoted by $\|\cdot\|_p$ the operator norm of linear transformations from $(\mb R^{L+M_u},\|\cdot\|_{p,\mb R^L}+\|\cdot\|_{p,\mb R^{M_u}})$ to $(\mb R^{M_s},\|\cdot\|_{p,\mb R^{M_s}})$. \end{itemize} \end{definition} \begin{proposition} Under conditions \eqref{Eq:ThmCond1}-\eqref{Eq:ThmCond3}, for $\eta>0$ sufficiently small, there is $K_u$ uniform on the network parameters such that for all $z_1,z_2\in \mathcal S$ the norm \[ \|D_{z_1}{F_\varepsilon}-D_{z_2}{F_\varepsilon}\|_{u,p}:=\sup_{(u,w,v)\in \mathcal C_p^u}\frac{\|(D_{z_1}{F_\varepsilon}-D_{z_2}{F_\varepsilon})(u,w,v)\|_{p}}{\|(u,w,v) \|_{p}} \] satisfies \[ \|D_{z_1}{F_\varepsilon}-D_{z_2}{F_\varepsilon}\|_{u,p}\leq K_ud_\infty(z_1,z_2). 
\] \end{proposition} \begin{proof} Notice that from the regularity assumptions on the coupling function $h$, we can write the entries of $D_{z_1}{F_\varepsilon}-D_{z_2}{F_\varepsilon}$ as \begin{equation}\label{Eq:DiffAuxMapDiff} [D_{z_1}{F_\varepsilon}-D_{z_2}{F_\varepsilon}]_{k\ell}=\left\{\begin{array}{ll} \left[\sum_{\ell=1}^L\mathcal O(\Delta^{-1})A^{ll}_{k\ell}+\sum_{m=1}^M\mathcal O(\Delta^{-1})A^{lh}_{km}\right]d_{\infty}(z_1,z_2)& k=\ell \leq L\\ \mathcal O(\Delta^{-1})A^{ll}_{k\ell}d_\infty(z_1,z_2)& k\neq \ell \leq L\\ \mathcal O(\Delta^{-1})A^{lh}_{k(\ell-L)}d_\infty(z_1,z_2)& k\leq L, \ell> L\\ \mathcal O(\Delta^{-1})A^{hl}_{k\ell}d_\infty(z_1,z_2)& k> L, \ell\leq L\\ \mathcal O(\Delta^{-1})A^{hh}_{(k-L)(\ell-L)}d_\infty(z_1,z_2)&k\neq \ell > L\\ \left[\mathcal O(1)+\mathcal O(\Delta^{-1}M)\right]d_{\infty}(z_1,z_2)&k= \ell > L\\ \end{array}\right. \end{equation} Take $(u,w,v)\in \mathcal C_p^u$ such that $\|(u,w,v)\|_p=1$ and $(u',w',v')^t=(D_{z_1}{F_\varepsilon}-D_{z_2}{F_\varepsilon})(u,w,v)^{t}$. \begin{align*} u'_i&=\mathcal O(\Delta^{-1})\left[\sum_{\ell=1}^LA^{ll}_{k\ell}+\sum_{m=1}^MA^{lh}_{km}\right]u_id_{\infty}(z_1,z_2)+\\ &\phantom{=}+\mathcal O(\Delta^{-1})\left[\sum _{\ell=1}^LA^{ll}_{i\ell}u_n+\sum_{m=1}^{M_u}A^{lh}_{im}w_m+\sum_{m=1}^{M_s}A^{lh}_{i(m+M_u)}v_{m}\right]d_{\infty}(z_1,z_2)& 1\leq i\leq L\\ w'_j&=\left[\mathcal O(1)+\mathcal O(\Delta^{-1}M)\right]w_{j}d_{\infty}(z_1,z_2)\\ &\phantom{=}+\mathcal O(\Delta^{-1})\left[\sum_iA^{hl}_{ji}u_i+\sum_{m=1}^{M_u}A^{hh}_{jm}w_m+\sum_{m=1}^{M_s}A^{hh}_{j(m+M_u)}v_m\right]d_\infty(z_1,z_2)&1\leq j\leq M_u\\ v'_j&=\left[\mathcal O(1)+\mathcal O(\Delta^{-1}M)\right]v_jd_{\infty}(z_1,z_2)+\\ &\phantom{=}+\mathcal O(\Delta^{-1})\left[\sum_{\ell=1}^{L}A^{hl}_{j\ell}u_\ell+\sum_{m=1}^{M_u}A^{hh}_{jm}w_m+\sum_{m=M_u+1}^{M}A^{hh}_{jm}v_m\right]d_{\infty}(z_1,z_2)&1\leq j\leq M_s \end{align*} \begin{align*} \|u'\|_{p,\mb R^L}&\leq\mathcal O(\Delta^{-1}\delta N^{1/p})d_\infty(z_1,z_2)=\mathcal O(\eta) d_\infty(z_1,z_2) \\ \|w'\|_{p,\mb R^{M_u}}&\leq\mathcal O(1)d_\infty(z_1,z_2)\\ \|v'\|_{p,\mb R^{M_s}}&\leq\mathcal O(1)d_\infty(z_1,z_2) \end{align*} which implies the proposition. \end{proof} \begin{lemma}\label{Lem:CovDec} Suppose that $K_0>\mathcal O(K_u)$ and $W$ is an embedded $(L+M_u)-$dimensional torus which is the closure of $W_0\in\mathcal W_{p,K_0}$. Then, for every $n\in\mb N$, ${F_\varepsilon}^n(W)$ is the closure of a finite union of manifolds, $W_{n,k}\in \mathcal W_{p,K_0}$, $k\in\mathcal K_n$ (and the difference ${F_\varepsilon}^n(W)\setminus \cup \{W_{n,k}\}_{k\in\mathcal K_n}$ consists of finite union of manifolds of lower dimension). \end{lemma} \begin{proof} As in Corollary~\ref{cor2}, since $\pi_u|_{W_0}$ is a diffeomorphism, the map $\pi_u\circ{F_\varepsilon}^n\circ \pi_u|_{W_0}^{-1}:\mathbb T^{L+M_u}\rightarrow \mathbb T^{L+M_u}$ is a well defined local diffeomorphism between compact manifolds, and therefore is a covering map. One can then find a partition $\{R_{n,k}\}_{k\in\mathcal K_n}$ of $\mathbb T^{L+M_u}$ such that $\pi_u\circ{F_\varepsilon}^n\circ \pi_u|_{W_0}^{-1}(R_{n,k})=\mathbb T^{L+M_u}$ and, defining $W_{n,k}:=\pi_u\circ{F_\varepsilon}^n\circ \pi|_{W_0}^{-1}(R^o_{n,k})$, where $R^o_{n,k}$ is the interior of $R_{n,k}$, $\pi_u(W_{n,k})=I^{L+M_u}$. From Proposition~\ref{Prop:InvRegularityLip} in Appendix A it follows that $W_{n,k}\in \mathcal W_{p,K_0}$ and $\{W_{n,k}\}_{k\in\mathcal K_n}$ is the desired partition. 
\end{proof} \begin{figure} \caption{The admissible manifold $W_0$ is mapped under $F_\varepsilon$ to the union of sub manifolds $W_{1,1}$, $W_{1,2}$, $W_{1,3}$, and $W_{1,4}$.} \label{Fig:AdmMnfds} \end{figure} \subsection{Evolution of Densities on the Admissible Manifolds for $F_\varepsilon$} Recall that $\pi_u$ and $\Pi_u$ are projections on the first $L+M_u$ coordinates in $\mathbb T^N$ and $\mb R^N$ respectively. Given an admissible manifold $W\in\mathcal W_{p,K_0}$, which is the graph of the function $E:I^{L+M_u}\rightarrow \mathcal R$, for every $z_u\in I^{L+M_u}$ the map \[ \pi_u\circ{F_\varepsilon}\circ(id,E)(z_u) \] gives the evolution of the first $L+M_u$ coordinates of points in $W$. The Jacobian of this map is given by $$ J(z_u)=\left|\Pi_u\cdot D_{(id,E)(z_u)}{F_\varepsilon}\cdot(\Id, D_{z_u}E)\right|. $$ In the next proposition we upper bound the distortion of such a map. \begin{proposition}\label{Prop:JacobEst} Let $W\in W_{p,L}$ be an admissible manifold and suppose $z_u,\overline z_u\in I^{L+M_u}$, then $$ \left|\frac{J(z_u)}{J(\overline z_u)}\right|\leq \exp\left\{[\mathcal O(\Delta^{-1}L^{1+1/p}\delta^{1/q})+\mathcal O(M)]d_\infty(z_u,\overline z_u)\right\}. $$ \end{proposition} \begin{proof} \begin{align*} \frac{|J(z_u)|}{|J(\overline z_u)|}&=\frac{\left|\Pi_u\cdot D_{(id,E)(z_u)}{F_\varepsilon}\cdot(\Id, D_{z_u}E)\right|}{\left|\Pi_u\cdot D_{(id,E)(\overline z_u)}{F_\varepsilon}\cdot(\Id, D_{z_u}E)\right|}\frac{\left|\Pi_u\cdot D_{(id,E)(\overline z_u)}{F_\varepsilon}\cdot(\Id, D_{z_u}E)\right|}{\left|\Pi_u\cdot D_{(id,E)(\overline z_u)}{F_\varepsilon}\cdot(\Id, D_{\overline z_u}E)\right|}\\ &=:(A)\cdot(B) \end{align*} $(A)$ can be bounded with computations similar to the ones carried on in Proposition \ref{Prop:DistJac}: \[ (A)\leq \exp\left\{\left[\mathcal O(\Delta^{-1}\delta L)+\mathcal O(M)\right]d_\infty(z,\overline z)\right\}. \] To estimate $(B)$ we also factor out the number $Df=\sigma$ from the first $L$ columns of $\Pi_uD_{(id,E)(\overline z_u)}{F_\varepsilon}$, and $Dg_j(\overline y_{u,j})$ from the $(L+j)-$th column when $1\leq j\leq M_u$ and thus obtain \[ (B)=\frac{\sigma^L}{\sigma^L}\cdot\frac{\prod_{j=1}^{M_u}Dg_j}{\prod_{j=1}^{M_u}Dg_j}\cdot\frac{\left|\Pi_u\mathcal D({(id,E)(\overline z_u)})\cdot(\Id, D_{z_u}E)\right|}{\left|\Pi_u\mathcal D((id,E) (\overline z_u))\cdot(\Id, D_{\overline z_u}E)\right|} \] where $\mathcal D(\cdot)$ is the same matrix defined in \eqref{Eq:DExpression} apart from the last $M_s$ columns which are kept equal to the corresponding columns of $D_{\cdot}{F_\varepsilon}$. The first two ratios trivially cancel. For the third factor we proceed in a fashion similar to previous computations using Proposition \ref{Prop:EstTool} in the appendix. Defining $B:=\mathcal D((id,E)(\overline z_u))-\Id$, we are reduced to estimate \begin{align*} \frac{\left|\Id+\Pi_u\cdot B\cdot(\Id, D_{z_u}E)\right|}{\left|\Id+\Pi_u\cdot B\cdot(\Id, D_{\overline z_u}E)\right|} \end{align*} where we used that $\Pi_u\mathcal D\cdot(\Id,D_{z_u}E)-\Id=\Pi_uB\cdot(\Id,D_{z_u}E)$. 
Since $\|(\Id,D_{z_u}E)\|_p\leq(1+\beta_{u,p})$ for any $z_u\in \mathcal S$, it follows, choosing $\eta>0$ sufficiently small in \eqref{Eq:ThmCond1}-\eqref{Eq:ThmCond3} and from equation \eqref{Eq:bOperatorNormBound} that the operator norm \begin{equation}\label{Eq:bContEvar} \|\Pi_u\cdot B\cdot (\Id,D_{z_u}E)\|_{p}<\lambda<1 \end{equation} It is also rather immediate to upper bound the column norms of $\Pi_u\cdot B\cdot (0,D_{z_u}E-D_{\overline z_u}E)$ and obtain \begin{align*} \|\Col^i[\Pi_uB(0,D_{z_u}E-D_{\overline z_u}E)]\|_p&\leq\mathcal O(\Delta^{-1} L^{1/p}\delta^{1/q})\|DE\|_{\Lip,p}d_p(z_u,\overline z_u)\\ &\leq \mathcal O(\Delta^{-1} M^{1/p})d_p(z_u,\overline z_u) \end{align*} so that by Proposition \ref{Prop:EstTool}, the overall estimate for (B) is \begin{equation}\label{Eq:EstRatioDetPointE} \frac{\left|\Pi_u\cdot \mathcal D({(id,E)(\overline z_u)})\cdot(\Id, D_{z_u}E)\right|}{\left|\Pi_u\cdot \mathcal D((id,E)(\overline z_u)\cdot(\Id, D_{\overline z_u}E)\right|}\leq\exp\left\{\mathcal O(\Delta^{-1}L^{1+1/p}\delta^{1/q})d_p(z_u,\overline z_u)\right\}. \end{equation} \end{proof} \subsection{Invariant Cone of Densities on Admissible Manifolds for $F_\varepsilon$}\label{subsec:invariantcones} Take $W\in\mathcal W_{p,K_0}$. A density $\rho$ on $W$ is a measurable function $\varphi:W\rightarrow\mb R^+$ such that the integral of $\varphi$ over $W$ with respect to $m_W$ is one, where $m_W$ is defined to be the measure obtained by restricting the volume form in $\mathbb T^N$ to $W$. The measure ${\pi_u}_{*}(\varphi\cdotm_W)$ is absolutely continuous with respect to $m_{L+M_u}$ on $\mathbb T^{L+M_u}$ and so its density $\varphi_u:\mathbb T^{L+M_u}\rightarrow\mb R^+$ is well defined. \begin{definition}\label{Def:UnstableMarginal} For every $W\in\mathcal W_{p,K_0}$ and for every $\varphi:W\rightarrow\mb R^+$ we define $\varphi_u$ as \[ \varphi_u:=\frac{d{\pi_u}_{*}(\varphi\cdotm_W)}{dm_{L+M_u}}. \] \end{definition} Consider the set of densities \[ \mathcal C_{a,p}(W):=\left\{\varphi:W\rightarrow \mb R^+\mbox{ s.t. }\frac{\varphi_u(z_u)}{\varphi_u(\overline z_u)}\leq \exp[ad_p(z_u,\overline z_u)]\right\}. \] The above set consists of all densities on $W$ whose projection on the first $L+M_u$ coordinates has the prescribed regularity property. \begin{proposition}\label{Prop:DensEvSubm} For every $a>a_c$, where \begin{equation}\label{Eq:Acriticattfix} a_c={\mathcal O(\Delta^{-1}L^{1+1/p}\delta^{1/q})+\mathcal O(M)} \end{equation} $W\in \mathcal W_{p,K_0}$ and $\varphi\in\mathcal C_{a,p}(W)$ the following holds. Suppose that $\{W_k'\}_k$ is the partition of ${F_\varepsilon}(W)$ given by Lemma \ref{Lem:CovDec} and that $W_k$ is a manifold of $W$ such that ${F_\varepsilon}(W_k)=W_k'\in \mathcal W_{p,K_0}$, then for every $k$, the density $\varphi_k'$ on $W_k'$ defined as \[ \varphi_k':=\frac{1}{\int_{W_k}\varphi dm_{W}}\frac{d{F_\varepsilon}_*(\varphi|_{W_k}\cdotm_{W_k})}{dm_{W_k'}} \] belongs to $\mathcal C_{a,p}(W_k')$. \end{proposition} \begin{proof} It is easy to verify that $\varphi'_k$ is well-defined. Let $G_k$ be the inverse of the map ${F_\varepsilon}|_{W_k}:W_k\rightarrow W_k'$. From Definition \ref{Def:UnstableMarginal} follows that \[ (\varphi'_k)_u:=\frac{d(\pi_u\circ {F_\varepsilon}\circ (id,E))_*(\varphi_u|_{\pi_u(W_k)}\cdotm_{L+M_u})}{dm_{L+M_u}} \] where $E$ is the map whose graph equals $W$. 
This implies that \[ (\varphi'_k)_u(z_u)=\frac{\varphi_u(G_k(z_u))}{J(G_k(z_u))} \] and therefore \begin{align*} \frac{(\varphi'_k)_u(z_u)}{(\varphi'_k)_u(\overline z_u)}&=\frac{\varphi_u(G_k(z_u))}{\varphi_u(G_k(\overline z_u))}\frac{J(G_k(\overline z_u))}{J(G_k(z_u))}\\ &\leq \exp\left[\overline\sigma^{-1}a d_p(z_u,\overline z_u)\right] \exp\left\{[\mathcal O(\Delta^{-1}L^{1+1/p}\delta^{1/q})+\mathcal O(M)]d_p(z_u,\overline z_u)\right\}\\ &\leq \exp\left\{[\overline\sigma^{-1}a+\mathcal O(\Delta^{-1}L^{1+1/p}\delta^{1/q})+\mathcal O(M)]d_p(z_u,\overline z_u))\right\} . \end{align*} Taking $a_c$ as in \eqref{Eq:Acriticattfix}, the proposition is verified. \end{proof} At this point we can prove that the system admits invariant physical measures and that their marginals on the first $L+M_u$ coordinates is in the cone $C_{a,p}$ for $a>a_c$. The main ingredients we use are Krylov-Bogolyubov's theorem, and Hopf's argument \cite{MR3220769, MR1326374}. \begin{proof}[Proof of Theorem \ref{Thm:PhysMeasFep}] Pick a periodic orbit, $O(z_s)$ of $g_{M_u+1}\times \dots \times g_{M}$ and let $U$ be the union of the connected components of $\mathcal R$ containing points of $O(z_s)$. Pick $y_s\in U$ and take the admissible manifold $W_0:=\mathbb T^{L+M_u}\times\{y_s\}\in \mathcal W_{p,K_0}$. Consider a density $\rho\in \mathcal C_{a,p}(W_0)$ with $a>a_c$ such that the measure $\mu_0:=\rho\cdot m_{W}$ is the probability measure supported on $W_0$ with density $\rho$ with respect to the Lebesgue measure on $W_0$. Consider the sequence of measures $\{\mu_t\}_{t\in\mb N_0}$ defined as \[ \mu_t:=\frac{1}{t+1}\sum_{i=0}^{t}{F_\varepsilon}^i_*\mu_0. \] From Lemma \ref{Lem:CovDec} we know that ${F_\varepsilon}^i(W_0)=\bigcup_{k\in\mathcal K_i}W_{i,k}$ modulo a negligible set w.r.t. ${F_\varepsilon}^i_*(\mu_0)$, and that \[ {F_\varepsilon}^i_*(\mu_0)=\sum_{k\in\mathcal K_i}{F_\varepsilon}^i_*\mu_0(W_{i,k})\mu_{i,k}, \] where $\mu_{i,k}$ is a probability measure supported on $W_{i,k}$ for all $i$ and $k\in\mathcal K_i$. It is a consequence of Proposition \ref{Prop:DensEvSubm} that $\mu_{i,k}=\rho_{i,k}\cdot m_{W_{i,k}}$ with $\rho_{i,k}\in C_{a,p}(W_{i,k})$. Since ${F_\varepsilon}$ is continuous, every subsequence of $\{\mu_t\}_{t\in\mb N_0}$ has a convergent subsequence in the set of all probability measures of $\mathcal S$ with respect to the weak topology ($\{\mu_t\}_{t\in\mb N_0}$ is tight). Let $\overline \mu$ be a probability measure which is a limit of a converging subsequence. By convexity of the cone $\mathcal C_{a,p}$ the second assertion of the theorem holds for $\overline \mu$. Now let $V_1,\dots,V_n$ be the components of $U$ where $n$ is the period of $O(z_s)$. Since all stable manifolds are tangent to a constant cone which has a very small angle (in particular less then $\pi/4$) with the vertical direction (corresponding to the last $M_s$ directions of $\mathbb T^N$), they will intersect all horizontal tori $\mathbb T^{L+M_u}\times\{y\}$ with $y\in V_k$. It follows from the standard arguments that $\overline \mu$ has absolutely continuous disintegration on foliations of local unstable manifolds, which in the case of an endomorphism, are defined on a set of histories called inverse limit set (see \cite{qian2009smooth} for details). 
Following the standard Hopf argument (\cite{MR3220769, MR1326374}), one first notices that fixed a point $x\in V_i$ on the support of $\overline \mu$, a history $\overline x\in(\mathbb T^N)^\mb N$, and a continuous observable $\varphi$, from the definition of $\overline \mu$, almost every point on the local unstable manifold associated to the selected history has a well defined forward asymptotic Birkhoff average (computed along $\overline x$) and every point on that stable manifold through $x$ has the same asymptotic forward Birkhoff average. The aforementioned property of the stable leaves implies that any two unstable manifolds are crossed by the same stable leaf. This, together with absolute continuity of the stable foliation, implies that forward Birkhoff averages of $\varphi$ are constant almost everywhere on the support of $\overline \mu$ which implies ergodicity. \end{proof} \subsection{Jacobian of the Holonomy Map along Stable Leaves of $F_\varepsilon$} In order to prove Proposition~\ref{Prop:BadSetMeasAtt} we need to upper bound the Jacobian of the holonomy map along stable leaves. It is known that for a $C^2$ uniformly hyperbolic (or even partially hyperbolic) diffeomorphism the holonomy map along the stable leaves is absolutely continuous with respect to the induced Lebesgue measure on the transversal to the leaves \cite{hasselblatt2005partially}. This can be easily generalised to the non-invertible case. First of all, let us recall the definition of holonomy map. We consider holonomies between manifolds tangent to the unstable cone. \begin{definition} Given $D_1$ and $D_2$ embedded disks of dimension $L+M_u$, tangent to the unstable cone $\mathcal C^u$, we define the holonomy map $\pi:D_1\rightarrow D_2$ \[ \pi(x)=W^s(x)\cap D_2. \] As before we define $m_D$ be the Lebesgue measure on $D$ induced by the volume form on $\mathbb T^N$. \end{definition} \begin{remark} For the truncated dynamical system ${F_\varepsilon}$, fixing $D_1$, one can always find a sufficiently large $D_2$ such that the map is well defined everywhere in $D_1$. \end{remark} \begin{proposition}\label{Prop:JacEstBnd} Given $D_1$ and $D_2$ admissible embedded disks tangent to the unstable cone, the holonomy map $\pi:D_1\rightarrow D_2$ associated to ${F_\varepsilon}$ is absolutely continuous with respect to $m_{D_1}$ and $m_{D_2}$ which are the restrictions of Lebesgue measure to the two embedded disks. Furthermore, if $J_s$ is the Jacobian of $\pi$, then \begin{equation}\label{Eq:JacEstUppBnd} J_s(z)\leq\exp\left\{[\mathcal O(\Delta^{-1}L\delta )+\mathcal O(M)]\frac{1}{1-\lambda}d_\infty(z,\pi(z))\right\},\quad \forall z\in D_1. \end{equation} \end{proposition} \begin{proof} The absolute continuity follows from results in \cite{MR889254} (see Appendix \ref{Ap:GraphTrans}), as well as the estimate on the Jacobian. In fact it is proven in \cite{MR889254} that \[ J_s(z)=\prod_{k=0}^\infty\frac{\Jac\left(D_{z_k}{F_\varepsilon}| V_k\right)}{\Jac\left(D_{\overline z_k}{F_\varepsilon}|\overline V_k\right)}\quad \forall z\in D_1, \] where $z_k:={F_\varepsilon}^k(z)$, $\overline z_k:={F_\varepsilon}^k(\pi(z))$, $V_k:=T_{z_k}{F_\varepsilon}^k(D_1)$ and $\overline V_k:=T_{\overline z_k}{F_\varepsilon}^k(D_2)$. 
Since $D_1$ and $D_2$ are tangent to the unstable cone, one can write ${F_\varepsilon}^k(D_1)$ and ${F_\varepsilon}^k(D_2)$ locally as graphs of functions $E_{1,k}:B^u_\delta(z_k)\rightarrow B^s_\delta(z_k)$ and $E_{2,k}:B^u_\delta(\overline z_k))\rightarrow B^s_\delta(\overline z_k)$, with $E_{i,k}$ given by application of the graph transform on $E_{i,k-1}$, and $E_{i,0}$ such that $(\Id,D_zE_{i,0})(\mb R^{L+M_u})= T_z D_i$. For every $k\in\mb N\cup\{0\}$, $(\Id, D_{\pi_u(z_k)}E_{1,k})\circ \Pi_u|_{V_k}=\Id|_{V_k}$, \[ \Jac(\Id, D_{\pi_u(z_k)}E_{1,k})\Jac(\Pi_u|_{V_k})=1 \] and analogously \[ \Jac(\Id, D_{\pi_u(\overline z_k)}E_{2,k})\Jac(\Pi_u|_{\overline V_k})=1. \] Since \begin{align*} \left|\Pi_u\circ D_{z_k}{F_\varepsilon}(\Id,D_{\pi_u(z_k)}E_{1,k})\right|&=\Jac(\Pi_u\circ D_{z_k}{F_\varepsilon}(\Id,D_{\pi_u(z_k)}E_{1,k}))\\ &=\Jac(\Pi_u|_{V_k})\Jac(\Id, D_{\pi_u(z_k)}E_{1,k})\Jac\left(D_{z_k}{F_\varepsilon}|V_k\right) \end{align*} and, analogously, \begin{align*} \left|\Pi_u\circ D_{\overline z_k}{F_\varepsilon}(\Id,D_{\pi_u(\overline z_k)}E_{2,k})\right|&=\Jac(\Pi_u\circ D_{\overline z_k}{F_\varepsilon}(\Id,D_{\pi_u(\overline z_k)}E_{2,k}))\\ &=\Jac(\Pi_u|_{\overline V_k})\Jac(\Id, D_{\pi_u(\overline z_k)}E_{2,k})\Jac\left(D_{\overline z_k}{F_\varepsilon}|\overline V_k\right). \end{align*} So \begin{align*} J_s(z)&= \prod_{k=0}^\infty\frac{\Jac\left(D_{z_k}{F_\varepsilon}|V_k\right)}{\Jac\left(D_{\overline z_k}{F_\varepsilon}|V_k\right)}\frac{\Jac\left(D_{\overline z_k}{F_\varepsilon}|V_k\right)}{\Jac\left(D_{\overline z_k}{F_\varepsilon}|\overline V_k\right)}\\ &=\prod_{k=0}^\infty\frac{\left|\Pi_u\circ D_{z_k}{F_\varepsilon}(\Id,D_{\pi_u(z_k)}E_{1,k})\right|}{\left|\Pi_u\circ D_{\overline z_k}{F_\varepsilon}(\Id,D_{\pi_u(z_k)}E_{1,k})\right|}\frac{\left|\Pi_u\circ D_{\overline z_k}{F_\varepsilon}(\Id,D_{\pi_u(z_k)}E_{1,k})\right|}{\left|\Pi_u\circ D_{\overline z_k}{F_\varepsilon}(\Id,D_{\pi_u(\overline z_k)}E_{2,k})\right|}. \end{align*} The first ratio can be deduced with minor changes from the estimate \eqref{Eq:EstRatioDetPointE} in the proof of Proposition \ref{Prop:JacobEst}. So \begin{align*} \prod_{k=0}^\infty\frac{\left|\Pi_u\circ D_{z_k}{F_\varepsilon}(\Id,D_{\pi_u(z_k)}E_{1,k})\right|}{\left|\Pi_u\circ D_{\overline z_k}{F_\varepsilon}(\Id,D_{\pi_u(z_k)}E_{1,k})\right|}&\leq \exp\left\{[\mathcal O(\Delta^{-1}L\delta)+\mathcal O(M)]\sum_{k=0}^{\infty}d_p(z_k,\overline z_k)\right\}\\ &\leq\exp\left\{[\mathcal O(\Delta^{-1}L\delta)+\mathcal O(M)]\frac{1}{1-\overline \lambda}d_p(z_0,\overline z_0)\right\} \end{align*} where we used the fact that $z_0$ and $\overline z_0$ lay on the same stable manifold and, by Proposition \ref{Prop:InvConesForTildeF}: $d_{p}(z_k,\overline z_k)\leq \overline \lambda^kd_p(z_0,\overline z_0)$. To estimate the other ratio we proceed making similar computations leading to the estimate in \eqref{Eq:EstRatioDetPointE}. 
Once more we factor out $\sigma$ from the first $N$ columns of $D_{\overline z_k}{F_\varepsilon}$ and, for all $j\in[M]$, $D_{y_j}g$ from the $(L+j)-$th column and thus \begin{align*} \frac{\left|\Pi_u\circ D_{\overline z_k}{F_\varepsilon}(\Id,D_{\pi_u( z_k)}E_{1,k})\right|}{\left|\Pi_u\circ D_{\overline z_k}{F_\varepsilon}(\Id,D_{\pi_u(\overline z_k)}E_{2,k})\right|}&=\frac{\sigma^L}{\sigma^L}\cdot\frac{\prod_{j=1}^{M_u}Dg_j}{\prod_{j=1}^{M_u}Dg_j}\frac{\left|\Pi_u\circ D(\overline z_k)(\Id,D_{\pi_u( z_k)}E_{1,k})\right|}{\left|\Pi_u\circ D(\overline z_k)(\Id,D_{\pi_u(\overline z_k)}E_{2,k})\right|}\\ &=\frac{\left|\Pi_u\circ \mathcal D(\overline z_k)(\Id,D_{\pi_u( z_k)}E_{1,k})\right|}{\left|\Pi_u\circ \mathcal D(\overline z_k)(\Id,D_{\pi_u(\overline z_k)}E_{2,k})\right|} \end{align*} where $\mathcal D(z_k)$ is defined as in \eqref{Eq:DExpression}. Defining for every $k\in\mb N$ \begin{align*} B_k&:=\Pi_u\mathcal D(\overline z_k)(\Id,D_{\pi_u(z_k)}E_{1,k})-\Id\\ &=\Pi_u(\mathcal D(\overline z_k)-\Id)(\Id,D_{\pi_u( z_k)}E_{1,k}) \end{align*} and analogously \begin{align*} \overline B_k&:=\Pi_u\mathcal D(\overline z_k)(\Id,D_{\pi_u(\overline z_k)}E_{2,k})-\Id\\ &=\Pi_u(\mathcal D(\overline z_k)-\Id)(\Id,D_{\pi_u( \overline z_k)}E_{2,k}). \end{align*} we have proved in \eqref{Eq:bContEvar} that $\|\overline B_k\|,\|B_k\|\leq\lambda<1$. It remains to estimate the norm of the columns of $\overline B_k-B_k$. For all $\ell\in[L]$ \begin{align*} \|\Col^\ell[\overline B_k-B_k]\|&=\|\Col^\ell [\Pi_u(\mathcal D(\overline z_k)-\Id)(0,D_{\pi_u( z_k)}E_{1,k}-D_{\pi_u(\overline z_k)}E_{2,k})]\|\\ &\leq\left\|\Pi_u(\mathcal D(\overline z_k)-\Id)|_{0\oplus\mb R^{M_s}}\right\|d_u(\overline V_k,V_k)\\ &\leq\mathcal O(\Delta^{-1} \delta)d_u(\overline V_k,V_k). \end{align*} where we used that, as can be easily deduced from the definition of $d_u$ in \eqref{Eq:Defd_u} of Appendix \ref{Ap:GraphTrans}, that $\|(0,D_{\pi_u( z_k)}E_{1,k}-D_{\pi_u(\overline z_k)}E_{2,k})\|=d_u(\overline V_k,V_k)$. By Proposition \ref{Prop:EstTool} in Appendix \ref{Ap:TechComp}, we obtain \[ \frac{\left|\Pi_u\circ D_{\overline z_k}{F_\varepsilon}(\Id,D_{\pi_u( z_k)}E_{1,k})\right|}{\left|\Pi_u\circ D_{\overline z_k}{F_\varepsilon}(\Id,D_{\pi_u(\overline z_k)}E_{2,k})\right|}\leq \exp\left\{\mathcal O(L\Delta^{-1}\delta)d_u(\overline V_k,V_k)\right\}. \] By Proposition \ref{Prop:ContTangentSpace} we know that, if $\beta_u$ is sufficiently small, then $d_u(\overline V_k,V_k)\leq \lambda_*d_u(V_0,W_0)$ for some $\lambda_*<1$, and this implies that \begin{align*} \prod_{k=0}^\infty \frac{\left|\Pi_u\circ D_{\overline z_k}{F_\varepsilon}(\Id,D_{\pi_u( z_k)}E_{1,k})\right|}{\left|\Pi_u\circ D_{\overline z_k}{F_\varepsilon}(\Id,D_{\pi_u(\overline z_k)}E_{2,k})\right|}&\leq \exp\left\{\mathcal O(L\Delta^{-1} \delta)\sum_{k=0}^{\infty}d_u(\overline V_k,V_k)\right\} \\ &\leq \exp\left\{\mathcal O(L\Delta^{-1}\delta)d_u(V_0,W_0)\right\}\\ &\leq \exp\left\{\mathcal O(L\Delta^{-1}\delta\beta_u)\right\}. \end{align*} \end{proof} \subsection{Proof of Proposition \ref{Prop:BadSetMeasAtt}} The following result shows that the set $\mathcal B_\varepsilon^{(s,j)}\times \mathbb T^{M_u}\times\mathcal R$, which is the set where fluctuations of the dynamics of a given hub exceed a given threshold, is contained in a set, $\widetilde {\mathcal B}_\varepsilon^{(s,j)}$, that is the union of global stable manifolds. 
This is important to notice because, even if the product structure of the former set is not preserved when taking preimages under ${F_\varepsilon}$, the preimage of $\widetilde {\mathcal B}_\varepsilon^{(s,j)}$ will again be a union of global stable manifolds. Furthermore, if $\beta_s$ is sufficiently small, which is guaranteed by the heterogeneity conditions, the set $\widetilde {\mathcal B}_\varepsilon^{(s,j)}$ will be \say{close} (topologically and with respect to the relevant measures) to ${\mathcal B}_\varepsilon^{(s,j)}$. \begin{lemma}\label{Lem:IncSetUnMan} Consider $\mathcal B_\varepsilon^{(s,j)}$ as in \eqref{Eq:DefBadSetComp}. Then there exists a constant $C_\#>0$ such that \begin{equation}\label{Eq:IncSetUnMan} \mathcal B_\varepsilon^{(s,j)}\times \mathbb T^{M_u} \times \mathcal R \quad \subset\quad \widetilde {\mathcal B}_\varepsilon^{(s,j)}:=\bigcup_{z\in \mathcal B_\varepsilon^{(s,j)}\times \mathbb T^{M_u} \times \mathcal R}W^s(z)\quad\subset\quad {\mathcal B}_{\varepsilon_1}^{(s,j)}\times \mathbb T^{M_u} \times \mathcal R , \end{equation} with $\varepsilon_1=\varepsilon+C_\# M^{1/p}\beta_{s,p}$. \end{lemma} \begin{proof} The first inclusion is trivial. Take $z\in {\mathcal B}_{\varepsilon}^{(s,j)}\times \mathbb T^{M_u} \times \mathcal R$ such that \[ \left|\frac{1}{\Delta}\sum_i A^{hl}_{ji}\theta_{s_1} (x_i)-\kappa_j\overline\theta_s \right|\ge \varepsilon|s_1|. \] Since $W^s(z)$ is tangent to the stable cone $\mathcal C^s$, by \eqref{Eq:StabCon} for any $z'\in W^s(z)$, $d_p(\pi_u(z'),\pi_u(z))\leq\beta_{s,p}$. This implies that \begin{align*} \left|\left|\frac{1}{\Delta}\sum_iA^{hl}_{ji}\theta_{s_1} (x_i)-\kappa_j\overline\theta_s \right|-\left|\frac{1}{\Delta}\sum_iA^{hl}_{ji}\theta_{s_1}(x'_i)-\kappa_j\overline\theta_s\right|\right|&\leq \left|\frac{1}{\Delta}\sum_iA^{hl}_{ji}(\theta_{s_1} (x_i)-\theta_{s_1}(x'_i))\right|\\ &\leq\frac{1}{\Delta}\sum_iA^{hl}_{ji}|D\theta_{s_1}|d_p(\pi_u(z'),\pi_u(z))\\ &\leq |s_1|\mathcal O(M^{1/p}\beta_{s,p}) \end{align*} proving the lemma. \end{proof} \begin{proof}[Proof of Proposition \ref{Prop:BadSetMeasAtt}] As in the proof of Theorem \ref{Thm:PhysMeasFep}, take an embedded $(L+M_u)$-dimensional torus $W_0\in\mathcal W_{p,K_0}$ such that $\pi_u|_{W_0}:W_0\rightarrow \mathbb T^{L+M_u}$ is a diffeomorphism, a density $\rho\in \mathcal C_{a,p}(W_0)$ with $a>a_c$ so that $\rho\mu_W$ is a probability measure and the limit $\overline \mu$ of the sequence of measures $\{\mu_t\}_{t\in\mb N_0}$ defined as \[ \mu_t:=\frac{1}{t+1}\sum_{i=0}^{t}{F_\varepsilon}^i_*\mu_0 \] is an SRB measure. From Lemma \ref{Lem:CovDec} we know that ${F_\varepsilon}^i(W_0)=\bigcup_{k\in\mathcal K_i}W_{i,k}$ modulo a negligible set w.r.t. ${F_\varepsilon}^i_*(\mu_0)$, and that \[ {F_\varepsilon}^i_*(\mu_0)=\sum_{k\in\mathcal K_i}{F_\varepsilon}^i_*\mu_0(W_{i,k})\mu_{i,k}, \] where $\mu_{i,k}$ is a probability measure supported on $W_{i,k}$ for all $i$ and $k\in\mathcal K_i$. It is a consequence of Proposition \ref{Prop:DensEvSubm} that $\mu_{i,k}=\rho_{i,k}\cdot m_{W_{i,k}}$ with $\rho_{i,k}\in C_{a,p}(W_{i,k})$. 
For every $t\in\mb N_0$, \begin{align} \mu_t(\widetilde {\mathcal B}_\varepsilon^{(s,j)})&\leq \mu_t({\mathcal B}_{\varepsilon_1}^{(s,j)}\times\mb T^{M_u}\times \mathcal R)\nonumber\\ &=\sum_{i=0}^{t}\sum_{k\in\mathcal K_i}\frac{{F_\varepsilon}^i_*\mu_0(W_{i,k})}{t+1}\mu_{i,k}({\mathcal B}_{\varepsilon_1}^{(s,j)}\times \mb T^{M_u}\times \mathcal R)\nonumber\\ &=\sum_{i=0}^{t}\sum_{k\in\mathcal K_i}\frac{{F_\varepsilon}^i_*\mu_0(W_{i,k})}{t+1}\int_{{\mathcal B}_{\varepsilon_1}^{(s,j)}\times \mathbb T^{M_u}}\rho_{i,k}dm_{L+M_u}\nonumber\\ &\leq\sum_{i=0}^{t}\sum_{k\in\mathcal K_i}\frac{{F_\varepsilon}^i_*\mu_0(W_{i,k})}{t+1}\exp\left[\mathcal O (\Delta^{-1}L^{1+2/p}\delta^{1/q})+\mathcal O(ML^{1/p})\right]m_{L+M_u}({\mathcal B}_{\varepsilon_1}^{(s,j)}\times\mathbb T^{M_u})\label{Eq:Ineq1MeasEst}\\ &= \exp\left[\mathcal O (\Delta^{-1}L^{1+2/p}\delta^{1/q})+\mathcal O(ML^{1/p})\right]m_{L+M_u}({\mathcal B}_{\varepsilon_1}^{(s,j)}\times \mathbb T^{M_u})\label{Eq:Ineq2MeasEst} \end{align} Since the set $\widetilde {\mathcal B}_\varepsilon^{(s,j)}$ might not be measurable in general, in the above and in what follows we abuse notation: whenever the measure of such a set or of one of its sections is computed, it should be understood as its outer measure. To prove the bound \eqref{Eq:Ineq1MeasEst} we used the fact that $\rho_{i,k}\in C_{a,p}(W_{i,k})$ for $a>a_c$ and thus its supremum is upper bounded by $\exp\left[\mathcal O (\Delta^{-1}L^{1+2/p}\delta^{1/q})+\mathcal O(ML^{1/p})\right]$. \eqref{Eq:Ineq2MeasEst} follows from the fact that \begin{align} \sum_{i=0}^{t}\sum_{k\in\mathcal K_i}\frac{{F_\varepsilon}^i_*\mu_0(W_{i,k})}{t+1}=\sum_{i=0}^{t}\frac{{F_\varepsilon}^i_*\mu_0({F_\varepsilon}^i(W_0))}{t+1}=\sum_{i=0}^{t}\frac{1}{t+1}=1.\label{Eq:ProbSum1} \end{align} Since the bound is true for every $t\in\mb N_0$, it is also true for the weak limit $\overline \mu$ \[ \overline \mu(\widetilde {\mathcal B}_\varepsilon^{(s,j)})\leq\exp\left[\mathcal O (\Delta^{-1}L^{1+2/p}\delta^{1/q})+\mathcal O(ML^{1/p})\right]m_{L+M_u}({\mathcal B}_{\varepsilon_1}^{(s,j)}\times\mathbb T^{M_u}) \] and, since $\overline \mu$ is invariant, for all $t\in\mb N$ \[ \overline \mu\left({F_\varepsilon}^{-t}(\widetilde {\mathcal B}_\varepsilon^{(s,j)})\right)\leq\exp\left[\mathcal O (\Delta^{-1}L^{1+2/p}\delta^{1/q})+\mathcal O(ML^{1/p})\right]m_{L+M_u}({\mathcal B}_{\varepsilon_1}^{(s,j)}\times\mathbb T^{M_u}). \] From \eqref{Eq:ProbSum1} there exist $i\in\mb N$ and $k\in\mathcal K_i$ such that \[ \mu_{i,k}\left({F_\varepsilon}^{-t}(\widetilde {\mathcal B}_\varepsilon^{(s,j)})\right)\leq \overline \mu\left({F_\varepsilon}^{-t}(\widetilde {\mathcal B}_\varepsilon^{(s,j)})\right) \] and thus \[ m_{W_{i,k}}\left({F_\varepsilon}^{-t}(\widetilde {\mathcal B}_\varepsilon^{(s,j)})\right)\leq\exp\left[\mathcal O (\Delta^{-1}L^{1+2/p}\delta^{1/q})+\mathcal O(ML^{1/p})\right]m_{L+M_u}({\mathcal B}_{\varepsilon_1}^{(s,j)}\times\mathbb T^{M_u}). \] Now, pick $y_s\in \mathcal R$ and consider the holonomy map along the stable leaves $\pi:W_{i,k}\rightarrow D_{y_s}$ between the transversals $W_{i,k}$ and $D_{y_s}=\mathbb T^{L+M_u}\times\{y_s\}\subset \mathcal C^u$. We know from Proposition \ref{Prop:JacEstBnd} that the Jacobian of $\pi$ is bounded by \eqref{Eq:JacEstUppBnd} and thus \[ m_{D_{y_s}}\left({F_\varepsilon}^{-t}(\widetilde {\mathcal B}_\varepsilon^{(s,j)})\right)\leq \exp\left[\mathcal O (\Delta^{-1}L^{1+2/p}\delta^{1/q})+\mathcal O(ML^{1/p})\right]m_{L+M_u}({\mathcal B}_{\varepsilon_1}^{(s,j)}\times\mathbb T^{M_u}). 
\] The above holds for every $y_s\in \mathcal R$, and so by Fubini \[ m_N\left({F_\varepsilon}^{-t}(\widetilde {\mathcal B}_\varepsilon^{(s,j)})\right)\leq \exp\left[\mathcal O (\Delta^{-1}L^{1+2/p}\delta^{1/q})+\mathcal O(ML^{1/p})\right]m_{L+M_u}({\mathcal B}_{\varepsilon_1}^{(s,j)}\times\mathbb T^{M_u}), \] and from the first inclusion in \eqref{Eq:IncSetUnMan} we obtain \[ m_N\left({F_\varepsilon}^{-t}({\mathcal B}_\varepsilon^{(s,j)})\right)\leq \exp\left[\mathcal O (\Delta^{-1}L^{1+2/p}\delta^{1/q})+\mathcal O(ML^{1/p})\right]m_{L+M_u}({\mathcal B}_{\varepsilon_1}^{(s,j)}\times\mathbb T^{M_u}). \] \end{proof} \subsection{Proof of Theorem \ref{Thm:Main}}\label{Sec:AttFullStatProof} In this section ${F_\varepsilon}:\mathbb T^{N}\rightarrow\mathbb T^{N}$ denotes again the truncated map defined on the whole phase space. Define the uncoupled map $\boldsymbol f:\mathbb T^{N}\rightarrow\mathbb T^{N}$ \[ \boldsymbol f(x_1,...,x_L,y_1,...,y_M):=(f(x_1),...,f(x_L),g_1(y_1),...,g_M(y_M)). \] The next lemma evaluates the ratios of the Jacobians of ${F_\varepsilon}^t$ and $\boldsymbol f^t$ for any fixed $t\in\mb N$. \begin{lemma}\label{Lem:EvVolCoupUnc} \[ \frac{|D_z\boldsymbol f^t|}{|D_z{F_\varepsilon}^t|}\leq \exp\left[\mathcal O(ML\Delta^{-1}\delta)+\mathcal O(M^2)\right] \] \end{lemma} \begin{proof} For all $i\in [t]$ define $z_i:=\boldsymbol f^i(z)$, $\overline z_i:={F_\varepsilon}^i(z)$, and $z_0=\overline z_0:=z$. \begin{align*} \frac{|D_z\boldsymbol f^t|}{|D_z{F_\varepsilon}^t|}=\frac{\prod_{k=0}^t|D_{z_k}\boldsymbol f|}{\prod_{k=0}^t|D_{\overline z_k}{F_\varepsilon}|}=\prod_{k=0}^t\frac{\sigma^L\prod_{m=1}^MD_{y_{k,m}}g_m}{|D_{\overline z_k}{F_\varepsilon}|}&=\prod_{k=0}^t\frac{\sigma^L\prod_{m=1}^MD_{\overline y_{k,m}}g_m\left(1+\frac{D_{y_{k,m}}g_m-D_{\overline y_{k,m}}g_m}{D_{\overline y_{k,m}}g_m}\right)}{|D_{\overline z_k}{F_\varepsilon}|}\\ &\leq \exp\left[\mathcal O(M)\right] \prod_{k=0}^t\frac{1}{|\mathcal D(\overline z_k)|} \end{align*} where $\mathcal D(z_i)$ is defined as in \eqref{Eq:DExpression}. $1/|\mathcal D(\overline z_i)|$ can be estimated in the usual way defining $B(z_i):=\mathcal D(z_i)-\Id$ and noticing that $1=|\Id+0|$. One can obtain, from the computations leading to \eqref{Eq:RatDestimate}, that \[ \frac{1}{|\mathcal D(\overline z_i)|}\leq \exp\left[\sum_{k=1}^{N}\Col^k[B(z_i)]\right]\leq \exp\left[\mathcal O(L\Delta^{-1}\delta)+\mathcal O(M)\right]. \] and thus \[ \frac{|D_z\boldsymbol f^t|}{|D_z{F_\varepsilon}^t|}\leq \exp\left[\mathcal O(M)\right]\exp\left[\mathcal O(L\Delta^{-1}\delta)+\mathcal O(M)\right]\leq\exp\left[\mathcal O(L\Delta^{-1}\delta)+\mathcal O(M)\right]. \] \end{proof} \begin{lemma}\label{Lem:OneDimDyn} Consider the set $A^2(\mathbb T,\mathbb T)$ of $C^2$ Axiom A endomorphisms on $\mathbb T$ endowed with the $C^1$ topology. Take a continuous curve $\gamma:[\alpha_1,\alpha_2]\rightarrow A^2(\mathbb T,\mathbb T)$. 
Then, denoting by $\Lambda^\alpha$ and $\Upsilon^\alpha$ respectively the attractor and repellor of $\gamma_\alpha$ for all $\alpha\in[\alpha_1,\alpha_2]$, \begin{itemize} \item[(i)] there exist uniform ${\varepsilon_\Lambda}>0$ and $\lambda\in(0,1)$ such that \begin{equation}\label{Eq:UnifParaAxCur} \left|\gamma_\alpha'|_{\Lambda^\alpha_{\varepsilon_\Lambda}}\right|<\lambda\quad\mbox{and}\quad\left|\gamma_\alpha'|_{\Upsilon^\alpha_{\varepsilon_\Lambda}}\right|>\lambda^{-1}, \end{equation} \item[(ii)] there are uniform $r>0$ and $\tau \in\mb N$ such that for all $\alpha\in[\alpha_1,\alpha_2]$, all sequences $\{\varepsilon_i\}_{i=0}^{\tau-1}$ with $\varepsilon_i\in(-r,r)$ and all points $x\in\mathbb T\backslash{\Upsilon^{\alpha}_{{\varepsilon_\Lambda}}}$, the orbit $\{x_i\}_{i=0}^{\tau}$ defined by \begin{equation}\label{Eq:DefOrbRand} x_0:=x\quad\mbox{and}\quad x_i:=\gamma_\alpha(x_{i-1})+\varepsilon_i, \end{equation} satisfies $x_\tau\in \Lambda^\alpha_{\varepsilon_\Lambda}$. \end{itemize} \end{lemma} \begin{proof} The above lemma is quite standard \cite{MS} and can be easily proved by considering the sets \[ \bigcup_{\alpha\in [\alpha_1,\alpha_2]}\{\alpha\}\times\Lambda^\alpha\subset[\alpha_1,\alpha_2]\times\mathbb T\quad \mbox{and}\quad\bigcup_{\alpha\in[\alpha_1,\alpha_2]}\{\alpha\}\times\Upsilon^\alpha\subset[\alpha_1,\alpha_2]\times \mathbb T \] and noticing that they are compact. Then, from the continuity of $\gamma$ in the $C^1$ topology and the Axiom A assumption, it follows that all the stated quantities can be chosen uniformly. \end{proof} \begin{proof}[Proof of Theorem \ref{Thm:Main}] \textbf{Step 1} Restricting ${F_\varepsilon}$ to $\mathcal S$, we can use Proposition \ref{Prop:BadSetMeasAtt} to get an estimate of the Lebesgue measure of ${{\mathcal B}_{\varepsilon,T}\times \mathbb T^{M_u}\times \mathcal R}$. Define \[ {\mathcal B}_{\varepsilon, T,\tau}:=\bigcup_{t=0}^{\tau}{F_\varepsilon}^{-t}\left({\mathcal B}_{\varepsilon,T}\times\mathbb T^{M_u}\times \mathcal R\right)\cap \mathcal S. \] To determine the Lebesgue measure of this set we compare it with the Lebesgue measure of \[ {\mathcal B}'_{\varepsilon, T,\tau}:=\bigcup_{t=0}^{\tau}\boldsymbol f^{-t}\left({\mathcal B}_{\varepsilon,T}\times\mathbb T^{M_u}\times \mathcal R\right)\cap\mathcal S. \] For all $y\in \mathbb T^M$, the map $\boldsymbol f|_{\mathbb T^{L}\times\{y\}}:\mathbb T^{L}\times\{y\}\rightarrow \mathbb T^{L}\times\{(g_1(y_1),...,g_M(y_M))\}$ is an expanding map with constant Jacobian and thus measure preserving if we endow $\mathbb T^{L}\times\{y\}$ and $\mathbb T^{L}\times\{(g_1(y_1),...,g_M(y_M))\}$ with the induced Lebesgue measure. Fubini's theorem implies that for all $t\in[\tau]$ \[ {m_N(\boldsymbol f^{-t}({\mathcal B}_{\varepsilon,T}\times\mathbb T^{M_u}\times \mathcal R)\cap \mathcal S)}\leq C(\tau) \frac{m_N({\mathcal B}_{\varepsilon,T}\times\mathbb T^{M_u}\times \mathcal R)}{m_N(\mathcal S)} \] where $C(\tau)$ is a constant depending on $\tau$ and uniform in the network parameters. Thus \begin{align*} m_N(\boldsymbol f^{-t}({\mathcal B}_{\varepsilon,T}\times\mathbb T^{M_u}\times \mathcal R)\cap \mathcal S)\leq C_\#m_N({\mathcal B}_{\varepsilon,T}\times\mathbb T^{M_u}\times \mathcal R). \end{align*} Now \[ m_N\left({F_\varepsilon}^{-t}({\mathcal B}_{\varepsilon,T}\times\mathbb T^{M_u}\times \mathcal R)\cap \mathcal S\right)\leq m_N\left(\boldsymbol f^{-t}({\mathcal B}_{\varepsilon,T}\times\mathbb T^{M_u}\times \mathcal R)\cap \mathcal S\right)\sup_{z\in\mathbb T^{N}}\frac{|D_z\boldsymbol f^t|}{|D_z{F_\varepsilon}^t|}. 
\] By Lemma \ref{Lem:EvVolCoupUnc}, assuming that $\tau\leq T$, we get \begin{align*} m_N({\mathcal B}_{\varepsilon,T,\tau})&\leq \sum_{t=0}^{\tau} \exp\left[\mathcal O(L\delta\Delta^{-1})+\mathcal O(M)\right] C_\#m_N({\mathcal B}_{\varepsilon,T}\times\mathbb T^{M_u}\times \mathcal R)\\ &\leq T\exp\left[-\mathcal O(\Delta^{-1})\varepsilon^2+\mathcal O(\Delta^{-1}L^{1+2/p}\delta)+\mathcal O(ML^{1/p})\right]. \end{align*} \textbf{Step 2} Define the set $\mathcal U\subset \mathbb T^{N}$ as \[ \mathcal U:=\mathbb T^{L+M_u}\times \Upsilon^{M_u+1}_{{\varepsilon_\Lambda}}\times...\times\Upsilon^{M}_{{\varepsilon_\Lambda}}. \] Consider the system $G:\mathbb T^{N}\rightarrow\mathbb T^{N}$ obtained by redefining ${F_\varepsilon}$ on $\mathcal U^c$ so that if $(x',y')=G(x,y)$, $\pi_u\circ G(x,y)=\pi_u\circ {F_\varepsilon}(x,y)$ (the evolution of the \say{expanding} coordinates is unvaried) and \[ y'_j=\hat f_j(y_j)+\alpha\sum_pg\left (\frac{1}{\Delta}\sum_nA^{hl}_{jn}\theta_s(x_n)-\kappa_j\overline\theta_s\right)\upsilon_s(y_j)+\frac{\alpha}{\Delta}\sum_{m=1}^MD_{jm}h(y_j,y_m)\mod 1 \] where the reduced dynamics is (smoothly) modified to be globally expanding by putting $\hat f_j|_{ \Upsilon^{j}_{{\varepsilon_\Lambda}}}:=g_j|_{ \Upsilon^{j}_{{\varepsilon_\Lambda}}}$ and redefining $\hat f_j|_{\mathbb T\backslash \Upsilon^{j}_{{\varepsilon_\Lambda}}}$ so that $|D\hat f_j|\geq\lambda^{-1}>1$ everywhere on $\mathbb T$. Evidently $G|_{\mathcal U}={F_\varepsilon}|_{\mathcal U}$. We can then invoke the results of Section \ref{Sec:ExpRedMapsGlob} to impose conditions on $\eta$ and $\varepsilon$ to deduce global expansion of the map $G$ (under suitable heterogeneity hypotheses) and the bounds on the invariant density obtained in that section. In particular, one has that for all $T\in \mb N$ \[ m_N\left(\bigcup_{t=0}^T G^{-t}({\mathcal B}_\varepsilon\times \mathbb T^{M})\right)\leq T\exp\left\{-\Delta\varepsilon^2/2+\mathcal O(\Delta^{-1}N^{1+2/p}\delta^{1/q})+\mathcal O(MN^{1/p})\right\} \] and this implies \[ m_N\left(\bigcup_{t=0}^T G^{-t}({\mathcal B}_\varepsilon\times\mathbb T^{M})\bigcup {\mathcal B}_{\varepsilon,\tau,T}\right)\leq 2T\exp\left\{-\Delta\varepsilon^2/2+\mathcal O(\Delta^{-1}L^{1+2/p}\delta^{1/q})+\mathcal O(ML^{1/p})\right\}. \] This concludes the proof of the theorem. \end{proof} \subsection{Mather's Trick and Proof of Theorem~\ref{Thm:Main} when $n\neq 1$} \label{subsec:mather} Until now we have assumed that the reduced maps $g_j$ satisfy Definition~\ref{Def:AxiomA} with $n=1$. We now show that any $n\in\mb N$ will work by constructing an adapted metric via what is known as \say{Mather's trick} (see Lemma 1.3 in Chapter 3 of \cite{MS} or \cite{hirsch2006invariant}). \begin{lemma} It is enough to prove Theorem~\ref{Thm:Main} for $n=1$. \label{lem:n=1} \end{lemma} \begin{proof} Assume that the $g_j$, $j=1,\dots,M$, satisfy the assumptions in Definition~\ref{Def:AxiomA} for some $(n,m,\lambda,r)$. Conditions (2) and (3) imply that one can smoothly conjugate each of the maps $g_j$ so that $|D_xg_j|<\lambda$ for all $x\in N_r(\Lambda_j)$, and $|D_xg_j|>\lambda^{-1}$ for all $x\in N_r(\Upsilon_j)$. These conjugations are obtained by changing the metric (\say{Mather's trick}). In other words, there exists a smooth coordinate change $\varphi_j\colon \mathbb T \to \mathbb T$ so that for $\widetilde g_j:=\varphi_j \circ g_j \circ \varphi_j^{-1}$ the properties from Definition~\ref{Def:AxiomA} hold for $n=1$. 
Moreover, there exists some uniform constant $C_\#$ only depending on $(n,\lambda,r)$, so that the $C^2$ norms of $\varphi_j$ and $\varphi_j^{-1}$ are bounded by $C_\#$. Writing $\widetilde y_j=\varphi_j(y_j)$ and $\widetilde z=(\widetilde z_1,\dots,\widetilde z_N)=(x_1,\dots,x_L,\widetilde y_1,\dots,\widetilde y_M)$, in these new coordinates \eqref{Eq:CoupDyn1}-\eqref{Eq:average} become \begin{align} x_i'&= f(x_i)+\frac{\alpha}{\Delta}\sum_{\ell=1}^L A^{ll}_{i\ell}h(x_i,x_\ell)+\frac{\alpha}{\Delta}\sum_{m=1}^M A^{lh}_{im}\widetilde h(x_i,\widetilde y_m) \mod 1& i=1,...,L\label{Eq:CoupDyn1"}\\ \widetilde y_j'&=\widetilde g_j (\widetilde y_j)+\widetilde \xi_{j}(\widetilde z) \quad \,\, \mod 1& j=1,...,M\label{Eq:CoupDyn2"} \end{align} where \begin{equation} \widetilde \xi_{j}(\widetilde z):=\int_{\widetilde g_j(\widetilde y_j)}^{\widetilde g_j(\widetilde y_j)+\xi_j}D_t\varphi_jdt,\quad\quad\mbox{and} \quad\quad \widetilde h(x,\widetilde y):=h(x,\varphi_j^{-1}(\widetilde y)). \label{eq:xijeps''} \end{equation} In fact \begin{align*} \widetilde y_j'=\varphi_j(y_j')=\varphi_j\left(g_j(y_j)+\xi_{j}\right)=\widetilde g_j(\widetilde y_j)+\int_{\widetilde g_j(\widetilde y_j)}^{\widetilde g_j(\widetilde y_j)+\xi_j}D_t\varphi_jdt. \end{align*} Then we can define $\widetilde \xi_{j,\varepsilon}$ as \[ \widetilde \xi_{j,\varepsilon}:= \int_{\widetilde g_j(\widetilde y_j)}^{\widetilde g_j(\widetilde y_j)+\xi_{j,\varepsilon}}D_t\varphi_jdt \] and define the truncated system as \begin{align} x_i'&= f(x_i)+\frac{\alpha}{\Delta}\sum_{\ell=1}^L A^{ll}_{i\ell}h(x_i,x_\ell)+\frac{\alpha}{\Delta}\sum_{m=1}^M A^{lh}_{im}\widetilde h(x_i,\widetilde y_m) \mod 1& i=1,...,L\\ \widetilde y_j'&=\widetilde g_j (\widetilde y_j)+\widetilde \xi_{j,\varepsilon}(\widetilde z) \quad \,\, \mod 1& j=1,...,M. \end{align} Since the $\varphi_j$ are $C^2$ with uniformly bounded $C^2$ norm, it immediately follows that $\widetilde \xi_{j,\varepsilon}$ satisfies all the properties satisfied by $\xi_{j,\varepsilon}$ listed in Lemma~\ref{Lem:XiProp}. Assuming that $|\widetilde \xi_{j,\varepsilon}(\widetilde z(t))|\le \xi$ for all $0\le t\le T$, we immediately obtain that $$|y_j'- g_j(y_j)| =\left |\varphi_j^{-1} [ \widetilde g_j (\varphi_j ( y_j))+\widetilde \xi_{j,\varepsilon}(\widetilde z)] - g_j(y_j)\right|\le \mathcal O(|\widetilde \xi_{j,\varepsilon}(\widetilde z)|) \le \mathcal O(\xi).$$ \end{proof} \subsection{Persistence of the Result Under Perturbations} The picture presented in Theorem~\ref{Thm:Main} is persistent under smooth random perturbations of the coordinates. Suppose that instead of the deterministic dynamical system $F:\mathbb T^N\rightarrow \mathbb T^N$ we have a stationary Markov chain $\{\mathcal F_t\}_{t\in\mb N}$ on some probability space $(\Omega,\mathbb P)$ with transition kernel \[ \mathbb P (\mathcal F_{n+1}\in A|\mathcal F_n=z):=\int_A\varphi(y-F(z))dy \] where $\varphi:\mathbb T^N\rightarrow\mb R^+$ is a density function. The Markov chain describes a random dynamical system where independent random noise distributed according to the density $\varphi$ is added at each iteration of $F$. Take now the stationary Markov chain $\{\mathcal F_{\varepsilon,t}\}_{t\in\mb N}$ defined by the transition kernel \[ \mathbb P (\mathcal F_{\varepsilon,n+1}\in A|\mathcal F_{\varepsilon,n}=z):=\int_A\varphi(y-F_\varepsilon(z))dy \] where we consider the truncated system instead of the original map in the deterministic drift of the process and restrict, for example, to the case where $F_\varepsilon$ is uniformly expanding. 
The associated transfer operator can be written as $\mathcal P_\varepsilon=P_\varphi\circ P_\varepsilon$ where $P_\varepsilon$ is the transfer operator for $F_\varepsilon$ and \[ (P_\varphi\rho)(x)=\int\rho(y)\varphi(y-x)dy. \] Let $C_{a,p}$ be a cone of densities invariant under $P_\varepsilon$ as prescribed in Proposition~\ref{Prop:ConInv}. It is easy to see that this is also invariant under $P_\varphi$ and thus under $\mathcal P_\varepsilon$. In fact, take $\rho\in C_{a,p}$. Then \begin{align*} \frac{(P_{\varphi}\rho)(z)}{(P_{\varphi}\rho)(\overline z)}=\frac{\int\rho(y)\varphi(y-z)dy}{\int\rho(y)\varphi(y-\overline z)dy}=\frac{\int\rho(z-y)\varphi(y)dy}{\int\rho(\overline z-y)\varphi(y)dy}\leq\frac{\int\rho(\overline z-y)\exp\{a d_p(z,\overline z)\}\varphi(y)dy}{\int\rho(\overline z-y)\varphi(y)dy}=\exp\{a d_p(z,\overline z)\}. \end{align*} This means that there exists a stationary measure for the chain with density belonging to $C_{a,p}$ and that the same estimates we have in Section~\ref{Sec:ExpRedMapsGlob} for the measure of the set $\mathcal B_\varepsilon$ hold. This allows us to conclude that the hitting times to the set $\mathcal B_\varepsilon$ satisfy the same type of bound as in the proof of Theorem~\ref{Thm:Main}. Notice that the above is independent of the choice of the density $\varphi$ of the noise. This implies that all the arguments continue to hold independently of the size of the noise, which, however, degrades the low-dimensional approximation for the hubs by randomly perturbing it. \section{Conclusions and Further Developments}\label{Sec:Conc} Heterogeneously Coupled Maps (HCM) are ubiquitous in applications. Because of the heterogeneous structure and lack of symmetries in the graph, most previously available results and techniques cannot be directly applied to this situation. Even if the behaviour of the local maps is well understood, once they are coupled in a large network, a rigorous description of the system becomes a major challenge, and numerical simulations are used to obtain information on the dynamics. The ergodic theory of high dimensional systems presents many difficulties, including the choice of the reference measure and the dependence of the decay of correlations on the system size. We exploited the heterogeneity to obtain rigorous results for HCM. Using an ergodic description, the dynamics of hubs can be predicted from knowledge of the local dynamics and coupling function. This makes it possible to obtain quantitative theoretical predictions of the behaviour of complex networks. Thereby, we establish the existence of a range of dynamical phenomena quite different from the ones encountered in homogeneous networks. This highlights the need for new paradigms when dealing with high-dimensional dynamical systems with heterogeneous coupling. \noindent {\bf Synchronization occurs through a heat bath mechanism.} For certain coupling functions, hubs can synchronize, unlike poorly connected nodes which remain out of synchrony. The underlying synchronization process is not related to direct coupling between hubs, but comes via the coupling with the poorly connected nodes. So the hub {\em synchronization process} is through a mean-field effect (i.e. the coupling is {\em through a \say{heat bath}}). In HCM synchronization depends on the connectivity layer (see Subsection~\ref{subsec:predictions+experiments}). We highlighted this feature in networks with three types of hubs having distinct degrees. 
\noindent {\bf Synchronization in random networks - HCM versus Homogeneous.} Theorem~\ref{MTheo:C} shows that synchronization occurs in random homogeneous networks, but is rare in HCM (see Appendix~\ref{App:RandGrap}). Recent work (for example \cite{Gol-Stewart98}) showed that structure influences dynamics. What Theorem B shows is that it is not strict symmetry, but (probabilistic) homogeneity, that makes synchronization possible. In contrast, in the presence of heterogeneity the dynamics changes according to connectivity layers. \noindent {\bf Importance of Long Transients in High Dimensional Systems.} Section \ref{Sec:StarNetExamp} shows how certain behaviour can be sustained by a system only for a finite time $T$; as it turns out, $T$ is exponentially large in the size of the network and thus greater than any feasible observation time. The issue of such long transient times naturally arises in high dimensional systems. For example, given an $N-$fold product of the same expanding map $f$, densities evolve asymptotically to the unique SRB measure exponentially fast, but the rate depends on the dimension and becomes arbitrarily slow as $N\rightarrow \infty$. Take $f$ an expanding map and define \[ \boldsymbol f:=f\times\dots\times f. \] Suppose $\nu$ is the invariant measure of $f$ that is absolutely continuous with respect to some reference measure $m$, with $\nu\neq m$. Then the push forward $\boldsymbol f^t_*(m^{\otimes N})=(f_*^tm)^{\otimes N}$ will converge exponentially fast in some suitable product norm to $\nu^{\otimes N}$ because this is true for each factor separately. However, by choosing $N$ large the rate can be made arbitrarily slow, and in the limit of infinite $N$ the measures $\boldsymbol f^t_*(m^{\mb N})$ and $\nu^{\mb N}$ are mutually singular for all $t\in\mb N$. This means that, in practice, when pushing forward an absolutely continuous initial measure with the dynamics, it might take a very long time to relax to the SRB measure even if the system is hyperbolic. This suggests that in order to accurately describe HCM and high-dimensional systems, it is necessary to understand the {\em dependence} of all relevant quantities and bounds {\em on the dimension}. This is often disregarded in the classical literature on ergodic theory. \subsection{Open problems and new research directions} With regard to HCM some problems remain open. \begin{enumerate} \item In Theorem~\ref{Thm:Main} we assumed that the local map $f$ in our model is Bernoulli and that all the non-linearity within the model is contained in the coupling. This assumption makes it easier to control distortion estimates as the dimension of the network increases. For example, without this assumption, the density of the invariant measure in the expanding case, see Section~\ref{Sec:ExpRedMapsGlob}, becomes highly irregular as the dimension increases. \noindent \textbf{Problem:} \emph{obtain the results in Theorem~\ref{Thm:Main} when $f$ is a general uniformly expanding circle map in $C^{1+\nu}$ with $\nu\in(0,1)$.} \item In Theorem \ref{Thm:Main} we gave a description of orbits for finite time until they hit the set $\mathcal B_\varepsilon$ where the fluctuations are above the threshold and the truncated system $F_\varepsilon$ differs from the map $F$. \noindent \textbf{Problem:} \emph{describe what happens after an orbit enters the set $\mathcal B_\varepsilon$. In particular, find how much time orbits need to escape $\mathcal B_\varepsilon$ and how long it takes for them to return to this set. 
} \noindent \item In the proof of Theorems~\ref{Thm:Main} and \ref{MTheo:C} we assume that the reduced dynamics $g_j$, in Eq. \eqref{Eq:RedEqInt}, of each hub node $j\in\{1,...,M\}$ is uniformly hyperbolic. \noindent \textbf{Problem:} \emph{find an argument, valid under sufficiently weak assumptions, that describes the case where some of the reduced maps $g_j$ have non-uniformly hyperbolic behaviour, for example, when they have a neutral fixed point.} In fact, hyperbolicity is a generic condition in dimension one, see \cite{MR3336841,MR2342693}, but not in higher dimensions. An answer to this question would be desirable even in the one-dimensional case, but especially when treating multi-dimensional HCM. \end{enumerate} \noindent The study of HCM and the approach used in this paper also raise more general questions such as: \begin{enumerate} \item \textbf{Problem:} \emph{is the SRB measure supported on the attractor of $F_\varepsilon$ absolutely continuous with respect to the Lebesgue measure $m_N$ on the whole space?} Tsujii proved in \cite{MR1862809} absolute continuity of the SRB measure for a non-invertible two-dimensional skew product system. Here the main challenges are that the system does not have a skew-product structure, and that the perturbation with respect to the product system depends on the dimension. \item Chimera states refer to \say{heterogeneous} behaviour observed (in simulations and experiments) on homogeneous networks, see \cite{abrams2004chimera}. The emergence of such states is not yet completely understood, but they are widely believed to be associated with long transients. \textbf{Problem:} \emph{does the approach of the truncated system shed light on Chimera states?} \end{enumerate} \begin{appendices} \section{Estimates on the Truncated System}\label{App:TruncSyst} \begin{Hoefd} Suppose that $(X_i)_{i\in\mb N}$ is a sequence of bounded independent random variables on a probability space $(\Omega,\Sigma,\mathbb P)$, and suppose that there exist $a_i<b_i$ such that $X_i(\omega)\in[a_i,b_i]$ for all $\omega\in\Omega$, then $$ \mathbb P\left(\left|\frac{1}{n}\sum_{i=1}^{n}X_i-\mathbb E_{\mathbb P}\left[\frac{1}{n}\sum_{i=1}^{n}X_i\right]\right|\geq \tea \right)\leq2\exp\left[-\frac{2n^2\tea^2}{\sum_{i=1}^n(b_i-a_i)^2}\right] $$ for all $\tea>0$ and $n\in\mb N$. \end{Hoefd} \begin{proof}[Proof of Proposition \ref{Prop:UppBndBLeb}] Hoeffding's inequality can be directly applied to the random variables defined on $(\mathbb T^{L},\mathcal B, m_L)$ by $X_i:=\theta_{s_1}\circ \pi^i(x)$ where $\pi^i:\mathbb T^{L}\rightarrow \mathbb T$ is the projection on the $i$-th coordinate ($1\leq i\leq L$). These are in fact independent by construction and bounded, since the $\{\theta_{s_1}\}_{s_1\in\mathbb Z}$ are trigonometric functions with values in $[-1,1]$. Consider the set \[ \mathcal B_\varepsilon=\bigcup_{j=1}^{M}\bigcup_{s_1\in\mathbb Z}\mathcal B_\varepsilon^{(s_1,j)} \] with $\mathcal B_\varepsilon^{(s_1,j)}$ defined as in \eqref{Eq:DefBadSetComp}. Notice that for $s_1=0$ the fluctuation vanishes identically (as $\theta_0$ is constant), so $\mathcal B_\varepsilon^{(0,j)}=\emptyset$. Since $\kappa_j\Delta=d_j$, we can rewrite $\mathcal B_\varepsilon^{(s_1,j)}$ as \[ \mathcal B_\varepsilon^{(s_1,j)}=\left\{x\in\mathbb T^{L}:\left|\frac{1}{d_j}\sum_{i=1}^L A_{ji}\theta_{s_1}(x_i)-\overline\theta_{s_1}\right|>\frac{\varepsilon}{\kappa_j}|s_1| \right\}. \] Since $d_j$ is the number of non-vanishing terms in the sum, the above is the measurable set where the empirical average of $d_j$ i.i.d.\ bounded random variables exceeds their common expectation by more than $\varepsilon|s_1| /\kappa_j$. 
Since we are under the hypotheses of the above Hoeffding inequality, we can estimate the measure of this set as \begin{equation} m_L(\mathcal B_\varepsilon^{(s_1,j)})\leq 2\exp\left[-\frac{d_j^2\varepsilon^2|s_1|^{2}}{d_j2\kappa_j^2}\right] = 2\exp\left[-\frac{\Delta\varepsilon^2|s_1|^{2}}{2\kappa_j}\right] \label{Beps} \end{equation} and this gives \[ m_L(\mathcal B_\varepsilon)\leq\sum_{j=1}^{M}\sum_{s_1\in\mathbb Z\backslash\{0\}} m_{L}(\mathcal B_\varepsilon^{(s_1,j)})\leq 2M\sum_{s_1\in\mathbb Z\backslash\{0\}}\exp\left[-\frac{\Delta\varepsilon^2}{2}|s_1|\right]\leq 4M\frac{\exp\left[-\frac{\Delta\varepsilon^2}{2}\right]}{1-\exp\left[-\frac{\Delta\varepsilon^2}{2}\right]} \] since $\kappa_j<1$ and $|s_1|^2\geq|s_1|$, which concludes the proof of the proposition. \end{proof} We now give an expression for $DF_\varepsilon$. Using \eqref{Eq:CoupDyn1'} and \eqref{Eq:CoupDyn2'}, writing as before $z=(x,y)$ and noting that $z_k=y_{k-L}$ for $k>L$, we get \begin{equation}\label{Eq:DiffAuxMap} [D_{(x,y)}F_\varepsilon]_{k\ell}=\left\{\begin{array}{lr} D_{x_k} f+\frac{\alpha}{\Delta}\sum_{n=1}^N A_{kn}h_1(x_k,z_n) & k=\ell \leq L,\\ \frac{\alpha}{\Delta} A_{k\ell}h_2(x_k,z_\ell)& k\neq \ell, k \leq L,\\ \partial_{x_\ell}\xi_{k-L,\varepsilon}& k>L, \ell\leq L\\ \frac{\alpha}{\Delta}A_{k\ell}h_2(y_{k-L},y_{\ell-L})&k\neq \ell > L,\\ D_{y_{k-L}} g_{k-L}+\partial_{y_{k-L}}\xi_{k-L,\varepsilon}&k= \ell>L. \\ \end{array}\right. \end{equation} Here $h_1$ and $h_2$ stand for the partial derivatives of the function $h$ with respect to the first and second coordinate respectively, and we suppressed some of the functional dependences so as not to clutter the notation further. The following lemma summarises the properties of $\xi_{j,\varepsilon}$ that will yield good hyperbolic properties for $F_\varepsilon$. \begin{lemma}\label{Lem:XiProp} The functions $\xi_{j,\varepsilon}:\mathbb T^N\rightarrow \mb R$ defined in Eq. \eqref{eq:xijeps} satisfy \begin{itemize}\item[(i)] \[ |\xi_{j,\varepsilon}|\leq C_\#(\varepsilon+\Delta^{-1}M) \] where $C_\#$ is a constant depending only on $\sigma$, $h$, and $\alpha$. \item[(ii)] \begin{align*} \left|\partial_{z_n}\xi_{j,\varepsilon}\right|\leq\left\{ \begin{array}{cr} \mathcal O(\Delta^{-1})A_{jn} &n\leq L\\ C_\#\varepsilon+\mathcal O(\Delta^{-1}M)& n=j+L\\ \mathcal O(\Delta^{-1})A_{jn}&n>L,\mbox{ }n\neq j+L. \end{array}\right. \end{align*} \item[(iii)] for all $z,\overline z\in\mathbb T^N$ \begin{align*} \left|\partial_{z_n}\xi_{j,\varepsilon}(z)-\partial_{z_n}\xi_{j,\varepsilon}(\overline z)\right|\leq\left\{ \begin{array}{cr} \mathcal O(\Delta^{-1})A_{jn}d_{\infty}(z,\overline z) &n\leq L\\ \mathcal O(1)+\mathcal O(\Delta^{-1}M)d_{\infty}(z,\overline z)& n=j+L\\ \mathcal O(\Delta^{-1})A_{jn}d_{\infty}(z,\overline z)&n>L,\mbox{ }n\neq j+L. \end{array}\right. \end{align*} \end{itemize} \end{lemma} \begin{proof} The proof of (i) follows from the estimates \begin{align*} |\xi_{j,\varepsilon}|&\leq C_\#\left(\sum_{s\in\mathbb Z^2}c_s\varepsilon|s_1| +\Delta^{-1}M\right)\\ &\leq C_\#(\varepsilon+\Delta^{-1}M) \end{align*} where we used that the sum is absolutely convergent. \noindent To prove $(ii)$ notice that for $n\leq L$ \begin{align}\label{Eq:PArtDerXi} \partial_{z_n}\xi_{j,\varepsilon}(z)=\partial_{x_n}\xi_{j,\varepsilon}(z)=\alpha\sum_{s\in\mathbb Z^2}c_s D_{(\cdot)}\zeta_{\varepsilon|s_1| }\frac{A_{jn}}{\Delta}D_{x_n}\theta_{s_1} \end{align} and $|D_{x_n}\theta_{s_1}|\leq 2\pi |s_1|$, so the bound follows from the fast decay rate of the Fourier coefficients. 
For $n=j+L$ \begin{align*} |\partial_{z_{j+L}}\xi_{j,\varepsilon}(z)|=|\partial_{y_j}\xi_{j,\varepsilon}(z)|&=\left|\alpha\sum_{s\in\mathbb Z^2}c_s \zeta_{\varepsilon |s_1|}\left(\cdot \right)D_{y_j}\upsilon_{s_2}+\sum_{n=1}^{M}\frac{\alpha}{\Delta} A^h_{jn}\partial_{y_j}h(y_j,y_n)\right|\\ &\leq \varepsilon C_\# \sum_{s\in\mathbb Z^2}|c_s||D_{y_j}\upsilon_{s_2}||s_1| +\mathcal O(\Delta^{-1}M). \end{align*} Again the decay of the Fourier coefficients yields the desired bound. For $n>L$ and different from $j+L$ it is trivial. Point (iii) for $n\neq L+j$ follows immediately from expression \eqref{Eq:PArtDerXi} and by the decay of the Fourier coefficients. For $n=j+L$ \begin{align*} |\partial_{y_j}\xi_{j,\varepsilon}(z)-\partial_{y_j}\xi_{j,\varepsilon}(\overline z)|&\leq\left|\alpha\sum_{s\in\mathbb Z^2}c_s \varepsilon |s_1| \left[D_{y_j}\upsilon_{s_2}-D_{\overline y_j}\upsilon_{s_2}\right]+\sum_{n=1}^{M}\frac{\alpha}{\Delta} A^h_{jn}\left[\partial_{y_j}h(y_j,y_n)-\partial_{y_j}h(\overline y_j,\overline y_n)\right]\right|\\ &\leq \mathcal O(1+\Delta^{-1}M)d_{\infty }(z,\overline z). \end{align*} Notice that to obtain the last step we need the sequence $\{c_{\boldsymbol s}|s_1||s_2|^3\}$ to be summable. In particular, \[ c_{\boldsymbol s}\leq \frac{c_\#}{|s_1|^{2+b}|s_2|^{4+b}},\quad b>0 \] is a sufficient condition, ensured by picking $h\in C^{10}$. \end{proof} \section{Estimate on Ratios of Determinants}\label{Ap:TechComp} In the following proposition $\Col^k[M]$, with $M\in\mathcal M_{n\times n}$ a square matrix of dimension $n$, is the $k-$th column of the matrix $M$. \begin{proposition}\label{Prop:EstTool} Suppose that $\|\cdot\|_p:\mb R^n\rightarrow\mb R^+$ is the $p$-norm ($1\leq p\leq \infty$) on the Euclidean space $\mb R^n$. Take two square matrices $b_1$ and $b_2$ of dimension $n$. Suppose there is a constant $\lambda\in(0,1)$ such that \begin{equation}\label{Eq:Cond1Appb} \|b_i\|_p:=\sup_{\substack{v\in\mathbb R^n\\ \|v\|_p\leq 1}}\frac{\|b_iv\|_p}{\|v\|_p}\leq \lambda \quad \forall i\in \{1,2\}. \end{equation} Then \[ \frac{|\Id+b_1|}{|\Id+b_2|}\leq \exp\left\{\frac{\sum_{k=1}^{n}\|\Col^k [b_1-b_2]\|_p}{1+\lambda}\right\}. \] \end{proposition} \begin{proof} Given a matrix $M\in\mathcal M_{n\times n}$ it is a standard formula that \[ |M|=\exp[\Tr \log(M)]. \] \begin{align*} \frac{|\Id+b_1|}{|\Id+b_2|}&=\exp\left\{\sum_{\ell=1}^\infty\frac{(-1)^{\ell+1}}{\ell} \Tr[b_1^\ell-b_2^\ell]\right\}. \end{align*} Substituting the expression $$ b_1^\ell- b_2^\ell=\sum_{j=0}^{\ell-1}b_1^j(b_1-b_2)b_2^{\ell-j-1} $$ we obtain \begin{align} \Tr(b_1^\ell-b_2^\ell)&= \sum_{j=0}^{\ell - 1}\Tr(b_1^j(b_1-b_2)b_2^{\ell-j-1})\nonumber\\ &=\sum_{j=0}^{\ell-1}\Tr(b_2^{\ell-j-1}b_1^j( b_1-b_2))\nonumber\\ &\leq \sum_{j=0}^{\ell-1}\sum_{k=1}^{n} \|\Col^k[b_2^{\ell-j-1}b_1^j( b_1-b_2)]\|\label{passone} \end{align} where we used that the trace of a matrix is upper bounded by the sum of the $p-$norms of its columns (for any $p\in[1,\infty]$). Using conditions \eqref{Eq:Cond1Appb} we obtain \begin{align} \Tr(b_1^\ell-b_2^\ell)&\leq \ell \lambda^{\ell-1} \sum_{k=1}^{n}\|\Col^k[b_1- b_2]\|_p. \label{passotwo} \end{align} To conclude, \begin{align*} \frac{|\Id+b_1|}{|\Id+b_2|}&\leq\exp\left\{\sum_{\ell=1}^\infty(-1)^{\ell+1}\lambda^{\ell-1}\sum_{k=1}^{n}\|\Col^k[b_1- b_2]\|_p\right\} \\ &=\exp\left\{\frac{\sum_{k=1}^{n}\|\Col^k[b_1- b_2]\|_p}{1+\lambda}\right\}. \end{align*} \end{proof} \section{Transfer Operator}\label{Ap:TranOp} Suppose that $(M,\mathcal B)$ is a measurable space. 
Given a measurable map $F:M\rightarrow M$ define the \emph{push forward}, $F_*\mu$, of any (signed) measure $\mu$ on $(M,\mathcal B)$ by \[ F_*\mu(A):=\mu(F^{-1}(A)), \quad \forall A\in\mathcal B. \] The operator $F_*$ defines how mass distribution evolves on $M$ after application of the map $F$. Now suppose that a reference measure $m$ on $(M,\mathcal B)$ is given. The map $F$ is \emph{nonsingular} if $F_*m$ is absolutely continuous with respect to $m$ and we write it $F_*m\ll m$. If $F$ is nonsingular, given a measure $\mu\ll m$ then also $F_*\mu\ll m$. This means that one can define an operator \[ P:L^1(M,m)\rightarrow L^1(M,m) \] such that if $\rho\in L^1$ then $P\rho:=dF_*(\rho\cdot m)/dm$ where $\rho\cdot m$ is the measure with $d(\rho\cdot m)/dm=\rho$. In particular, if $\rho\in L^1$ is a mass density ($\rho\geq 0$ and $\int_M\rho dm$=1) then $P$ maps $\rho$ into the mass density obtained after application of $F$. One can prove that an equivalent characterization of $P$ is as the only operator that satisfies \[ \int_M\varphi \psi\circ F dm=\int_M P\varphi \psi dm,\quad \forall \psi\in L^\infty(M,m)\mbox{ and }\varphi\in L^1. \] This means that if, for example, $M$ is a Riemannian manifold and $m$ is its volume form and if $F$ is a local diffeomorphism then $P$ can be obtained from the change of variables formula as being \[ P\varphi(y)=\sum_{\{x:\mbox{ }F(x)=y\}}\frac{\varphi(x)}{\Jac F(x)} \] where $\Jac F(x)=\frac{d F_*m}{dm}(x)$. It follows from the definition of $P$ that $\rho\in L^1$ is an \emph{invariant density} for $F$ if and only if $P\rho=\rho$. \section{Graph Transform: Some Explicit Estimates}\label{Ap:GraphTrans} We go through once again the argument of the graph transform in the case of a cone-hyperbolic endomorphism of the $n-$dimensional torus. The scope of this result is to compute explicitly bounds on Lipschitz constants for the invariant set of admissible manifolds, and contraction rate of the graph transform (\cite{shub2013global,MR1326374}). Consider the torus $\mathbb T^n$ with the trivial tangent bundle $\mathbb T^n\times\mb R^n$. Suppose that $\|\cdot\|:\mb R^n\rightarrow\mb R$ is a constant norm on the tangent spaces, and that, with an abuse of notation, $\|x_1-x_2\|$ is the distance between $x_1,x_2\in\mathbb T^n$ induced by the norm. Take $n_u,n_s\in \mb N$ such that $n=n_s+n_u$, and $\Pi_s:\mb R^n\rightarrow\mb R^{n_s}$ $\Pi_u:\mb R^n\rightarrow \mb R^{n_u}$ projections for the decomposition $\mb R^n=\mb R^{n_u}\oplus \mb R^{n_s}$. Identifying $\mathbb T^{n}$ with $\mathbb T^{n_u}\times\mathbb T^{n_s}$, we call $\pi_s:\mathbb T^n\rightarrow\mathbb T^{n_s}$ and $\pi_u$ the projection on the respective coordinates. Take $F:\mathbb T^n\rightarrow \mathbb T^n$ a $C^2$ local diffeomorphism. We will also define $F_u:=\pi_u\circ F$ and $F_s:=\pi_s\circ F$. Suppose that it satisfies the following assumptions. 
There are constants $\beta_u,\beta_s>1$, $K_u>0$ and constant cone-fields \[ \mathcal C^u:=\left\{v\in\mb R^n:\quad{\|\Pi_uv\|}\geq \beta_u{\|\Pi_sv\|}\right\}\quad\mbox{and}\quad \mathcal C^s:=\left\{v\in\mb R^n:\quad{\|\Pi_sv\|}\geq \beta_s{\|\Pi_uv\|}\right\} \] such that: \begin{itemize} \item $\forall x\in\mathbb T^n$, $D_x F(\mathcal C^u)\subset \mathcal C^u(F(x))$ and $D_{F(x)}F^{-1}(\mathcal C^s(F(x)))\subset \mathcal C^s(x)$; \item there are real numbers $\lambda_1,\lambda_2,\mu_1,\mu_2\in\mb R^+$ such that \begin{align*} 0<\lambda_2&\leq\left\|D_x F|_{\mathcal C^s}\right\|\leq \lambda_1<1<\mu_1\leq \left\|D_x F|_{\mathcal C^u}\right\|\leq\mu_2; \end{align*} \item \[ \|D_{z_1}F-D_{z_2}F\|_u:=\sup_{v\in \mathcal C^u}\frac{\|(D_{z_1}F-D_{z_2}F)v\|}{\|v\|}\leq K_u\|z_1-z_2\| \] \end{itemize} From now on we denote $(x,y)\in\mathbb T^n$ a point in the torus with $x\in\mathbb T^{n_u}$ and $y\in\mathbb T^{n_s}$. Take $r>0$ and let $B_r^u(x)$ $B^s_r(y)$ be balls of radius $r$ in $\mathbb T^{n_u}$ and $\mathbb T^{n_s}$ respectively. Consider \[ C^1_{u}(B_r^u(x),B_r^s(y)):=\{\sigma:B_r^u(x)\rightarrow B_r^s(y)\mbox{ s.t. }\|D\sigma\|<\beta_u^{-1} \}. \] The condition above ensures that the graph of any $\sigma$ is tangent to the unstable cone. It is easy to prove invertibility of $\pi_u\circ F\circ (id,\sigma)|_{B_r^u(x)}$ for sufficiently small $r$, and it is thus well defined the graph transform \[ \Gamma:C^1_u(B_r^u(x),B_r^s(y))\rightarrow C^1_u(B_r^u(F_u(x,y)),B_r^s(F_s(x,y))) \] that takes $\sigma$ and maps it to $\Gamma\sigma$ with the only requirement that the graph of $\Gamma\sigma$, $(id,\Gamma\sigma)(B_r^u(F_u(x,y)))$, is contained in $F\circ(id,\sigma)(B_r^u(x))$. An expression for $\Gamma$ is given by \[ \Gamma\sigma:=[\pi_s\circ F\circ (id,\sigma)]\circ[\pi_u\circ F\circ (id,\sigma)]^{-1}|_{B_r^u(F_u(x,y))}. \] The fact that $\|D(\Gamma\sigma)\|\leq\beta_u^{-1}$ is a consequence of the invariance of $\mathcal C^u$. Now we prove a result that determines a regularity property for the admissible manifold which is invariant under the graph transform. \begin{proposition}\label{Prop:InvRegularityLip} Consider $\sigma \in C^1_{u,K}(B_r^u(x),B_r^s(y))\subset C^1_{u}(B_r^u(x),B_r^s(y))$ characterized as \[ \Lip(D\sigma)=\sup_{\substack{x'\neq y'\\ x',y'\in B_r^u(x)}}\frac{\|D_{x'}\sigma-D_{y'}\sigma\|}{\|x'-y'\|}\leq K. \] Then the graph transform $\Gamma$ maps $C^1_{u,K}(B_r^u(x),B_r^s(y))$ into $C^1_{u,K}(B_r^u(F_u(x,y)),B_r^s(F_s(x,y)))$ if \[ K>\frac{1}{1- \frac{\lambda_1}{\mu_1(1-\beta_u^{-1})}}\left(\frac{\mu_2}{\mu_1\lambda_2}K_u\lambda_1\frac{(1+\beta_u^{-1})}{(1-\beta_u^{-1})}+K_u\frac{(1+\beta_u^{-1})^2}{\mu_1(1-\beta_u^{-1})}\right) \] \end{proposition} \begin{proof} Take $z_1,z_2\in B_r^u(z)$, with $z=\pi_u\circ F\circ(id,\sigma)(x)$ and suppose that $x_1,x_2\in B_r^u(x)$ are such that $\pi_u\circ F\circ (id,\sigma)(x_i)=z_i$. Take $w\in\mb R^{n_u}$, and suppose that $v_1,v_2\in\mb R^{n_u}$ satisfy $\Pi_uD_{(x_i,\sigma(x_i))}F(v_i,D_{x_i}\sigma(v_i))=w$. 
Then \begin{align*} \|D_{z_1}(\Gamma\sigma)(w)-&D_{z_2}(\Gamma\sigma)(w)\|\leq\\&\leq \|D_{(x_1,\sigma(x_1))}F(v_1,D_{x_1}\sigma(v_1))-D_{(x_2,\sigma(x_2))}F(v_2,D_{x_2}\sigma(v_2))\|\\ &\leq\|D_{(x_1,\sigma(x_1))}F\left(v_1-v_2,D_{x_1}\sigma(v_1-v_2)\right)\|+\\ &\phantom{+}+\|D_{(x_1,\sigma(x_1))}F(0,D_{x_1}\sigma-D_{x_2}\sigma)v_2\|+\\ &\phantom{+}+\|D_{(x_1,\sigma(x_1))}F-D_{(x_2,\sigma(x_2))}F\|_u\|(v_2,D_{x_2}\sigma v_2)\|\\ &\leq\mu_2\|v_1-v_2\|+\lambda_1\|D_{x_1}\sigma-D_{x_2}\sigma\|\|v_2\|+K_u(1+\beta_u^{-1})\|x_1-x_2\|\|(v_2,D_{x_2}\sigma v_2)\|. \end{align*} Now \[ \|x_1-x_2\|\leq \lambda_1(1+\beta_u^{-1})\|z_1-z_2\| \] and \begin{align*} \|v_1-v_2\|&=\|v_1-\Pi_u(D_{(x_2,\sigma(x_2))}F)^{-1}D_{(x_1,\sigma(x_1))}F(\Id,D_{x_1}\sigma)(v_1)\|\\ &=\|\Pi_u(D_{(x_2,\sigma(x_2))}F)^{-1}(D_{(x_1,\sigma(x_1))}F-D_{(x_2,\sigma(x_2))}F)(\Id,D_{x_1}\sigma)(v_1)\|\\ &\leq\lambda_2^{-1}K_u\|x_1-x_2\|\|v_1\|\\ &\leq\lambda_2^{-1}K_u\lambda_1(1+\beta_u^{-1})\|z_1-z_2\|\|v_1\|. \end{align*} Taking into account that $\|v_1\|,\|v_2\|\leq\mu_1^{-1}(1-\beta_u^{-1})^{-1}\|w\|$ \[ \Lip(D_\cdot (\Gamma\sigma))\leq \frac{\lambda_1}{\mu_1(1-\beta_u^{-1})}\Lip(D_{\cdot}\sigma)+\frac{\mu_2}{\mu_1}\lambda_2^{-1}K_u\lambda_1\frac{(1+\beta_u^{-1})}{(1-\beta_u^{-1})}+K_u\frac{(1+\beta_u^{-1})^2}{\mu_1(1-\beta_u^{-1})} \] and this gives the condition of invariance of the proposition. \end{proof} \begin{proposition}\label{Prop:ContGraphtransC0} For all $\sigma_1,\sigma_2\in C^1_{u}(B_r^u(x),B_r^s(y))$ \[ \sup_{z\in B^u_r(F_u(x,y))}\|(\Gamma\sigma_1)(z)-(\Gamma\sigma_2)(z)\|\leq [\lambda_1+\lambda_1^2\mu_1^{-1}\beta_u^{-1}+\mu_2\mu_1^{-1}\lambda_1\beta_u^{-1}]\sup_{t\in B^u_r(x)}\|\sigma_1(t)-\sigma_2(t)\| \] Then if \[ \lambda_1+\lambda_1^2\mu_1^{-1}\beta_u^{-1}+\mu_2\mu_1^{-1}\lambda_1\beta_u^{-1}<1 \] $\Gamma:C^1_{u}(B_r^u(x),B_r^s(y))\rightarrow C^1_{u}(B_r^u(F_u(x,y)),B_r^s(F_s(x,y)))$ is a contraction in the $C^0$ topology. \end{proposition} \begin{proof} Take $\sigma_1,\sigma_2\in C^1_{u}(B_r^u(x),B_r^s(y))$, and $z\in B_{r}^u(F_u(x,y))$, and suppose that $x_1,x_2\in B_r^u(x)$ are such that $F_u(x_1,\sigma_1(x_1))=z$ and $F_u(x_2,\sigma_2(x_2))=z$. \begin{align*} \|(\Gamma\sigma_1)(z)-(\Gamma\sigma_2)(z)\|&=\|F_s(x_1,\sigma_1(x_1))-F_s(x_2,\sigma_2(x_2))\|\\ &\leq \|F_s(x_1,\sigma_1(x_1))-F_s(x_1,\sigma_2(x_1))\|+\| F_s(x_1,\sigma_2(x_1))-F_s(x_2,\sigma_2(x_2))\|\\ &\leq \lambda_1\|\sigma_1(x_1)-\sigma_2(x_1)\|+\lambda_1\Lip(\sigma_1)\|x_1-x_2\|+\Lip(F)\beta_u^{-1}\|x_1-x_2\|. \end{align*} The following estimates hold \begin{align} \|x_1-x_2\|&=\|x_1-(F_u\circ(id,\sigma_2))^{-1}\circ(F_u\circ(id,\sigma_1))(x_1)\|\nonumber\\ &=\|x_1-(F_u\circ(id,\sigma_2))^{-1}[F_u\circ(id,\sigma_2)(x_1)+F_u\circ(id,\sigma_1)(x_1)-F_u\circ(id,\sigma_2)(x_1)]\|\nonumber\\ &\leq\|x_1-x_1\|+\|D_{\overline x}(F_u\circ(id,\sigma_2))^{-1}\|\|F_u\circ(id,\sigma_1)(x_1)-F_u\circ(id,\sigma_2)(x_1)\|\nonumber\\ &\leq \|D_{\overline x}F_u\|^{-1}\lambda_1\|\sigma_1(x_1)-\sigma_2(x_1)\|\nonumber\\ &\leq \mu_1^{-1}\lambda_1\|\sigma_1(x_1)-\sigma_2(x_1)\|\label{Eq:estdiffpoints} \end{align} and hence \[ \|(\Gamma\sigma_1)(z)-(\Gamma\sigma_2)(z)\|\leq [\lambda_1+\lambda_1^2\mu_1^{-1}\beta_u^{-1}+\mu_2\mu_1^{-1}\lambda_1\beta_u^{-1}]\|\sigma_1(x_1)-\sigma_2(x_1)\|. \] \end{proof} Consider $V\subset \mathcal C^u$ any linear subspace of dimension $n_u$ contained in $\mathcal C^u$. This is uniquely associated to $L:\mb R^{n_u}\rightarrow\mb R^{n_s}$, such that $(\Id, L)(\mb R^{n_u})=V$. 
\begin{definition} Given any two $V_1,V_2\subset\mathcal C^u$ linear spaces of dimension $n_u$, we can define the distance \begin{equation}\label{Eq:Defd_u} d_u(V_1,V_2):=\sup_{\substack{u\in \mb R^{n_u}\\ \|u\|=1}}\|L_1(u)-L_2(u)\|. \end{equation} \end{definition} This is the operator norm of the difference of the two linear maps defining the subspaces. \begin{proposition}\label{Prop:ContTangentSpace} If \[ \mu_1^{-1}\left[\lambda_1+\frac{\beta_u\lambda_1}{\mu_1(1-\beta_u)}\right]<1 \] then $D_zF$ is a contraction with respect to $d_u$ for all $z\in\mathbb T^n$. \end{proposition} \begin{proof} Pick $L_1,L_2:\mb R^{n_u}\rightarrow\mb R^{n_s}$ with $\|L_i\|<\beta_u$. They define linear subspaces $V_i=(\Id,L_i)(\mb R^{n_u})$ which, as a consequence of the condition on the norm of $L_i$, are tangent to the unstable cone. $V_1$ and $V_2$ are transformed by $D_z F$ into subspaces $V_1'$ and $V_2'$. These subspaces are the graphs of linear transformations $L'_1,L'_2:\mb R^{n_u}\rightarrow\mb R^{n_s}$ ($\|L_i'\|\leq\beta_u$). Analogously to the graph transform, one can find an explicit expression for $L_i'$ in terms of $L_i$: \[ L_i'=\Pi_s\circ D_zF\circ (\Id,L_i)\circ[\Pi_u\circ D_zF\circ (\Id, L_i)]^{-1}. \] To prove the proposition we then proceed analogously to the proof of Proposition \ref{Prop:ContGraphtransC0}. Pick $u\in\mb R^{n_u}$ and suppose that $u_1,u_2\in\mb R^{n_u}$ are such that \[ (\Id,L_1')(u)=D_zF\circ (\Id,L_1)(u_1)\quad\mbox{and}\quad(\Id,L_2')(u)=D_zF\circ (\Id,L_2)(u_2). \] With the above definitions \begin{align*} \|L_1'(u)-L_2'(u)\|&=\|\Pi_s\circ D_zF(\Id,L_1)(u_1)-\Pi_s\circ D_zF(\Id,L_2)(u_2)\|\\ &\leq\|\Pi_s\circ D_zF(\Id,L_1)(u_1)-\Pi_s\circ D_zF(\Id,L_2)(u_1)\|\\ &\phantom{=}+\|\Pi_s\circ D_zF(\Id,L_2)(u_1-u_2)\|\\ &\leq \lambda_1\|L_1-L_2\|\|u_1\|+\beta_u\mu_2(1+\beta_u)\|u_1-u_2\|. \end{align*} \begin{align*} \|u_1-u_2\|&=\|u_1-[\Pi_u\circ D_zF\circ (\Id, L_2)]^{-1}\Pi_u\circ D_zF\circ (\Id, L_1)(u_1)\|\\ &= \|[\Pi_u\circ D_zF\circ (\Id, L_2)]^{-1}\Pi_u\circ D_zF\circ (0, L_1-L_2)(u_1)\|\\ &\leq\|\Pi_u\circ D_zF\circ (\Id, L_2)^{-1}\|\beta_u\lambda_1\|L_1-L_2\|\|u_1\|\\ &\leq \frac{\beta_u\lambda_1}{\mu_1(1-\beta_u)}\|L_1-L_2\|\|u_1\| \end{align*} The two estimates together imply that \[ \|L_1'-L_2'\|\leq\mu_1^{-1}\left[\lambda_1+\frac{\beta_u\lambda_1}{\mu_1(1-\beta_u)}\right]\|L_1-L_2\|. \] \end{proof} \section{Proof of Theorem \ref{MTheo:B}}\label{appendix:thmc} Let $g \colon \mathbb T\to \mathbb T$ be given and, for $\omega\in \mb R$, define $g_\omega=g+\omega$. Let $\underline \omega=(\dots,\omega_n,\omega_{n-1},\dots,\omega_0)$ with $\omega_i\in (-\varepsilon',\varepsilon')$, where $\varepsilon'>0$ is small. Define $g^k_{\underline \omega}= g_{\omega_k} \circ \dots \circ g_{\omega_1}\circ g_{\omega_0}$. \begin{proposition}\label{Prop:ConvergenceToAtt} Let $g\colon \mathbb T\to \mathbb T$ be $C^2$ and hyperbolic (in the sense of Definition~\ref{Def:AxiomA}), and assume that $g$ has an attracting set $\Lambda$ (consisting of periodic orbits). Then there exist $\chi\in (0,1)$, $C>0$ so that for each $\varepsilon>0$ and $T_0=2/\varepsilon$ the following holds. 
There exists a set $\Omega\subset \mathbb T$ of measure at least $1-\varepsilon^{1-\chi}$ so that for any $k\ge T_0$ and any $\underline \omega= (\dots,\omega_n,\omega_{n-1},\dots,\omega_0)$ with $|\omega_i|\le C\varepsilon$, \begin{itemize} \item $g^k_{\underline \omega}$ maps each component $J$ of $\Omega$ into components of the immediate basin of a periodic attractor of $g$; \item the distance of $g^k_{\underline \omega}(J)$ to a periodic attractor of $g$ is at most $\varepsilon$. \end{itemize} \end{proposition} The proof of this proposition follows from the next two lemmas: \begin{lemma} Let $g\colon \mathbb T\to \mathbb T$ be $C^2$ and hyperbolic (in the sense of Definition~\ref{Def:AxiomA}), and assume that $g$ has an attracting set $\Lambda$ (consisting of periodic orbits). Then the repelling hyperbolic set $\Upsilon=\mathbb T \setminus W^s(\Lambda)$ of $g$ is a Cantor set with Hausdorff dimension $\chi'<1$. Moreover, for each $\chi\in (\chi',1)$, the Lebesgue measure of the $\varepsilon$-neighborhood $N_\varepsilon(\Upsilon)$ of $\Upsilon$ is at most $\varepsilon^{1-\chi}$ provided $\varepsilon>0$ is sufficiently small. \end{lemma} \begin{proof} It is well known that the set $\Upsilon$ is a Cantor set, see \cite{MS}. Notice that by definition $g^{-1}(\Upsilon)=\Upsilon$. It is also well known that the Hausdorff dimension of a hyperbolic set $\Upsilon$ associated to a $C^2$ one-dimensional map is $<1$ and that this dimension is equal to its Box dimension, see \cite{MR1489237}. Now take a covering of $\Upsilon$ with intervals of length $\varepsilon$, and let $N(\varepsilon)$ be the smallest number of such intervals that are needed. By the definition of Box dimension, $\lim_{\varepsilon\to 0} \frac{\log N(\varepsilon)}{\log(1/\varepsilon)} = \chi'$. It follows that $N(\varepsilon)\le \frac{1}{\varepsilon^{\chi}}$ for $\varepsilon>0$ small, and hence that the Lebesgue measure of $N_\varepsilon(\Upsilon)$ is at most $N(\varepsilon)\varepsilon \le \varepsilon^{1-\chi}$ for $\varepsilon>0$ small. \end{proof} For simplicity assume that $n=1$ in Definition~\ref{Def:AxiomA}. As in Subsection~\ref{subsec:mather} the general proof can be reduced to this case. \begin{lemma} Let $g$ and $g^k_{\underline \omega}$ be as above. Then there exists $C>0$ so that for each $\varepsilon>0$ sufficiently small, and taking $\widetilde N= N_\varepsilon(\Upsilon)$ and $|\omega_i|< \varepsilon'=C\varepsilon$, we have the following: \begin{enumerate} \item $g^k_{\underline \omega}(\mathbb T \setminus \widetilde N)\subset \mathbb T \setminus \widetilde N$ for all $k\ge 1$. \item $\mathbb T\setminus \widetilde N$ consists of at most $1/\varepsilon$ intervals. \item Take $T_0=2/\varepsilon$. Then for each $k\ge T_0$, $g^k_{\underline \omega}$ maps each component $J$ of $\mathbb T\setminus \widetilde N$ into a component of the immediate basin of a periodic attractor of $g$. Moreover, $g^k_{\underline \omega}(J)$ has length $<\varepsilon$ and has distance $<\varepsilon$ to a periodic attractor of $g$. \end{enumerate} \end{lemma} \begin{proof} The first statement follows from the fact that we assume that $|Dg|>1$ on $\Upsilon$, because $\Upsilon$ is backward invariant, and by continuity. To prove the second statement let $J_i$ be the components of $\mathbb T \setminus N_{\varepsilon/4}(\Upsilon)$. If $J_i$ has length $<\varepsilon$ then $J_i$ is contained in $N_{\varepsilon}(\Upsilon)$. 
So the remaining intervals $J_i$ all have length $\ge \varepsilon$ and cover $\mathbb T \setminus N_{\varepsilon}(\Upsilon)$. The second statement follows. To see the third statement, notice that the only components of $\mathbb T \setminus \Upsilon$ containing periodic points are those that contain periodic attractors. Since $\Upsilon$ is fully invariant, $\mathbb T \setminus \Upsilon$ is forward invariant. In particular, if $J'$ is a component of $\mathbb T \setminus \Upsilon$ then there exists $k$ so that $g^k(J')$ is contained in the immediate basin of a periodic attractor of $g$ and $J',\dots,g^k(J')$ are all contained in different components of $\mathbb T \setminus \Upsilon$. This, together with 1) and 2), implies that each component of $\mathbb T\setminus \widetilde N$ is mapped in at most $1/\varepsilon$ steps into the immediate basin of a periodic attractor of $g$. Since the periodic attractor is hyperbolic, it follows that after at most $1/\varepsilon$ further iterates this interval has length $<\varepsilon$ and has distance at most $\varepsilon$ to a periodic attractor (here we use that $\varepsilon>0$ is sufficiently small so that also $2/\varepsilon>m$). \end{proof} \begin{proof}[Proof of Theorem \ref{MTheo:B}] a) Fix an integer $\sigma\ge 2$, $\alpha\in \mb R$, $\kappa\in (0,1]$. The map $\mathcal F\colon C^k(\mathbb T \times \mathbb T, \mb R) \to C^k(\mathbb T,\mb R)$ defined by $\mathcal F(h)(x)=\int h(x,y)\, dy$ is continuous. Since the set of hyperbolic $C^k$ maps $g\colon \mathbb T \to \mathbb T$ is open and dense in the $C^k$ topology, see \cite{MR2342693}, it follows that the set of $C^k$ functions $h\in C^k(\mathbb T \times \mathbb T, \mb R)$ for which $x\mapsto \sigma x + \alpha \kappa \int h(x,y) \, dm_1(y) \mod 1$ is hyperbolic is also open and dense in the $C^k$ topology, which proves the first statement of the theorem. (The above is true for $k\in\mb N$, $k=\infty$, or $k=\omega$.) To prove b), first of all recall that if $g\in C^k(\mathbb T,\mathbb T)$ is a hyperbolic map with a critical point $x\in\mathbb T$, then $g$ has a periodic attractor and $x$ belongs to its basin. If $h\in C^k(\mathbb T \times \mathbb T, \mb R)$ is such that $\mathcal F(h)$ is not constant, then, since the derivative of $\mathcal F(h)$ has zero average on $\mathbb T$, \begin{equation}\label{Eq:negcond} \exists x\in\mathbb T\quad\mbox{ s.t. }\quad\frac{d\mathcal F(h)(x)}{dx}<0 \end{equation} Condition \eqref{Eq:negcond} holds for an open and dense set $\Gamma''\subset C^k(\mathbb T \times \mathbb T, \mb R)$. Pick $h\in \Gamma''$; then from \eqref{Eq:negcond} it follows that there exist an open neighbourhood $V$ of $h$ and an interval $\mathcal I\subset\mb R$ such that $g_{\beta,h}(x)=\sigma x+\beta\mathcal F(h)(x)\mod 1$ has a critical point for all $h\in V$ and $\beta\in \mathcal I$. Since the map $\mathcal I\times V\rightarrow C^k(\mathbb T,\mathbb T)$, $(\beta,h)\mapsto g_{\beta,h}$, is continuous, there is an open and dense subset of $\mathcal I\times V$ for which the map $g_{\beta,h}$ is hyperbolic, and thus has a periodic attractor. Furthermore, if $g_{\beta,h}$ has a periodic attractor, by structural stability there is an open interval $\mathcal I_\beta\ni\beta$ such that $g_{\beta',h}$ also has a periodic attractor for all $\beta'\in\mathcal I_\beta$. Once the existence of a hyperbolic periodic attractor is established, the rest of the proof follows from Theorem \ref{Thm:Main} and Proposition \ref{Prop:ConvergenceToAtt}. \end{proof} The following two propositions contain rigorous statements regarding the example presented in the introduction of the paper; before stating them, we record a brief numerical illustration. 
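As a minimal numerical sketch (illustrative only, not used in any proof), the attracting orbits of $T_\beta$ for $\beta=1.25$ can be located by iterating a grid of initial conditions and inspecting the multiplier $|T_\beta'|$ at the limit points. The script below assumes Python with NumPy; the grid size and the number of iterates are arbitrary illustrative choices.
\begin{verbatim}
# Minimal sketch (illustrative only): attracting fixed points of
# T_beta(x) = 2x - beta*sin(2*pi*x) mod 1, for beta = 1.25.
import numpy as np

beta = 1.25

def T(x):
    return (2.0 * x - beta * np.sin(2.0 * np.pi * x)) % 1.0

def DT(x):
    return 2.0 - beta * 2.0 * np.pi * np.cos(2.0 * np.pi * x)

# midpoint grid of initial conditions (avoids starting exactly on
# repelling fixed points such as x = 0)
x = (np.arange(4000) + 0.5) / 4000.0
for _ in range(500):          # long enough for almost every point to settle
    x = T(x)

limits, counts = np.unique(np.round(x, 4), return_counts=True)
for p, c in zip(limits, counts):
    print(f"limit ~ {p:.4f}, basin fraction ~ {c/len(x):.3f}, |T'| ~ {abs(DT(p)):.3f}")
# Expected outcome: two dominant limit values near 0.21 and 0.79,
# symmetric to each other modulo 1, each with |T'| well below 1.
\end{verbatim}
For this value of $\beta$ the two limit points found numerically are symmetric to each other modulo $1$ and have multiplier well below one, in agreement with Proposition~\ref{Prop:AppTbeta1} below.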
\begin{proposition} \label{Prop:AppTbeta1} For any $\beta\in \mb R$, the map $T_{\beta}(x)= 2 x - \beta \sin (2\pi x) \mod 1$ has at most two periodic attractors $O_1, O_2$ with $O_1=-O_2$. \end{proposition} \begin{proof} The map $T_\beta$ extends to an entire map on $\mb C$ and therefore each periodic attractor has a critical point in its basin \cite{MR1216719}. This implies that there are at most two periodic attracting orbits. Note that $T_\beta(-x)=-T_\beta(x)$ and therefore if $O$ is a finite set in $\mb R$ corresponding to a periodic orbit of $T_\beta$, then so is $-O$, and it follows that if $T_\beta$ has two periodic attractors $O_1$ and $O_2$ then $O_1=-O_2$. (If $T_\beta$ has only one periodic attractor $O$, then one has $O=-O$.) Notice that indeed there exist parameters $\beta$ for which $T_\beta$ has two attracting orbits. For example, when $\beta=1.25$ then $T_\beta$ has two distinct attracting fixed points. \end{proof} \begin{proposition} \label{Prop:AppTbeta2} Given $\kappa_0,\kappa_1,\dots,\kappa_m$, consider the families $T_{\beta,j}(x)= 2 x - \beta\kappa_j \sin (2\pi x) \mod 1$. Then \begin{enumerate} \item there exists an open and dense subset $\mathcal I'$ of $\mb R$ so that for each $\beta\in \mathcal I'$ each of the maps $T_{\beta,j}$, $j=1,\dots,m$ is hyperbolic. \item there exists $\beta_0>0$ and an open and dense subset $\mathcal I$ of $(-\infty,-\beta_0)\cup (\beta_0,\infty)$ so that for each $\beta\in \mathcal I$, each of the maps $T_{\beta,j}$, $j=1,\dots,m$ is hyperbolic and has a periodic attractor. \end{enumerate} \end{proposition} \begin{proof} Let $\mathcal H$ be the set of parameters $\beta\in \mb R$ so that $T_{\beta}(x)= 2 x - \beta\sin (2\pi x) \mod 1$ is hyperbolic. By \cite{MR3336841}, the set $\mathcal H$ is open and dense. It follows that $(1/\kappa_j)\mathcal H$ is also open and dense. Hence $(1/\kappa_1)\mathcal H\cap \dots \cap (1/\kappa_m)\mathcal H$ is open and dense. It follows in particular that this intersection is open and dense in $\mb R$. For each $|\beta|>2\pi$ the map $T_{\beta}(x)= 2 x - \beta\sin (2\pi x) \mod 1$ has a critical point, and so if such a map $T_\beta$ is hyperbolic then, by definition, $T_\beta$ has one or more periodic attractors (and each critical point is in the basin of a periodic attractor). So if we take $\beta_0=\max(2\pi / \kappa_1,\dots,2\pi/\kappa_m)$ the second assertion follows. \end{proof} \section{Proof of Theorem~\ref{MTheo:C}}\label{App:RandGrap} The study of global synchronization of chaotic systems has started in the eighties for systems in the ring \cite{fujisaka1983,heagy1994}. This approach was generalized for undirected networks of diffusively coupled systems merging numerical computations of Lyapunov exponents and transverse instabilities of the synchronous states. See also \cite{dorfler2014synchronization,eroglu2017} for a review. These results have been generalized to weighted and directed graphs via dichotomy estimates \cite{Pereira2014}. In our Theorem C, we make use of these ideas to obtain an open set of coupling function such that the networks will globally synchronize for random homogeneous networks. Simultaneously, our Theorem A guarantees that any coupling function in this set can exhibit hub synchronization. \begin{proof}[Proof of the Theorem~\ref{MTheo:C}] First we recall that the manifold $\mathcal S$ is invariant $F(\mathcal S) \subset \mathcal S$. 
Indeed, if the system is in $\mathcal S$ at a time $t_0$, that is, $x_1(t_0)=\cdots =x_N(t_0)$, then because $h(x(t_0),x(t_0))=0$ the whole coupling term vanishes and the evolution of the network will be given by $N$ copies of the evolution of $x(t_0)$. Hence, we notice that the dynamics on $\mathcal S$ is the dynamics of the uncoupled chaotic map, $x_i(t+1) = f(x_i(t))$ for all $t\ge t_0$ and $i=1,\dots, N$. Our goal is to show that for certain diffusive coupling functions, $\mathcal S$ is normally attracting. The proof of item a) can be adapted from \cite{Pereira2014}. \noindent {\bf Step 1. Dynamics near $\mathcal{S}$}. In a neighborhood of $\mathcal S$ we can write $x_i = s + \psi_i$, where $s(t+1) = f(s(t))$ and $| \psi_i | \ll 1$. Expanding the coupling in a Taylor series, we obtain \begin{eqnarray} \psi_i(t+1)=f^{\prime} (s(t))\psi_i(t) + \frac{\alpha}{\Delta} \sum_{j} A_{ij} [ h_1(s(t),s(t))\psi_i(t) + h_2(s(t),s(t))\psi_j(t) + R(\psi_i(t),\psi_{j}(t))] \nonumber \end{eqnarray} where $h_i$ stands for the derivative of $h$ in the $i$th entry and $R(\psi_i,\psi_j)$ is a nonlinear remainder; by Taylor's theorem with Lagrange remainder we have $|R(\psi_i,\psi_j)| \le C( |\psi_i|^2 + |\psi_j|^2)$ for some positive constant $C =C(A,h,f)$. Moreover, because $h$ is diffusive \[ h_1(s(t),s(t)) = - h_2(s(t),s(t)).\] Defining $\omega(s(t)) := h_1(s(t),s(t))$ and the entries of the Laplacian matrix $L_{ij} = d_i \delta_{ij} - A_{ij}$, we can write the first variational equation in compact form by introducing $\Psi = (\psi_1, \cdots, \psi_N) \in \mathbb{R}^N$. Indeed, \begin{equation}\label{mu} \Psi(t+1) =\left [ f^{\prime} (s(t)) I_N + \frac{\alpha}{\Delta} \omega(s(t)) L \right] \Psi(t). \end{equation} Because the Laplacian is symmetric, it admits a spectral decomposition $L = U \Lambda U^*$, where $U$ is the matrix of eigenvectors and $\Lambda = \mathrm{diag}(\lambda_1,\dots,\lambda_N)$ the matrix of eigenvalues. Its eigenvalues can be organized in increasing order $$ 0=\lambda_1 < \lambda_2 \le \cdots \le \lambda_N, $$ as the operator is positive semi-definite. The eigenvalue $\lambda_1 = 0$ is always in the spectrum as every row of $L$ sums to zero. Indeed, for $\mathbf{1} = (1,\cdots,1) \in \mathbb{R}^N$ we have $L \mathbf{1} = 0$. Notice that this direction $\mathbf{1}$ is associated with the synchronization manifold $\mathcal S$. All the remaining eigenvectors correspond to transversal directions to $\mathcal S$. The Laplacian $L$ has a spectral gap $\lambda_2 >0$ because the network is connected, as is shown in Theorem \ref{SpectralBounds}. So, we introduce new coordinates $\Theta = U^* \Psi$ to diagonalize $L$. Notice that the component of $\Psi$ along $\mathbf{1}$ corresponds to perturbations inside $\mathcal S$, so the transversal dynamics is described by the remaining coordinates. Writing $\Theta = (\theta_1,\dots,\theta_N)$, we obtain the dynamics for the $i$-th component $$ \theta_i(t+1) = [ f^{\prime} (s(t)) + \frac{\alpha}{\Delta} \lambda_i \omega(s(t)) ] \theta_i(t). $$ Thus, we have decoupled all transversal modes. Since we are interested in the transverse directions we only care about $\lambda_i > 0$. This is equivalent to the linear evolution of Eq. (\ref{mu}) restricted to the subspace orthogonal to $\mathbf{1}$. \noindent {\bf Step 2. Parametric Equation for Transversal Modes}. As we discussed, the modes $\theta_i$ with $i=2,\cdots,N$ correspond to the dynamics transversal to $\mathcal{S}$. If these modes are damped, the manifold $\mathcal{S}$ will be normally attracting.
Because all equations are the same up to a factor $\lambda_i$, we can tackle them all at once by considering a parametric equation \begin{equation}\label{para} z(t+1) = [ f^{\prime} (s(t)) + \beta \omega(s(t)) ] z(t). \end{equation} This equation will have a uniformly exponentially attracting trivial solution if \begin{equation}\label{st} \nu := \sup_{t>0} \left| f^{\prime}(s(t)) + \beta \omega(s(t)) \right| < 1. \end{equation} Now pick any $\varphi\in C^1(\mathbb T; \mathbb R)$ with $\frac{d\varphi}{dx}(0)\neq 0$, and suppose that $h'(x,y)$ is a diffusive coupling function with $\|h'(x,y)-\varphi(y-x)\|_{C^1}<\varepsilon$. Because $f^{\prime}(s(t)) = \sigma$ and \[ \omega(s(t)) = -\frac{d\varphi}{dx}(0) + \frac{\partial}{\partial x}\left[h'(x,y)-\varphi(y-x)\right] (s(t),s(t)), \] the condition in Eq. (\ref{st}) is always satisfied as long as \begin{equation}\label{Eq:Ineqbeta} \left | \sigma - \beta \frac{d\varphi}{dx}(0)\right | + |\beta|\varepsilon < 1. \end{equation} Suppose that $\frac{d\varphi}{dx}(0)>0$ (the negative case can be dealt with analogously). Define \[ \beta_c^1 := (\sigma-1) \left(\frac{d\varphi}{dx}(0)\right)^{-1} \mbox{~ and ~} \beta_c^2 := (\sigma+1) \left(\frac{d\varphi}{dx}(0)\right)^{-1}. \] Then, there is an interval $\mathcal I \subset (\beta_c^1, \beta_c^2)$ such that for all $\beta \in \mathcal I$ the inequality \eqref{Eq:Ineqbeta} holds. From the parametric equation we can obtain the $i$-th equation for the transverse mode by setting $\beta = \frac{\alpha}{\Delta} \lambda_i$, and the $\theta_i$'s will decay to zero exponentially fast if \begin{equation}\label{Eq:CondEig} \beta_c^1 < \frac{\alpha}{\Delta} \lambda_2 \le \cdots \le \frac{\alpha}{\Delta} \lambda_N < \beta_c^2. \end{equation} Hence, if the eigenvalues satisfy \begin{equation}\label{SyncC} \frac{\lambda_N}{\lambda_2} < \frac{\sigma+1}{\sigma-1}, \end{equation} \noindent then one can find an interval $I\subset\mb R$ for the coupling strength, such that Eq. \eqref{Eq:CondEig} is satisfied for every $\alpha\in I$. \noindent {\bf Step 3. Bounds for Laplacian Eigenvalues}. Theorem \ref{Gp} below shows that almost every graph $G \in \mathcal{G}_p$ satisfies $$ \frac{\lambda_N(G)}{\lambda_2(G)} = 1 + o(1). $$ Hence, condition Eq. (\ref{SyncC}) is met and we guarantee that the transversal instabilities are damped uniformly and exponentially fast, and as a consequence the manifold $\mathcal{S}$ is normally attracting. We illustrate such a network in Figure \ref{2k}. Indeed, since the coordinates $\theta_i$ of the linear approximation decay to zero exponentially, $|\theta_i(t)| \le C e^{-\eta t}$ for all $i=2,\cdots,N$ with $\eta>0$, the full nonlinear equations synchronize. Indeed, $\| \Psi (t) \| \le \widetilde C e^{-\eta t}$, which means that the first variational equation Eq. (\ref{mu}) is uniformly stable. To tackle the nonlinearities in the remainder, we notice that for any $\varepsilon>0$ there is $\delta_0>0$ and $ C_{\varepsilon}>0$ such that for all $| x_i (t_0) - x_j(t_0) | \le \delta_0$, the nonlinearity is small and by a Gr\"onwall type estimate we have $$ | x_i (t) - x_j(t) | \le C_{\varepsilon} e^{- (t-t_0) (\eta - \varepsilon)}. $$ This happens precisely when condition Eq. (\ref{SyncC}) is satisfied. The open set of coupling functions follows since uniform exponential attractivity is an open property. The proof of item a) is therefore complete. For the proof of $b)$ we use Steps 1 and 2, and only change the spectral bounds.
From Theorem \ref{SpectralBounds} we obtain $$ \frac{\lambda_N}{\lambda_2} > \frac{d_{N,N}}{d_{1,N}}, $$ hence, as heterogeneity increases, the ratio tends to infinity for $N \rightarrow \infty$ and condition Eq. (\ref{SyncC}) is never met, regardless of the value of $\alpha$. Hence there are always unstable modes, and the synchronization manifold $\mathcal S$ is unstable. \end{proof} The spectrum of the Laplacian is related to many important graph invariants, in particular to the diameter $D$ of the graph, which is the maximum distance between any two nodes. If the graph is connected, then $D$ is finite. \begin{theorem}\label{SpectralBounds} Let $G$ be a simple network of size $N$ and $L$ its associated Laplacian. Then: \begin{enumerate} \item \emph{\cite{Mohar91}} $ \lambda_2 \ge \displaystyle \frac{4}{N D} $ \item \emph{\cite{Fiedler73}} $ \lambda_2 \le \displaystyle \frac{N}{N-1} d_1 $ \item \emph{\cite{Fiedler73}} $ \frac{N}{N-1}d_{\max} \le \lambda_N \le 2 d_{\max} $ \end{enumerate} \label{boundl} \end{theorem} The proofs can be found in the references cited in the theorem. \begin{theorem}[\cite{Mohar92}]\label{Gp} Consider the ensemble of random graphs $\mathcal{G}_p$ with $p > \frac{\log N}{N}$, then a.s. $$ \lambda_2 > Np - f(N) \,\,\,\, \mbox{~and~} \,\,\,\, \lambda_N< pN + f(N) $$ where $$ f(N) = \sqrt{(3+\varepsilon)(1-p)pN \log N} $$ for $\varepsilon>0$ arbitrary. \end{theorem} \noindent {\it Regular Networks.} Consider a network of $N$ nodes, in which each node is coupled to its $2K$ nearest neighbors. See an illustration in Figure \ref{2k} for $K=2$. In such a regular network every node has the same degree $2K$. \begin{figure} \caption{On the left panel we present a regular network where every node connects to its $2$ left and $2$ right nearest neighbors. Such networks show poor synchronization properties in the large $N$ limit if $K\ll N$, as shown in Eq. (\ref{reg}). On the right panel, we depict a random (Erd\H{o}s-R\'enyi) network where every connection is a Bernoulli random variable with success probability $p=0.3$. Such random networks tend to be homogeneous (nodes have about $pN$ connections) and they exhibit excellent synchronization properties. } \label{2k} \end{figure}\\ Whenever $K \ll N$, the network will not display synchronization. This is because the diameter $D$ of the network (the maximal distance between any two nodes) is proportional to $N$. In this case, roughly speaking, the network is essentially disconnected as $N \rightarrow \infty$. However, as $K \rightarrow N/2$ the network is optimal for synchronization. Here the diameter of the network is extremely small as the graph is close to a full graph. Indeed, since the Laplacian is circulant, it can be diagonalized by the discrete Fourier transform, and the eigenvalues of a regular graph can be obtained explicitly \cite{Barahona2002} \[ \lambda_j = 2K + 1 -\frac{\sin \left( \frac{(2K +1)\pi (j-1) }{N}\right)}{\sin\left(\pi (j-1) /N\right)}, \mbox{~for~} j=2,\dots,N. \] Hence, we can obtain the asymptotics for $K \ll N$ in the synchronization condition Eq. (\ref{SyncC}). Using a Taylor expansion in this expression, we obtain \begin{equation}\label{reg} \frac{\lambda_N}{\lambda_2} \approx \frac{(3\pi +2)N^2}{2\pi^3K^2 } \end{equation} Hence, when $K \ll N$ synchronization is never attained. From a graph theoretic perspective, when $K\ll N$, e.g.
$K$ is fixed and $N \rightarrow \infty$, then $\lambda_2 \sim 1/N^2$, implying that the bound in Theorem \ref{SpectralBounds} is tight, as the diameter of such networks is roughly $D \sim N$. This is in stark contrast to random graphs, where the mean degree of each node is approximately $d_{i,N} = pN$. However, even in the limit $d_{i,N} \ll N$, randomness drastically reduces the diameter of the graph; in fact, in this model we have $D \propto \log N$ (again for $p > \log N / N$). Although regular graphs exhibit a quite different synchronization scenario when compared to homogeneous random graphs, if we include a layer of highly connected nodes they can still exhibit distinct dynamics across levels. \section{Random Graphs}\label{Sec:ApRandGrap} A {\em random graph model} of size $N$ is a probability measure on the set $\mathcal G(N)$ of all graphs on $N$ vertices. Very often random graphs are defined by models that assign probabilities to the presence of given edges between two nodes. The random graphs we consider here are a slight generalization of the model proposed in \cite{FanChung}, adding a layer of hubs to their model. Our terminology is that of \cite{FanChung,Bollobas}. Let ${\bf w}(N)=(w_1,\dots,w_N)$ be an ordered vector of positive real numbers, i.e. such that $w_1\le w_2 \le \dots \le w_N$. We construct a random graph where the expectation of the degrees is close to the values listed in ${\bf w}(N)$ (see Proposition~\ref{prop:expectationdegree}). Let $\rho=1/(w_1+\dots+w_N)$. Given integers $0\le M<N$, we say that ${\bf w}$ is an {\em admissible heterogeneous vector of degrees with $M$ hubs and $L=N-M$ low degree nodes}, if \begin{equation} w_{N} w_{L}\rho\leq 1 . \label{eq:prob} \end{equation} To such a vector $w={\bf w}(N)$ we associate the probability measure $\mathbb P_{\bf w}$ on the set $\mathcal G(N)$ of all graphs on $N$ vertices, i.e., on the space of $N\times N$ random adjacency matrices $A$ with coefficients $0$ and $1$, taking the entries of $A$ independent and such that \[ \mathbb P_{\bf w}(A_{in}=1) =\left\{ \begin{array}{c r} w_i w_n \rho & \mbox{~ when ~} i\le L \mbox{~\, or ~} n\le L\\ r&\mbox{~when~} i,n> L \end{array} \right. \] We assigned a constant probability $0\leq r\leq 1$ of having a connection among the hubs to simplify computations later, but a different probability could have been assigned without changing the final outcome. Notice that the admissibility condition \eqref{eq:prob} ensures that the above probability is well defined. The pair $\mathcal G_{\bf w}=(\mathcal G(N),\mathbb P_{\bf w})$ is called a {\em random graph} of size $N$. We are going to prove the following proposition. \begin{proposition}\label{Prop:SatHetRand} Let $\{{\bf w}(N)\}_{N\in\mb N}$ be a sequence of admissible vectors of heterogeneous degrees such that ${\bf w}(N)$ has $M:=M(N)$ hubs. If there exists $p\in[1,\infty)$ such that the entries of the vector satisfy \begin{align} &\lim_{N\rightarrow\infty}{w_1}^{-1}L^{1/p}{\beta}^{1/q}=0 \label{Cond1}\\ &\lim_{N\rightarrow\infty}{w_1}^{-1/p}{M}^{1/p}=0 \label{Cond2}\\ &\lim_{N\rightarrow\infty}{w_1}^{-2}{\beta}L^{1+2/p}=0 \label{Cond3}\\ &\lim_{N\rightarrow\infty}w_1^{-1}ML^{1/p}=0\label{Cond4} \end{align} with $\beta(N):=\max\{w_{L}, N^{1/2}\log N\}$, then for any $\eta>0$ the probability that a graph in $\mathcal G_{\bf w}$ satisfies \eqref{Eq:ThmCond1}--\eqref{Eq:ThmCond3} tends to 1, for $N\rightarrow\infty$.
\end{proposition} To prove the theorem above we need the following result on concentration of the degrees of a random graph around their expectation. \begin{proposition}\label{prop:expectationdegree} Given an admissible vector of degrees ${\bf w}$ and the associated random graph $\mathcal G_{\bf w}$, the in-degree of the $k-$th node, $d_k=\sum_{\ell=1}^{n}A_{k\ell}$, satisfies for every $k\in \mb N$ and $C\in\mb R^+$ \[ \mathbb P\left(|d_k-\mathbb E[d_k]|>C\right)\leq \exp\left\{-\frac{NC^2}{2}\right\}, \] where \[ \mathbb E[d_k]=\left\{\begin{array}{cr} w_k&1\leq k\leq L\\ w_k\left(1-\rho\sum_{\ell=L+1}^Nw_\ell\right)+Mr&k>L \end{array}\right.. \] \end{proposition} \begin{proof} Suppose $1\leq k\leq L$: \[ \mathbb E[d_k]=\sum_{\ell=1}^Nw_kw_\ell\rho=w_k, \] From Hoeffding inequality we know that \[ \mathbb P\left(\left|\frac{1}{N}\sum_{\ell=1}^{N}A_{k\ell}-\frac{w_k}{N}\right|>\frac{C}{N}\right)\leq 2\exp\{-N C^2/2\}. \] Suppose $k>L$. \[ \mathbb E[d_k]=\sum_{\ell=1}^Lw_kw_\ell\rho+r M=w_k\left(1-\rho\sum_{\ell=L+1}^Nw_\ell\right)+r M. \] Again by Hoeffding \[ \mathbb P\left(\left|d_k-\mathbb E[d_k]\right|>N\varepsilon\right)\leq 2\exp\{-N\varepsilon^2/2\}. \] \end{proof} \begin{proof}[Proof of Proposition \ref{Prop:SatHetRand}] For every $N\in\mb N$ consider the graphs in $\mathcal G(N)$ \[ Q_N:=\bigcap_{k=1}^N\{|d_k-\mathbb E[d_k]|<C_k(N)\} \] for given numbers $\{C_k(N)\}_{N\in\mb N,k\in[N]}\subset\mb R^+$. Since $d_k$ are independent random variables, one obtains \begin{align*} \mathbb P(Q_N)&\geq\prod_{k=1}^N\left(1-\exp\left\{-K\frac{C_k(N)^2}{N}\right\}\right) \end{align*} if we choose $C_k(N)=(N\log(N))^{1/2}g(N)$ with $g(N)\rightarrow\infty$ at any speed then \[ \lim_{n\rightarrow\infty}\mathbb P(Q_N)=1. \] \noindent Taken any graph $G\in Q_N$, the maximum degree satisfies \begin{align*} \Delta\geq w_1\left(1-\mathcal O(M^{-1}w_1^{-1}L)\right)-C(N) \end{align*} and the maximum degree for a low degree node will be $\delta<w_{L}+C(N)$. So, from conditions \eqref{Cond1}-\eqref{Cond4}, in the limit for $N\rightarrow\infty$ \begin{align*} \frac{M^{1/p}}{\Delta^{1/p}}&\leq \frac{M^{1/p}}{w_1^{1/p}}\frac{1}{\left[1-\frac{C(N)}{w_1}\right]^{1/p}}\rightarrow 0\\ \frac{N^{1/p}\delta^{1/q}}{\Delta}&\leq\frac{N^{1/p}\left[w_{L}+C(N)\right]^{1/q}}{w_1\left[1-\frac{C(N)}{w_1}\right]}\leq\frac{\left[\frac{L^{q/p}w_{L}}{w_1^{q}}+\frac{L^{q/p}C(N)}{w_1^{q}}\right]^{1/q}}{\left[1-\frac{C(N)}{w_1}\right]}\rightarrow 0\\ \frac{ML^{1/p}}{\Delta}&\leq\frac{ML^{1/p}}{w_1}\frac{1}{\left[1-\frac{C(N)}{w_1(n)}\right]}\rightarrow 0\\ \frac{L^{1+2/p}\delta}{\Delta^{2}}&\leq \frac{L^{1+2/p}}{w_1^2}\frac{\left[w_{L}+C(N)\right]}{\left[1-\frac{C(N)}{w_1}\right]^{2}}=\frac{\left[\frac{L^{1+2/p}w_{L}}{w_1^2}+\frac{L^{1+2/p}C(N)}{w_1^2}\right]}{\left[1-\frac{C(N)}{w_1}\right]^{2}}\rightarrow 0 \end{align*} which proves the proposition. \end{proof} \end{appendices} \ \end{document}
arXiv
# FFT in signal processing: Discrete Fourier Transform The Discrete Fourier Transform (DFT) is a mathematical algorithm that transforms a sequence of values representing a signal into a frequency spectrum. It is widely used in signal processing for analyzing the frequency components of a signal. Consider a signal $x(t)$ sampled at a rate of $T$ with a length of $N$ samples. The DFT of the signal can be computed using the following formula: $$X(k) = \sum_{n=0}^{N-1} x(n) e^{-j2\pi kn/N}$$ where $k$ is the frequency index, $j$ is the imaginary unit, and $N$ is the number of samples. In Python, the DFT can be computed using the `numpy.fft` module. Here's an example: ```python import numpy as np x = np.array([1, 2, 3, 4]) # Sample signal X = np.fft.fft(x) print(X) ``` ## Exercise Compute the DFT of the following signal: $$x(t) = \left\{ \begin{array}{ll} 1 & \text{if } 0 \le t < 1 \\ 2 & \text{if } 1 \le t < 2 \\ 3 & \text{if } 2 \le t < 3 \\ 4 & \text{if } 3 \le t < 4 \\ \end{array} \right.$$ # FFT in image processing: 2D Discrete Fourier Transform The 2D Discrete Fourier Transform (2D DFT) is an extension of the 1D DFT to two dimensions. It is used in image processing to analyze the frequency components of an image. The 2D DFT of an image $f(x, y)$ can be computed using the following formula: $$F(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y) e^{-j2\pi(ux/M + vy/N)}$$ where $u$ and $v$ are the frequency indices, $M$ and $N$ are the image dimensions, and $j$ is the imaginary unit. In Python, the 2D DFT can be computed using the `numpy.fft` module. Here's an example: ```python import numpy as np f = np.array([[1, 2], [3, 4]]) # Image data F = np.fft.fft2(f) print(F) ``` ## Exercise Compute the 2D DFT of the following image: $$f(x, y) = \left\{ \begin{array}{ll} 1 & \text{if } 0 \le x < 1, 0 \le y < 1 \\ 2 & \text{if } 1 \le x < 2, 0 \le y < 1 \\ 3 & \text{if } 0 \le x < 1, 1 \le y < 2 \\ 4 & \text{if } 1 \le x < 2, 1 \le y < 2 \\ \end{array} \right.$$ # Using NumPy for FFT in Python NumPy is a powerful library for numerical computing in Python. It provides a wide range of functions for working with arrays and performing mathematical operations. The Fast Fourier Transform (FFT) can be computed using NumPy's `fft` module. Here's an example of how to compute the FFT of a signal using NumPy: ```python import numpy as np x = np.array([1, 2, 3, 4]) # Signal data X = np.fft.fft(x) print(X) ``` ## Exercise Compute the FFT of the following signal: $$x(t) = \left\{ \begin{array}{ll} 1 & \text{if } 0 \le t < 1 \\ 2 & \text{if } 1 \le t < 2 \\ 3 & \text{if } 2 \le t < 3 \\ 4 & \text{if } 3 \le t < 4 \\ \end{array} \right.$$ # Image processing with Python: PIL and OpenCV Python Image Library (PIL) and OpenCV are popular libraries for image processing in Python. They provide functions for loading, manipulating, and analyzing images. Here's an example of how to load and display an image using PIL: ```python from PIL import Image image = Image.open("image.jpg") image.show() ``` And here's an example of how to load and display an image using OpenCV: ```python import cv2 image = cv2.imread("image.jpg") cv2.imshow("Image", image) cv2.waitKey(0) cv2.destroyAllWindows() ``` ## Exercise Load and display the following image using PIL: ``` image.jpg ``` # Signal processing with Python: SciPy and PyWavelets SciPy is a library for scientific computing in Python. It provides functions for performing various types of signal processing, including filtering and spectral analysis. 
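For instance, here is a minimal sketch of spectral analysis with SciPy's `periodogram` function (the test signal, its 50 Hz tone, and the sampling rate below are made up for illustration):

```python
import numpy as np
from scipy.signal import periodogram

fs = 1000                                   # Sampling rate in Hz (chosen for this sketch)
t = np.arange(0, 1.0, 1 / fs)               # One second of samples
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)  # 50 Hz tone plus noise

f, Pxx = periodogram(x, fs=fs)              # Frequencies and power spectral density
print(f[np.argmax(Pxx)])                    # Dominant frequency, close to 50 Hz
```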
PyWavelets is a library for wavelet analysis in Python. Here's an example of how to perform a bandpass filter using SciPy:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_filter(data, lowcut, highcut, fs, order=5):
    nyq = 0.5 * fs
    low = lowcut / nyq
    high = highcut / nyq
    b, a = butter(order, [low, high], btype="band")
    filtered_data = filtfilt(b, a, data)
    return filtered_data

data = np.random.rand(1000)  # Signal data
filtered_data = bandpass_filter(data, 0.1, 0.4, fs=100)
```

## Exercise

Apply a bandpass filter to the following signal:

$$x(t) = \left\{ \begin{array}{ll} 1 & \text{if } 0 \le t < 1 \\ 2 & \text{if } 1 \le t < 2 \\ 3 & \text{if } 2 \le t < 3 \\ 4 & \text{if } 3 \le t < 4 \\ \end{array} \right.$$

# Applications of FFT in image processing: Image compression, filtering, and feature extraction

The FFT can be used in image processing for various applications, including image compression, filtering, and feature extraction. These applications can be implemented using the 2D DFT and NumPy's `fft` module.

Here's an example of how to compress an image with the 2D DFT by keeping only the largest-magnitude frequency coefficients and discarding the rest:

```python
import numpy as np
from PIL import Image

# Load the image as a grayscale array of floats
image = np.asarray(Image.open("image.jpg").convert("L"), dtype=float)
F = np.fft.fft2(image)

# Keep only the 10% largest-magnitude coefficients and zero out the rest
threshold = np.percentile(np.abs(F), 90)
F_compressed = np.where(np.abs(F) >= threshold, F, 0)

# Reconstruct the image from the retained coefficients and save it
compressed_image = np.fft.ifft2(F_compressed).real
Image.fromarray(np.clip(compressed_image, 0, 255).astype(np.uint8)).save("compressed_image.jpg")
```

## Exercise

Compress the following image using the 2D DFT:

```
image.jpg
```

# Applications of FFT in signal processing: Spectral analysis, noise reduction, and detection

The FFT can be used in signal processing for various applications, including spectral analysis, noise reduction, and detection. These applications can be implemented using the 1D DFT and NumPy's `fft` module.

Here's an example of how to perform spectral analysis on a signal using the 1D DFT:

```python
import numpy as np

x = np.array([1, 2, 3, 4])  # Signal data
X = np.fft.fft(x)
spectrum = np.abs(X)
```

## Exercise

Perform spectral analysis on the following signal using the 1D DFT:

$$x(t) = \left\{ \begin{array}{ll} 1 & \text{if } 0 \le t < 1 \\ 2 & \text{if } 1 \le t < 2 \\ 3 & \text{if } 2 \le t < 3 \\ 4 & \text{if } 3 \le t < 4 \\ \end{array} \right.$$

# Conclusion and future directions

In this textbook, we have explored the applications of the FFT in signal processing and image processing using Python. We have seen how to compute the DFT and 2D DFT using NumPy, how to perform various image processing tasks using PIL and OpenCV, and how to implement various signal processing algorithms using SciPy and PyWavelets.

In future directions, researchers can continue to develop more efficient algorithms for computing the DFT, beyond the classical Fast Fourier Transform (FFT), and explore its applications in new domains, such as machine learning and artificial intelligence.

## Exercise

Research and discuss the future directions of the FFT in signal processing and image processing using Python.
Textbooks
Primary: 35C07, 35K57, 35J61; Secondary: 37B25, 92D30. Dynamics of a diffusive age-structured HBV model with saturating incidence Xichao Duan, Sanling Yuan, Kaifa Wang 1. School of Management, University of Shanghai for Science and Technology, Shanghai 200093 2. College of Science, Shanghai University for Science and Technology, Shanghai 200093 3. Department of Mathematics, School of Biomedical Engineering, Third Military Medical University, Chongqing 400038 In this paper, we propose and investigate an age-structured hepatitis B virus (HBV) model with saturating incidence and spatial diffusion where the viral contamination process is described by the age-since-infection. We first analyze the well-posedness of the initial-boundary values problem of the model in the bounded domain $\Omega\subset\mathbb{R}^n$ and obtain an explicit formula for the basic reproductive number $R_0$ of the model. Then we investigate the global behavior of the model in terms of $R_0$: if $R_0\leq1$, then the uninfected steady state is globally asymptotically stable, whereas if $R_0>1$, then the infected steady state is globally asymptotically stable. In addition, when $R_0>1$, by constructing a suitable Lyapunov-like functional decreasing along the travelling waves to show their convergence towards two steady states as $t$ tends to $\pm\infty$, we prove the existence of traveling wave solutions. Numerical simulations are provided to illustrate the theoretical results. Keywords: travelling wave solutions; spatial diffusion; basic reproductive number; global stability; age-structured HBV model. Citation: Xichao Duan, Sanling Yuan, Kaifa Wang. Dynamics of a diffusive age-structured HBV model with saturating incidence. Mathematical Biosciences and Engineering, 2016, 13(5): 935-968. doi: 10.3934/mbe.2016024 Cited by: 1. Junyuan Yang, Rui Xu, Jiaxu Li, Threshold dynamics of an age–space structured brucellosis disease model with Neumann boundary condition, Nonlinear Analysis: Real World Applications, 2019, 50, 192, 10.1016/j.nonrwa.2019.04.013 2. Junyuan Yang, Xiaoyan Wang, Dynamics and asymptotical profiles of an age-structured viral infection model with spatial diffusion, Applied Mathematics and Computation, 2019, 360, 236, 10.1016/j.amc.2019.05.007 3. Calvin Tadmon, Severin Foko, Non-standard finite difference method applied to an initial boundary value problem describing hepatitis B virus infection, Journal of Difference Equations and Applications, 2019, 1, 10.1080/10236198.2019.1709064 Copyright Info: 2016, Xichao Duan, et al., licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
CommonCrawl
Fischer group Fi24 In the area of modern algebra known as group theory, the Fischer group Fi24 or F24′ is a sporadic simple group of order    221 · 316 · 52 · 73 · 11 · 13 · 17 · 23 · 29 = 1255205709190661721292800 ≈ 1×1024. For general background and history of the Fischer sporadic groups, see Fischer group. Algebraic structure → Group theory Group theory Basic notions • Subgroup • Normal subgroup • Quotient group • (Semi-)direct product Group homomorphisms • kernel • image • direct sum • wreath product • simple • finite • infinite • continuous • multiplicative • additive • cyclic • abelian • dihedral • nilpotent • solvable • action • Glossary of group theory • List of group theory topics Finite groups • Cyclic group Zn • Symmetric group Sn • Alternating group An • Dihedral group Dn • Quaternion group Q • Cauchy's theorem • Lagrange's theorem • Sylow theorems • Hall's theorem • p-group • Elementary abelian group • Frobenius group • Schur multiplier Classification of finite simple groups • cyclic • alternating • Lie type • sporadic • Discrete groups • Lattices • Integers ($\mathbb {Z} $) • Free group Modular groups • PSL(2, $\mathbb {Z} $) • SL(2, $\mathbb {Z} $) • Arithmetic group • Lattice • Hyperbolic group Topological and Lie groups • Solenoid • Circle • General linear GL(n) • Special linear SL(n) • Orthogonal O(n) • Euclidean E(n) • Special orthogonal SO(n) • Unitary U(n) • Special unitary SU(n) • Symplectic Sp(n) • G2 • F4 • E6 • E7 • E8 • Lorentz • Poincaré • Conformal • Diffeomorphism • Loop Infinite dimensional Lie group • O(∞) • SU(∞) • Sp(∞) Algebraic groups • Linear algebraic group • Reductive group • Abelian variety • Elliptic curve History and properties Fi24 is one of the 26 sporadic groups and is the largest of the three Fischer groups introduced by Bernd Fischer (1971, 1976) while investigating 3-transposition groups. It is the 3rd largest of the sporadic groups (after the Monster group and Baby Monster group). The outer automorphism group has order 2, and the Schur multiplier has order 3. The automorphism group is a 3-transposition group Fi24, containing the simple group with index 2. The centralizer of an element of order 3 in the monster group is a triple cover of the sporadic simple group Fi24, as a result of which the prime 3 plays a special role in its theory. Representations The centralizer of an element of order 3 in the monster group is a triple cover of the Fischer group, as a result of which the prime 3 plays a special role in its theory. In particular it acts on a vertex operator algebra over the field with 3 elements. The simple Fischer group has a rank 3 action on a graph of 306936 (=23.33.72.29) vertices corresponding to the 3-transpositions of Fi24, with point stabilizer the Fischer group Fi23. The triple cover has a complex representation of dimension 783. When reduced modulo 3 this has 1-dimensional invariant subspaces and quotient spaces, giving an irreducible representation of dimension 781 over the field with 3 elements. Generalized Monstrous Moonshine Conway and Norton suggested in their 1979 paper that monstrous moonshine is not limited to the monster, but that similar phenomena may be found for other groups. Larissa Queen and others subsequently found that one can construct the expansions of many Hauptmoduln from simple combinations of dimensions of sporadic groups. 
For Fi24 (as well as Fi23), the relevant McKay-Thompson series is $T_{3A}(\tau )$ where one can set the constant term a(0) = 42 (OEIS: A030197), ${\begin{aligned}j_{3A}(\tau )&=T_{3A}(\tau )+42\\&=\left(\left({\tfrac {\eta (\tau )}{\eta (3\tau )}}\right)^{6}+3^{3}\left({\tfrac {\eta (2\tau )}{\eta (\tau )}}\right)^{6}\right)^{2}\\&={\frac {1}{q}}+42+783q+8672q^{2}+65367q^{3}+371520q^{4}+1741655q^{5}+\dots \end{aligned}}$ Maximal subgroups Linton & Wilson (1991) found the 22 conjugacy classes of maximal subgroups of Fi24 as follows: • Fi23 Centralizes a 3-transposition in the automorphism group Fi24. • 2.Fi22:2 • (3 x O+ 8 (3):3):2 • O– 10 (2) • 37.O7(3) • 31+10:U5(2):2 • 211.M24 • 22.U6(2):S3 • 21+12:3.U4(3).2 • 32+4+8.(A5 x 2A4).2 • (A4 x O+ 8 (2):3):2 • He:2 (Two classes, fused by an outer automorphism) • 23+12.(L3(2) x A6) • 26+8.(S3 x A8) • (G2(3) x 32:2).2 • (A9 x A5):2 • A7 x 7:6 • [313]:(L3(3) x 2) • L2(8):3 x A6 • U3(3):2 (Two classes, fused by an outer automorphism) • L2(13):2 (Two classes, fused by an outer automorphism) • 29:14 References • Aschbacher, Michael (1997), 3-transposition groups, Cambridge Tracts in Mathematics, vol. 124, Cambridge University Press, doi:10.1017/CBO9780511759413, ISBN 978-0-521-57196-8, MR 1423599 contains a complete proof of Fischer's theorem. • Fischer, Bernd (1971), "Finite groups generated by 3-transpositions. I", Inventiones Mathematicae, 13 (3): 232–246, doi:10.1007/BF01404633, ISSN 0020-9910, MR 0294487 This is the first part of Fischer's preprint on the construction of his groups. The remainder of the paper is unpublished (as of 2010). • Fischer, Bernd (1976), Finite Groups Generated by 3-transpositions, Preprint, Mathematics Institute, University of Warwick • Linton, Stephen A.; Wilson, Robert A. (1991), "The maximal subgroups of the Fischer groups Fi24 and Fi24'", Proceedings of the London Mathematical Society, Third Series, 63 (1): 113–164, doi:10.1112/plms/s3-63.1.113, ISSN 0024-6115, MR 1105720 • Wilson, Robert A. (2009), The finite simple groups, Graduate Texts in Mathematics 251, vol. 251, Berlin, New York: Springer-Verlag, doi:10.1007/978-1-84800-988-2, ISBN 978-1-84800-987-5, Zbl 1203.20012 • Wilson, R. A. ATLAS of Finite Group Representation. External links • MathWorld: Fischer Groups • Atlas of Finite Group Representations: Fi24
Wikipedia
Transitive closure In mathematics, the transitive closure R+ of a homogeneous binary relation R on a set X is the smallest relation on X that contains R and is transitive. For finite sets, "smallest" can be taken in its usual sense, of having the fewest related pairs; for infinite sets R+ is the unique minimal transitive superset of R. Transitive binary relations Symmetric Antisymmetric Connected Well-founded Has joins Has meets Reflexive Irreflexive Asymmetric Total, Semiconnex Anti- reflexive Equivalence relation Y ✗ ✗ ✗ ✗ ✗ Y ✗ ✗ Preorder (Quasiorder) ✗ ✗ ✗ ✗ ✗ ✗ Y ✗ ✗ Partial order ✗ Y ✗ ✗ ✗ ✗ Y ✗ ✗ Total preorder ✗ ✗ Y ✗ ✗ ✗ Y ✗ ✗ Total order ✗ Y Y ✗ ✗ ✗ Y ✗ ✗ Prewellordering ✗ ✗ Y Y ✗ ✗ Y ✗ ✗ Well-quasi-ordering ✗ ✗ ✗ Y ✗ ✗ Y ✗ ✗ Well-ordering ✗ Y Y Y ✗ ✗ Y ✗ ✗ Lattice ✗ Y ✗ ✗ Y Y Y ✗ ✗ Join-semilattice ✗ Y ✗ ✗ Y ✗ Y ✗ ✗ Meet-semilattice ✗ Y ✗ ✗ ✗ Y Y ✗ ✗ Strict partial order ✗ Y ✗ ✗ ✗ ✗ ✗ Y Y Strict weak order ✗ Y ✗ ✗ ✗ ✗ ✗ Y Y Strict total order ✗ Y Y ✗ ✗ ✗ ✗ Y Y Symmetric Antisymmetric Connected Well-founded Has joins Has meets Reflexive Irreflexive Asymmetric Definitions, for all $a,b$ and $S\neq \varnothing :$ :} ${\begin{aligned}&aRb\\\Rightarrow {}&bRa\end{aligned}}$ ${\begin{aligned}aRb{\text{ and }}&bRa\\\Rightarrow a={}&b\end{aligned}}$ ${\begin{aligned}a\neq {}&b\Rightarrow \\aRb{\text{ or }}&bRa\end{aligned}}$ ${\begin{aligned}\min S\\{\text{exists}}\end{aligned}}$ ${\begin{aligned}a\vee b\\{\text{exists}}\end{aligned}}$ ${\begin{aligned}a\wedge b\\{\text{exists}}\end{aligned}}$ $aRa$ ${\text{not }}aRa$ ${\begin{aligned}aRb\Rightarrow \\{\text{not }}bRa\end{aligned}}$ Y indicates that the column's property is always true the row's term (at the very left), while ✗ indicates that the property is not guaranteed in general (it might, or might not, hold). For example, that every equivalence relation is symmetric, but not necessarily antisymmetric, is indicated by Y in the "Symmetric" column and ✗ in the "Antisymmetric" column, respectively. All definitions tacitly require the homogeneous relation $R$ be transitive: for all $a,b,c,$ if $aRb$ and $bRc$ then $aRc.$ A term's definition may require additional properties that are not listed in this table. This article is about the transitive closure of a binary relation. For the transitive closure of a set, see transitive set § Transitive closure. For example, if X is a set of airports and x R y means "there is a direct flight from airport x to airport y" (for x and y in X), then the transitive closure of R on X is the relation R+ such that x R+ y means "it is possible to fly from x to y in one or more flights". More formally, the transitive closure of a binary relation R on a set X is the smallest (w.r.t. ⊆) transitive relation R+ on X such that R ⊆ R+; see Lidl & Pilz (1998, p. 337). We have R+ = R if, and only if, R itself is transitive. Conversely, transitive reduction adduces a minimal relation S from a given relation R such that they have the same closure, that is, S+ = R+; however, many different S with this property may exist. Both transitive closure and transitive reduction are also used in the closely related area of graph theory. Transitive relations and examples A relation R on a set X is transitive if, for all x, y, z in X, whenever x R y and y R z then x R z. Examples of transitive relations include the equality relation on any set, the "less than or equal" relation on any linearly ordered set, and the relation "x was born before y" on the set of all people. 
Symbolically, this can be denoted as: if x < y and y < z then x < z. One example of a non-transitive relation is "city x can be reached via a direct flight from city y" on the set of all cities. Simply because there is a direct flight from one city to a second city, and a direct flight from the second city to the third, does not imply there is a direct flight from the first city to the third. The transitive closure of this relation is a different relation, namely "there is a sequence of direct flights that begins at city x and ends at city y". Every relation can be extended in a similar way to a transitive relation. An example of a non-transitive relation with a less meaningful transitive closure is "x is the day of the week after y". The transitive closure of this relation is "some day x comes after a day y on the calendar", which is trivially true for all days of the week x and y (and thus equivalent to the Cartesian square, which is "x and y are both days of the week"). Existence and description For any relation R, the transitive closure of R always exists. To see this, note that the intersection of any family of transitive relations is again transitive. Furthermore, there exists at least one transitive relation containing R, namely the trivial one: X × X. The transitive closure of R is then given by the intersection of all transitive relations containing R. For finite sets, we can construct the transitive closure step by step, starting from R and adding transitive edges. This gives the intuition for a general construction. For any set X, we can prove that transitive closure is given by the following expression $R^{+}=\bigcup _{i=1}^{\infty }R^{i}.$ where $R^{i}$ is the i-th power of R, defined inductively by $R^{1}=R$ and, for $i>0$, $R^{i+1}=R\circ R^{i}$ where $\circ $ denotes composition of relations. To show that the above definition of R+ is the least transitive relation containing R, we show that it contains R, that it is transitive, and that it is the smallest set with both of those characteristics. • $R\subseteq R^{+}$: $R^{+}$ contains all of the $R^{i}$, so in particular $R^{+}$ contains $R$. • $R^{+}$ is transitive: If $(s_{1},s_{2}),(s_{2},s_{3})\in R^{+}$, then $(s_{1},s_{2})\in R^{j}$ and $(s_{2},s_{3})\in R^{k}$ for some $j,k$ by definition of $R^{+}$. Since composition is associative, $R^{j+k}=R^{j}\circ R^{k}$; hence $(s_{1},s_{3})\in R^{j+k}\subseteq R^{+}$ by definition of $\circ $ and $R^{+}$. • $R^{+}$ is minimal, that is, if $T$ is any transitive relation containing $R$, then $R^{+}\subseteq T$: Given any such $T$, induction on $i$ can be used to show $R^{i}\subseteq T$ for all $i$ as follows: Base: $R^{1}=R\subseteq T$ by assumption. Step: If $R^{i}\subseteq T$ holds, and $(s_{1},s_{3})\in R^{i+1}=R\circ R^{i}$, then $(s_{1},s_{2})\in R$ and $(s_{2},s_{3})\in R^{i}$ for some $s_{2}$, by definition of $\circ $. Hence, $(s_{1},s_{2}),(s_{2},s_{3})\in T$ by assumption and by induction hypothesis. Hence $(s_{1},s_{3})\in T$ by transitivity of $T$; this completes the induction. Finally, $R^{i}\subseteq T$ for all $i$ implies $R^{+}\subseteq T$ by definition of $R^{+}$. Properties The intersection of two transitive relations is transitive. The union of two transitive relations need not be transitive. To preserve transitivity, one must take the transitive closure. This occurs, for example, when taking the union of two equivalence relations or two preorders. 
To obtain a new equivalence relation or preorder one must take the transitive closure (reflexivity and symmetry—in the case of equivalence relations—are automatic). In graph theory In computer science, the concept of transitive closure can be thought of as constructing a data structure that makes it possible to answer reachability questions. That is, can one get from node a to node d in one or more hops? A binary relation tells you only that node a is connected to node b, and that node b is connected to node c, etc. After the transitive closure is constructed, as depicted in the following figure, in an O(1) operation one may determine that node d is reachable from node a. The data structure is typically stored as a Boolean matrix, so if matrix[1][4] = true, then it is the case that node 1 can reach node 4 through one or more hops. The transitive closure of the adjacency relation of a directed acyclic graph (DAG) is the reachability relation of the DAG and a strict partial order. The transitive closure of an undirected graph produces a cluster graph, a disjoint union of cliques. Constructing the transitive closure is an equivalent formulation of the problem of finding the components of the graph.[1] In logic and computational complexity The transitive closure of a binary relation cannot, in general, be expressed in first-order logic (FO). This means that one cannot write a formula using predicate symbols R and T that will be satisfied in any model if and only if T is the transitive closure of R. In finite model theory, first-order logic (FO) extended with a transitive closure operator is usually called transitive closure logic, and abbreviated FO(TC) or just TC. TC is a sub-type of fixpoint logics. The fact that FO(TC) is strictly more expressive than FO was discovered by Ronald Fagin in 1974; the result was then rediscovered by Alfred Aho and Jeffrey Ullman in 1979, who proposed to use fixpoint logic as a database query language.[2] With more recent concepts of finite model theory, proof that FO(TC) is strictly more expressive than FO follows immediately from the fact that FO(TC) is not Gaifman-local.[3] In computational complexity theory, the complexity class NL corresponds precisely to the set of logical sentences expressible in TC. This is because the transitive closure property has a close relationship with the NL-complete problem STCON for finding directed paths in a graph. Similarly, the class L is first-order logic with the commutative, transitive closure. When transitive closure is added to second-order logic instead, we obtain PSPACE. In database query languages Since the 1980s Oracle Database has implemented a proprietary SQL extension CONNECT BY... START WITH that allows the computation of a transitive closure as part of a declarative query. The SQL 3 (1999) standard added a more general WITH RECURSIVE construct also allowing transitive closures to be computed inside the query processor; as of 2011 the latter is implemented in IBM Db2, Microsoft SQL Server, Oracle, PostgreSQL, and MySQL (v8.0+). SQLite released support for this in 2014. Datalog also implements transitive closure computations.[4] MariaDB implements Recursive Common Table Expressions, which can be used to compute transitive closures. This feature was introduced in release 10.2.2 of April 2016.[5] Algorithms Efficient algorithms for computing the transitive closure of the adjacency relation of a graph can be found in Nuutila (1995). 
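As a concrete illustration of these ideas, the following short Python sketch computes the closure of a Boolean adjacency matrix in the Warshall style related to the Floyd–Warshall algorithm mentioned below; the three-node example graph is invented purely for illustration.

```python
def transitive_closure(adj):
    """Warshall-style O(n^3) transitive closure of a Boolean adjacency matrix."""
    n = len(adj)
    reach = [row[:] for row in adj]  # Copy so the input matrix is not modified
    for k in range(n):
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    return reach

# Example: edges 0 -> 1 and 1 -> 2; the closure adds the pair 0 -> 2.
adj = [[False, True, False],
       [False, False, True],
       [False, False, False]]
print(transitive_closure(adj)[0][2])  # True
```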
Reducing the problem to multiplications of adjacency matrices achieves the least time complexity, viz. that of matrix multiplication (Munro 1971, Fischer & Meyer 1971), which is $O(n^{2.3728596})$ as of December 2020. However, this approach is not practical since both the constant factors and the memory consumption for sparse graphs are high (Nuutila 1995, pp. 22–23, sect.2.3.3). The problem can also be solved by the Floyd–Warshall algorithm in $O(n^{3})$, or by repeated breadth-first search or depth-first search starting from each node of the graph. For directed graphs, Purdom's algorithm solves the problem by first computing its condensation DAG and its transitive closure, then lifting it to the original graph. Its runtime is $O(m+\mu n)$, where $\mu $ is the number of edges between its strongly connected components.[6][7][8][9] More recent research has explored efficient ways of computing transitive closure on distributed systems based on the MapReduce paradigm.[10] See also • Ancestral relation • Deductive closure • Reflexive closure • Symmetric closure • Transitive reduction (a smallest relation having the transitive closure of R as its transitive closure) References 1. McColl, W. F.; Noshita, K. (1986), "On the number of edges in the transitive closure of a graph", Discrete Applied Mathematics, 15 (1): 67–73, doi:10.1016/0166-218X(86)90020-X, MR 0856101 2. (Libkin 2004:vii) 3. (Libkin 2004:49) 4. (Silberschatz et al. 2010:C.3.6) 5. "Recursive Common Table Expressions Overview". mariadb.com. 6. Purdom Jr., Paul (Mar 1970). "A transitive closure algorithm". BIT Numerical Mathematics. 10 (1): 76–94. doi:10.1007/BF01940892. 7. Paul W. Purdom Jr. (Jul 1968). A transitive closure algorithm (Computer Sciences Technical Report). Vol. 33. University of Wisconsin-Madison. 8. ""Purdom's algorithm" on AlgoWiki". 9. ""Transitive closure of a directed graph" on AlgoWiki". 10. (Afrati et al. 2011) • Foto N. Afrati, Vinayak Borkar, Michael Carey, Neoklis Polyzotis, Jeffrey D. Ullman, Map-Reduce Extensions and Recursive Queries, EDBT 2011, March 22–24, 2011, Uppsala, Sweden, ISBN 978-1-4503-0528-0 • Aho, A. V.; Ullman, J. D. (1979). "Universality of data retrieval languages". Proceedings of the 6th ACM SIGACT-SIGPLAN Symposium on Principles of programming languages - POPL '79. pp. 110–119. doi:10.1145/567752.567763. • Benedikt, M.; Senellart, P. (2011). "Databases". In Blum, Edward K.; Aho, Alfred V. (eds.). Computer Science. The Hardware, Software and Heart of It. pp. 169–229. doi:10.1007/978-1-4614-1168-0_10. ISBN 978-1-4614-1167-3. • Heinz-Dieter Ebbinghaus; Jörg Flum (1999). Finite Model Theory (2nd ed.). Springer. pp. 123–124, 151–161, 220–235. ISBN 978-3-540-28787-2. • Fischer, M.J.; Meyer, A.R. (Oct 1971). "Boolean matrix multiplication and transitive closure" (PDF). In Raymond E. Miller and John E. Hopcroft (ed.). Proc. 12th Ann. Symp. on Switching and Automata Theory (SWAT). IEEE Computer Society. pp. 129–131. doi:10.1109/SWAT.1971.4. • Erich Grädel; Phokion G. Kolaitis; Leonid Libkin; Maarten Marx; Joel Spencer; Moshe Y. Vardi; Yde Venema; Scott Weinstein (2007). Finite Model Theory and Its Applications. Springer. pp. 151–152. ISBN 978-3-540-68804-4. • Keller, U., 2004, Some Remarks on the Definability of Transitive Closure in First-order Logic and Datalog (unpublished manuscript)* Libkin, Leonid (2004), Elements of Finite Model Theory, Springer, ISBN 978-3-540-21202-7 • Lidl, R.; Pilz, G. 
(1998), Applied abstract algebra, Undergraduate Texts in Mathematics (2nd ed.), Springer, ISBN 0-387-98290-6 • Munro, Ian (Jan 1971). "Efficient determination of the transitive closure of a directed graph". Information Processing Letters. 1 (2): 56–58. doi:10.1016/0020-0190(71)90006-8. • Nuutila, Esko (1995). Efficient transitive closure computation in large digraphs. Finnish Academy of Technology. ISBN 951-666-451-2. OCLC 912471702. • Abraham Silberschatz; Henry Korth; S. Sudarshan (2010). Database System Concepts (6th ed.). McGraw-Hill. ISBN 978-0-07-352332-3. Appendix C (online only) External links • "Transitive closure and reduction", The Stony Brook Algorithm Repository, Steven Skiena. Order theory • Topics • Glossary • Category Key concepts • Binary relation • Boolean algebra • Cyclic order • Lattice • Partial order • Preorder • Total order • Weak ordering Results • Boolean prime ideal theorem • Cantor–Bernstein theorem • Cantor's isomorphism theorem • Dilworth's theorem • Dushnik–Miller theorem • Hausdorff maximal principle • Knaster–Tarski theorem • Kruskal's tree theorem • Laver's theorem • Mirsky's theorem • Szpilrajn extension theorem • Zorn's lemma Properties & Types (list) • Antisymmetric • Asymmetric • Boolean algebra • topics • Completeness • Connected • Covering • Dense • Directed • (Partial) Equivalence • Foundational • Heyting algebra • Homogeneous • Idempotent • Lattice • Bounded • Complemented • Complete • Distributive • Join and meet • Reflexive • Partial order • Chain-complete • Graded • Eulerian • Strict • Prefix order • Preorder • Total • Semilattice • Semiorder • Symmetric • Total • Tolerance • Transitive • Well-founded • Well-quasi-ordering (Better) • (Pre) Well-order Constructions • Composition • Converse/Transpose • Lexicographic order • Linear extension • Product order • Reflexive closure • Series-parallel partial order • Star product • Symmetric closure • Transitive closure Topology & Orders • Alexandrov topology & Specialization preorder • Ordered topological vector space • Normal cone • Order topology • Order topology • Topological vector lattice • Banach • Fréchet • Locally convex • Normed Related • Antichain • Cofinal • Cofinality • Comparability • Graph • Duality • Filter • Hasse diagram • Ideal • Net • Subnet • Order morphism • Embedding • Isomorphism • Order type • Ordered field • Ordered vector space • Partially ordered • Positive cone • Riesz space • Upper set • Young's lattice
Wikipedia
\begin{document} \newcommand{\spacing}[1]{\renewcommand{\baselinestretch}{#1}\large\normalsize} \spacing{1.14} \title[Two-step homogeneous geodesics]{Two-step homogeneous geodesics in some homogeneous Finsler manifolds} \author {M. Hosseini} \address{Masoumeh Hosseini\\ Department of Pure Mathematics \\ Faculty of Mathematics and Statistics\\ University of Isfahan\\ Isfahan\\ 81746-73441-Iran.} \email{hoseini\[email protected]} \author {H. R. Salimi Moghaddam} \address{Hamid Reza Salimi Moghaddam\\ Department of Pure Mathematics \\ Faculty of Mathematics and Statistics\\ University of Isfahan\\ Isfahan\\ 81746-73441-Iran.\\ Scopus Author ID: 26534920800 \\ ORCID Id:0000-0001-6112-4259\\} \email{[email protected] and [email protected]} \keywords{homogeneous space, invariant Riemannain metric, invariant Finsler metric, homogeneous geodesic, $(\alpha,\beta)$-metric.\\ AMS 2020 Mathematics Subject Classification: 53C30, 53C60, 53C25, 22E60.} \begin{abstract} A natural generalization of a homogeneous geodesic on homogeneous Riemannian spaces $G/H$, which is called a two-step homogeneous geodesic, is a geodesic of the form $\gamma(t)=\pi(\exp(tx)\exp(ty))$, where $x,y$ belongs to the Lie algebra of $G$. In this paper, we extend this concept to homogeneous Finsler spaces. We give some sufficient conditions for $(\alpha,\beta)$-spaces and cubic spaces to admit a one-parameter family of invariant Finsler metrics to be two-step Finsler geodesic orbit space. Also we give some examples of these spaces. \end{abstract} \maketitle \section{\textbf{Introduction}} A connected Riemannian manifold $(M,h)$ such that its largest connected group of isometries $G$ acts transitively on $M$ is called a homogeneous Riemannian manifold. Let $o$ be an arbitrary point of $M$ and $H$ be the isotropy group at $o$. Then, $M$ is diffeomorphic to the homogeneous space $G/H$. In this case, there exists an $Ad(H)$-invariant decomposition $\frak{g}=\frak{m}\oplus\frak{h}$, where $\frak{g}$ and $\frak{h}$ are the Lie algebras of $G$ and $H$, respectively, and $\frak{m}$ is a linear subspace of $\frak{g}$. Although, in general, such a decomposition is not unique but, using the natural projection $\pi:G\longrightarrow G/H$, the tangent space $T_oM$ can be identified with the subspace $\frak{m}$. Let $\gamma(t)$ be a geodesic through the origin $o$ of $M=G/H$. It is called a homogeneous geodesic if there exists a nonzero vector $x\in\frak{g}$ such that \begin{equation*} \gamma(t)=\pi (\exp(tx)), \ \ \ \ t\in\Bbb{R}. \end{equation*} In this case the nonzero vector $x$ is called a geodesic vector. In fact there exists a one-to-one correspondence between the set of geodesic vectors and the set of homogeneous geodesics through the origin $o$ (see \cite{Berestovskii-Nikonorov}, \cite{Kobayashi-Nomizu} and \cite{Kowalski-Szenthe}). Kowalski and Vanhecke showed that a vector $0\neq x\in\frak{g}$ is a geodesic vector if and only if, for any $z\in\frak{g}$, \begin{equation*} \langle x_\frak{m},[x,z]_\frak{m}\rangle=0, \end{equation*} where $\langle,\rangle$ is the inner product induced on $\frak{g}$ by the Riemannian metric $h$, and the subscript $\frak{m}$ denotes the projection into $\frak{m}$ with respect to the decomposition $\frak{g}=\frak{m}\oplus\frak{h}$ (see proposition 2.1 of \cite{Kowalski-Vanhecke}). In \cite{Kowalski-Szenthe}, Kowalski and Szenthe showed that every homogeneous Riemannian manifold admits a homogeneous geodesic through any point $o\in M$. 
This result generalized to the case of pseudo-Riemannian manifolds by Dusek in \cite{Dusek1}. Yan and Deng generalized the result to Randers metrics \cite{Yan-Deng1}. Dusek proved the same result for odd-dimensional Finsler metrics \cite{Dusek2, Dusek3}. Yan and Huang proved the result in general regular Finsler spaces \cite{Yan-Huang}. The result generalized to a special type of non-regular Finsler metrics (Kropina metric) by the authors (see \cite{Hosseini-Salimi Moghaddam}). In \cite{Arvanitoyeorgos-Panagiotis Souris}, Arvanitoyeorgos and Panagiotis Souris studied a generalization of the concept of homogeneous geodesic, of the form \begin{equation}\label{two-step equation} \gamma(t)=\pi(\exp(tx)\exp(ty)), \ \ \ \ x, y \in\frak{g}, \end{equation} which is called two-step homogeneous geodesic. They gave sufficient conditions on homogeneous Riemannian manifolds to admit two-step homogeneous geodesics. Also they studied two-step g.o. spaces. \\ The main purpose of this paper is to study of two-step homogenous geodesics in the case of Finsler spaces. In section 2 we give some preliminaries of Finsler geometry. A short review of the concepts of naturally reductive Finsler spaces and geodesic vectors in the case of $(\alpha,\beta)$-metrics will be presented in section 3 . We will give the main results in section 4. The sufficient conditions for $(\alpha,\beta)$-spaces and cubic spaces to be two-step Finsler g.o. space will be given. Also, we give some examples of these spaces. \section{\textbf{Preliminaries}} Let $M$ be a smooth manifold equipped with a continuous function $F:TM\longrightarrow [0,+\infty)$ satisfying the conditions \begin{enumerate} \item $F$ is a differentiable function on $TM\setminus\{0\}$, \item for any $x\in M, y\in T_xM$ and $\lambda\geq0$, $F(x,\lambda y)=\lambda F(x,y)$, \item for any $(x,y)\in TM\setminus\{0\}$, the matrix $(g_{ij}(x,y))=\big{(}\frac{1}{2}\frac{\partial^2F^2(x,y)}{\partial y^i\partial y^j}\big{)}$, which is called the hessian matrix, be positive definite, where $(x^1,\cdots,x^n)$ is a local coordinate system on an open subset $U$ of $M$, and $(x^1,\cdots,x^n;y^1,\cdots,y^n)$ is the natural coordinate system on $TU$. \end{enumerate} Then the function $F$ and the pair $(M,F)$ are called a Finsler metric and a Finsler manifold, respectively.\\ A rich family of Finsler metrics is the family of $(\alpha,\beta)$-metrics which are defined by a Riemannian metric together with a one-form. Suppose that $h$ is a Riemannian metric on a manifold $M$ and for any $(x,y)\in TM$ let $\alpha(x,y)=\sqrt{h(y,y)}$. Suppose that $\beta$ is a one-form on $M$ and $\phi:(-b_0,b_0)\longrightarrow\Bbb{R}^+$ is a $C^\infty$ function satisfying the condition \begin{equation}\label{alpha-beta condition} \phi(s)-s\phi'(s)+(b^2-s^2)\phi''(s)>0, \ \ \ \ |s|\leq b<b_0, \end{equation} and $\|\beta\|_\alpha<b_0$. Then the function $F=\alpha\phi(\beta/\alpha)$ defines a Finsler metric on $M$. In the above definition easily we can replace the one-form $\beta$ with a vector field $X$ such that $\beta(x,y)=\langle X (x),y\rangle$. 
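For example, the simplest admissible choice is $\phi(s)=1+s$: in this case $\phi(s)-s\phi'(s)+(b^2-s^2)\phi''(s)=1>0$, so condition \eqref{alpha-beta condition} holds, and the resulting metric $F=\alpha\phi(\beta/\alpha)=\alpha+\beta$ is the classical Randers metric (here $b_0=1$ and one requires $\|\beta\|_\alpha<1$ so that $F$ is positive).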
\\ For any Finsler manifold $(M,F)$, one defines the fundamental tensor $g$ and the Cartan tensor $C$ on the pull-back tangent bundle $\pi^\ast TM$ over $TM\setminus\{0\}$, as follows, \begin{eqnarray*} g_y(u,v) &=& g_{ij}(x,y)u^iv^j, \\ C_y(u,v,w) &=& C_{ijk}(x,y)u^iv^jw^k, \end{eqnarray*} where $g_{ij}(x,y)=(\frac{1}{2}F^2)_{y^iy^j}$ and $C_{ijk}(x,y)=(\frac{1}{4}F^2)_{y^iy^jy^k}$.\\ Let $(g^{ij})$ be the inverse matrix of $(g_{ij})$, then for any $y=y^i\frac{\partial}{\partial x^i}\in T_xM\setminus\{0\}$ we define the following quantities \begin{eqnarray*} \gamma^i_{jk} &=& \frac{1}{2}g^{is}\big{(}\frac{\partial g_{sj}}{\partial x^k}-\frac{\partial g_{jk}}{\partial x^s}+\frac{\partial g_{ks}}{\partial x^j}\big{)} \\ N^i_j &=& \gamma^i_{jk}y^k-C^i_{jk}\gamma^k_{rs}y^ry^s, \end{eqnarray*} where $C^i_{jk}=g^{is}C_{sjk}$.\\ The Chern connection is a unique linear connection on the pull-back tangent bundle $\pi^\ast TM$ such that its coefficients on the standard coordinate system are defined by \begin{equation*} \Gamma^i_{jk}=\gamma^i_{jk}-g^{il}\big{(}C_{ljs}N^s_k-C_{jks}N^s_l+C_{kls}N^s_j\big{)}. \end{equation*} Suppose that $V$ and $W$ are two vector fields defined along a smooth curve $\sigma:[0,r]\longrightarrow M$. If $T=T(t)=\dot{\sigma}(t)$ denotes the velocity field of the curve $\sigma$ then we define $D_TV$ with reference vector $W$ by the following equation, \begin{equation*} D_TV=\big{(}\frac{dV^i}{dt}+V^jT^k(\Gamma^i_{jk})_{(\sigma,W)}\big{)}\frac{\partial}{\partial x^i}|_{\sigma(t)}. \end{equation*} The curve $\sigma$ is called a geodesic (Finslerian geodesic) if $D_T\big{(}\frac{T}{F(T)}\big{)}=0$. We recall that a Finsler metric $F$ is called of Berwald type (or Berwaldian) if its Chern connection coefficients $\Gamma^i_{jk}$, in standard coordinate system, are functions of $x$ only. Also it is called of Douglas type if it is projectively equivalent to a Riemannian metric on $M$. \section{\textbf{Naturally reductive Finsler spaces and geodesic vectors}} In this short section we study the concepts of geodesic vector and naturally reductive spaces for homogeneous Finsler spaces, where the Finsler metric is an $(\alpha,\beta)$-metric. More precisely, let $(M=G/H,F)$ be a homogeneous Finsler space, where $F$ is an $(\alpha,\beta)$-metric which is defined by an invariant Riemannian metric $h$ and a vector field $X$. In \cite{Salimi-Parhizkar}, the second author and Parhizkar showed that, under some conditions, any geodesic vector of $(M,F)$ is a geodesic vector of $(M,h)$ and vice versa. In the following proposition we show that one of the conditions is not necessary. \begin{prop} \label{salimi-parhizkar proposition} Let $(M=G/H,F)$ be a homogeneous Finsler space with reductive decomposition $\mathfrak{g}=\mathfrak{h} \oplus \mathfrak{m}$, where $F$ is an invariant $(\alpha,\beta)$-metric defined by an invariant Riemannian metric $h$ and an invariant vector field $X$. Suppose that $0\neq y \in \mathfrak{m}$, and $X$ is orthogonal to $[y,\mathfrak{m}]_{\mathfrak{m}}$. Then, $y$ is a geodesic vector of $(M,F)$ if and only if it is a geodesic vector of $(M,h)$. \end{prop} \begin{proof} It suffices to consider the proof of Theorem 2.3 given in \cite{Salimi-Parhizkar}. 
There it is proved that \begin{equation}\label{g_y_m} g_{y_{\mathfrak{m}}}(y_{\mathfrak{m}},[y,z]_{\mathfrak{m}})=h(y_{\mathfrak{m}},[y,z]_{\mathfrak{m}})\left( \phi ^2(r_{\mathfrak{m}}) -\phi(r_{\mathfrak{m}})\phi '(r_{\mathfrak{m}})r_{\mathfrak{m}}\right), \end{equation} where $r_{\mathfrak{m}}=\frac{h(X,y_{\mathfrak{m}})}{\sqrt{h(y_{\mathfrak{m}},y_{\mathfrak{m}})}}$. Suppose that $f(s)=\phi(s)-s\phi'(s)$, where $\phi$ is the function which is used in the definition of the $(\alpha,\beta)$-metric $F$. Elementary calculus together with the condition \eqref{alpha-beta condition} shows that $f$ is a positive function. Now the positivity of the function $\phi$ shows that $\phi ^2(r_{\mathfrak{m}}) -\phi(r_{\mathfrak{m}})\phi '(r_{\mathfrak{m}})r_{\mathfrak{m}}>0$, which completes the proof. \end{proof} The above proposition shows that Theorem 2.3 of \cite{Salimi-Parhizkar} is true without the condition $\phi''(r_{\mathfrak{m}})\leq 0$.\\ We can express a similar result for invariant $(\alpha,\beta)$-metrics of Douglas type as follows. \begin{prop} \label{Douglas type} Let $(M=G/H,F)$ be a homogeneous Finsler space, where $F$ is an invariant $(\alpha,\beta)$-metric of Douglas type. Then there exists an invariant Riemannian metric $h$ on $M$ such that $(M,F)$ and $(M,h)$ have the same geodesic vectors. \end{prop} \begin{proof} By Theorem 1.1 in \cite{Liu-Deng}, any $(\alpha,\beta)$-metric of Douglas type is either of Berwald type or of Randers type. If $F$ is of Berwald type then the Finsler metric $F$ and the Riemannian metric corresponding to $\alpha$ have the same geodesics. If $F$ is a Douglas metric of Randers type, then there exist an invariant Riemannian metric $h$ and an invariant vector field $X$ orthogonal to $[\mathfrak{m},\mathfrak{m}]_{\mathfrak{m}}$ such that $F(x,y)=\sqrt{h(y,y)}+h(X,y)$. Now, the previous proposition completes the proof. \end{proof} \begin{remark} So, in the cases of the above propositions, all results about geodesic vectors in the Riemannian case extend to the Finsler case automatically. \end{remark} Now we study the concept of a naturally reductive Finsler space. Let us start with the definition of a naturally reductive Riemannian space. \begin{definition} A homogeneous Riemannian manifold $(M=G/H,h)$ is called naturally reductive if there exists an $Ad(H)$-invariant decomposition $\mathfrak{g}=\mathfrak{h}\oplus\mathfrak{m}$ such that $$\langle[x,y]_{\mathfrak{m}},z\rangle+\langle y,[x,z]_{\mathfrak{m}}\rangle=0\qquad \forall \, x,y,z\in \mathfrak{m},$$ where $\langle,\rangle$ indicates the inner product on $\mathfrak{m}$ induced by $h$, and $[,]_{\mathfrak{m}}$ is the projection to $\mathfrak{m}$ with respect to the above decomposition. \end{definition} If all geodesics of a homogeneous Riemannian manifold, with respect to the largest connected group of isometries, are homogeneous geodesics then the homogeneous Riemannian manifold is called a geodesic orbit space (g.o. space). It is shown that any naturally reductive homogeneous Riemannian space is a g.o. space (see \cite{Kobayashi-Nomizu}), but there exist g.o. spaces which are in no way naturally reductive (see \cite{Kaplan}).\\ In \cite{Deng- Hou}, the above definition of naturally reductive homogeneous Riemannian spaces was generalized to Finsler spaces as follows. \begin{definition} \label{definition Deng and Hou} Let $G/H$ be a homogeneous manifold equipped with an invariant Finsler metric $F$. 
It is called a naturally reductive Finsler space if there exists an invariant Riemannian metric $h$ on $G/H$ such that $(G/H,h)$ is naturally reductive, in the sense of Riemannian manifolds, and the Levi-Civita connection of $h$ and the Chern connection of $F$ coincide. \end{definition} In this definition the Finsler metric is of Berwald type. \\ \begin{remark} \label{Naturally reductive (alpha ,beta )-space} If $(M=G/H,F)$ is a naturally reductive Finsler space such that $F$ is an invariant $(\alpha,\beta)$-metric defined by a Riemannian metric $h$ and a vector field $X$, then the homogeneous Riemannian space $(M,h)$ is naturally reductive. If $(M=G/H,F)$ is naturally reductive then, there exists a Riemannian metric $\tilde{h}$ such that $(M,\tilde{h})$ is a naturally reductive homogeneous Riemannian space, and the Levi-Civita connection of $\tilde{h}$ and the Chern connection of $F$ coincide. On the other hand, $(M,F)$ is of Berwald type so the Levi-Civita connection of $h$ and the Chern connection of $F$ also coincide. In fact, $(M,\tilde{h})$ and $(M,h)$ have the same Levi-Civita connection. It is well known that a Levi-Civita connection determines the Riemannian metric up to a constant conformal factor (see \cite{Schmidt}). Therefore, there exists a positive real number $\mu$ such that $h=\mu \tilde{h}$ and so $(M,h)$ is also naturally reductive. \end{remark} \section{\textbf{Two-step homogeneous g.o. Finsler spaces}} As we have mentioned in the first section, the concept of two-step homogeneous geodesic was defined by Arvanitoyeorgos and Panagiotis Souris in \cite{Arvanitoyeorgos-Panagiotis Souris}. In this section we investigate this notion in the case of homogeneous Finsler spaces. Let $(G/H,F)$ be a homogeneous Finsler manifold and $\pi:G\longrightarrow G/H$ denotes the projection map. Let $o=\pi(e)$ be the origin of $G/H$. \begin{definition} A geodesic $\gamma $ on $G/H$ is said to be two-step homogeneous if there exist $x,y\in\mathfrak{g}$ such that $$\gamma (t)= \pi (\exp tx \exp ty) \qquad \forall \, t\in \mathbb{R}.$$ \end{definition} \begin{definition} Let $(G/H,F)$ be a homogeneous Finsler space. It is called a two-step homogeneous g.o. space (two-step homogeneous geodesic orbit space) if all geodesics $\gamma $ with $\gamma(0) = o$, are two-step homogeneous. \end{definition} In the following the existence of two-step homogeneous g.o. Finsler spaces is investigated and some examples of such spaces are given. \begin{theorem} \label{main theorem} Suppose that $(M=G/H,F)$ is a naturally reductive Finsler space and $F$ is an invariant $(\alpha,\beta)$-metric arisen from an invariant Riemannian metric $h$ and an invariant vector field $X$. Let $\langle,\rangle$ be the corresponding inner product on $\mathfrak{m}=T_o(G/H)$ defined by $h$. If $\mathfrak{m}=\mathfrak{m}_1\oplus \mathfrak{m}_2$ is an $Ad(H)$-invariant orthogonal decomposition of $\mathfrak{m}$ and $[\mathfrak{m}_1,\mathfrak{m}_2]\subseteq \mathfrak{m}_1$, and $X$ belongs to $\mathfrak{m}_2$, then $M$ admits a one-parameter family of invariant Finsler metrics $F_{\lambda }$, $\lambda \in \mathbb{R}^+$, such that $(M,F_{\lambda })$ is a two-step Finsler g.o. space. Each metric $F_{\lambda }$ is of the following form: \begin{itemize} \item $F_{\lambda }=\alpha _{\lambda }\phi (\dfrac{\beta}{\alpha _{\lambda }})$, where $0<\lambda <1$, \item $F_{\lambda }=\alpha _{\lambda }\phi (\dfrac{\beta _{\lambda}}{\alpha _{\lambda }})$, where $\lambda >1$. 
\end{itemize} Here $\alpha _{\lambda }$ is the norm of the Riemannian metric on $M$ which corresponds to the inner product $\langle ,\rangle _{\lambda} =\langle ,\rangle \vert _{\mathfrak{m}_1}\oplus \lambda \langle , \rangle\vert _{\mathfrak{m}_2}$ and $\beta _{\lambda}$ is the one-form that corresponds to the vector field $X_{\lambda}=\dfrac{1}{\sqrt{\lambda}} X$. \end{theorem} \begin{proof} The homogeneous Finsler space $(M,F)$ is a naturally reductive Finsler space, so Remark \ref{Naturally reductive (alpha ,beta )-space} implies that the homogeneous Riemannian space $(M,h)$ is naturally reductive. By Corollary 2.4 in \cite{Arvanitoyeorgos-Panagiotis Souris}, $(M,h_{\lambda})$ is a two-step g.o. space. Here $h_{\lambda }$ is the Riemannian metric corresponding to the inner product $\langle ,\rangle _{\lambda }$ on $T_o(G/H)$. Now, we consider $F_{\lambda}$ as we mentioned above. In each case we prove that the Finsler metric $F_{\lambda}$ is of Berwald type. If $y=y_1+y_2$ and $z=z_1+z_2$, where $y_1,z_1\in \mathfrak{m}_1$ and $ y_2,z_2\in \mathfrak{m}_2$, then $$\langle y ,z \rangle _{\lambda} =\langle y_1 ,z_1\rangle + \lambda \langle y_2, z_2\rangle =(1-\lambda ) \langle y_1 ,z_1\rangle + \lambda \langle y, z\rangle =\langle y ,z\rangle + (\lambda -1) \langle y_2, z_2\rangle.$$ On the other hand, $F$ is an $(\alpha ,\beta)$-metric of Berwald type. According to Proposition 3.1 in \cite{Bahmandoust-Latifi} we have $$\langle [y,X]_{\mathfrak{m} },z\rangle +\langle [z,X]_{\mathfrak{m} },y\rangle=0, \qquad \langle [y,z]_{\mathfrak{m} },X\rangle=0 \quad \forall y,z\in \mathfrak{m}.$$ If $X\in \mathfrak{m}_2$ then $$\langle [y ,z]_{\mathfrak{m} },X \rangle _{\lambda}=(1-\lambda )\langle [y ,z]_{\mathfrak{m}},0\rangle + \lambda \langle [y ,z]_{\mathfrak{m}}, X\rangle=0,$$ and \begin{align*} \langle [y,X]_{\mathfrak{m} },z\rangle _{\lambda} +\langle [z,X]_{\mathfrak{m} },y\rangle _{\lambda} &=\langle [y,X]_{\mathfrak{m} },z\rangle +(\lambda -1)\langle [y,X]_{\mathfrak{m} },z_2\rangle \\ &+\langle [z,X]_{\mathfrak{m} },y\rangle +(\lambda -1)\langle [z,X]_{\mathfrak{m} },y_2\rangle \\ &=(\lambda -1) (\langle [y_1,X]_{\mathfrak{m} },z_2\rangle +\langle [y_2,X]_{\mathfrak{m} },z_2\rangle \\ &+\langle [z_1,X]_{\mathfrak{m} },y_2\rangle +\langle [z_2,X]_{\mathfrak{m} },y_2\rangle )\\ &=(\lambda -1) (\langle [y_1,X]_{\mathfrak{m} },z_2\rangle + \langle [z_1,X]_{\mathfrak{m} },y_2\rangle). \end{align*} Since $[\mathfrak{m}_1,\mathfrak{m}_2]\subseteq \mathfrak{m}_1$, the above relation equals to zero. Also, $$\langle X,X\rangle _{\lambda} =\lambda \langle X,X\rangle <\lambda b_0.$$ Therefore, in each case, $F_{\lambda }$ is an $(\alpha ,\beta )$-metric of Berwald type and so $(M,F_{\lambda})$ and $(M,h_{\lambda})$ have the same geodesics. \end{proof} We next express similar result for homogeneous cubic metric spaces. $F$ is called an m-th root Finsler metric if $F=\sqrt[m]{T}$ and $T=h_{i_1\cdots i_m}(x)y^{i_1}\cdots y^{i_m}$, where $h_{i_1\cdots i_m}$ in all its indices is symmetric. The third root metrics are called the cubic metrics. \begin{lem} \label{cubic metric} Let $(M=G/H,F=\sqrt [3]{T})$ be a cubic space and $T=h.b$, where $h=h_{ij}y^iy^j$ is an invariant Riemannian metric and $b=b_i(x)y^i$ is an invariant one-form such that $\Vert b\Vert ^2=h^{ij}b_ib_j=1$. Let $X$ be the invariant vector field corresponding to $b$. 
$F$ is of Berwald type if and only if $$\langle [y,X]_{\mathfrak{m} },z\rangle +\langle [z,X]_{\mathfrak{m} },y\rangle=0, \qquad \langle [y,z]_{\mathfrak{m} },X\rangle=0 \quad \forall y,z\in \mathfrak{m}.$$ \end{lem} \begin{proof} Theorem 9 in \cite{Brinzei} implies that $F$ is of Berwald type if and only if $b$ is parallel with respect to $h$. It suffices to use the same argument as in the proof of Theorem 3.1 of \cite{An-Deng Monatsh}. \end{proof} \begin{lem} \label{tow connection with the same geodesics} Suppose that $(M,h)$ and $(M,\tilde{h})$ are two Riemannian manifolds with the same geodesics. Then, the Levi-Civita connections of $(M,h)$ and $(M,\tilde{h})$ coincide. \end{lem} \begin{proof} Let $\nabla $ and $\tilde{\nabla}$ be the Levi-Civita connections of $(M,h)$ and $(M,\tilde{h})$, respectively. $\nabla $ and $\tilde{\nabla}$ are symmetric. Therefore, Proposition 4.10 in \cite{Schmidt} shows that $\nabla =\tilde{\nabla}$. \end{proof} \begin{theorem} \label{two-step cubic space} Let $(M=G/H,F=\sqrt [3]{T})$ be a naturally reductive cubic space and $T=h.b$, where $h=h_{ij}y^iy^j$ is an invariant Riemannian metric and $b=b_i(x)y^i$ is an invariant one-form such that $\Vert b\Vert ^2=h^{ij}b_ib_j=1$. Let $X$ be the corresponding vector field to $b$ and $\langle ,\rangle $ be the corresponding inner product on $\mathfrak{m}=T_o(G/H)$ with respect to $h$. If $\mathfrak{m}=\mathfrak{m}_1\oplus \mathfrak{m}_2$ is an $Ad(H)$-invariant orthogonal decomposition of $\mathfrak{m}$, and $[\mathfrak{m}_1,\mathfrak{m}_2]\subseteq \mathfrak{m}_1$ and $X$ belongs to $\mathfrak{m}_2$, then $M$ admits a one-parameter family of invariant Finsler metrics $F_{\lambda }=\sqrt [3]{T_{\lambda }}$, $\lambda \in \mathbb{R}^+$ such that $(M,F_{\lambda })$ is a two-step Finsler g.o. space, where $T_{\lambda }=h_{\lambda }.b_{\lambda}$, $h_{\lambda }$ is the Riemannian metric on $M$ which corresponds to the inner product $$\langle ,\rangle _{\lambda} =\langle ,\rangle \vert _{\mathfrak{m}_1}\oplus \lambda \langle , \rangle\vert _{\mathfrak{m}_2},$$ and $b_{\lambda }$ is a one-form corresponding to the vector field $X_{\lambda}=\dfrac{1}{\sqrt{\langle X ,X\rangle _{\lambda}}} X$. \end{theorem} \begin{proof} Clearly, any naturally reductive Finsler space is of Berwald type. So, $F$ is a Berwaldian cubic metric and based on remark 10 in \cite{Brinzei}, the geodesics of $(M,F)$ and the geodesics of $(M,h)$ coincide. On the other hand, $F$ is naturally reductive, this means that there exists a naturally reductive Riemannian metric $\tilde{h}$ such that the Chern connection of $F$ and the Levi-Civita connection of $\tilde{h}$ coincide. Hence, Lemma \ref{tow connection with the same geodesics} implies that $h$ and $\tilde{h}$ have the same connection. A similar argument as in the Remark \ref{Naturally reductive (alpha ,beta )-space} shows that $(M,h)$ is naturally reductive. By Corollary 2.4 in \cite{Arvanitoyeorgos-Panagiotis Souris}, $(M,h_{\lambda})$ is a two-step g.o. space. Since $F$ is of Berwald type, Lemma \ref{cubic metric} and the proof of Theorem \ref{main theorem} implies that $F_{\lambda}$ is a Berwald metric. Hence, Theorem 9 in \cite{Brinzei} concludes the proof. \end{proof} \begin{example} Let $(G,h)$ be a Lie group with a bi-invariant Riemannian metric. Let $K$ be a connected subgroup of $G$ and let $\mathfrak{g}$, $\mathfrak{k}$ be the Lie algebras of $G$ and $K$, respectively. 
The bi-invariant metric $h$ induces an $Ad$-invariant positive definite inner product $\langle ,\rangle$ on $\mathfrak{g}$ such that $\mathfrak{g} = \mathfrak{k} \oplus \mathfrak{m}$ is an orthogonal decomposition. Then, this decomposition is $Ad(K)$-invariant. In fact, for any $x_{\mathfrak{k}}, y_{\mathfrak{k}}\in \mathfrak{k}$ and $x_{ \mathfrak{m}}\in \mathfrak{m}$, we have $$\langle [x_{\mathfrak{k}},x_{\mathfrak{m}}], y_{\mathfrak{k}}\rangle = -\langle x _{\mathfrak{m}}, [x_{\mathfrak{k}}, y_{\mathfrak{k}}]\rangle = -\langle x_{\mathfrak{m}}, z_{\mathfrak{k}}\rangle = 0.$$ It follows that $ [x_{\mathfrak{k}},x_{\mathfrak{m}}]\in \mathfrak{m}$, and since $K$ is connected, then $Ad(K)\mathfrak{m} \subseteq \mathfrak{m}$. Let $F$ be any $(\alpha,\beta)$-metric on $G$ defined by $h$ and a left invariant vector field $X \in \mathfrak{k}\cap Z(\mathfrak{g})$. The Riemannian metric $h$ and the vector field $X$ are right invariant. Therefore, the Finsler metric $F$ is also bi-invariant. Theorem 5 in \cite{Latifi-Toomanian} shows that $(G,F)$ is a naturally reductive Finsler space. Now, we can apply Theorem \ref{main theorem} and consider $\langle , \rangle _{\lambda}$ as follows: $$\langle ,\rangle _{\lambda} =\langle ,\rangle \vert _{\mathfrak{m}}\oplus \lambda \langle , \rangle\vert _{\mathfrak{k}}.$$ Therefore, $(G,F_{\lambda})$ is a two-step homogeneous g.o. space. \end{example} \begin{example} Let $G$ be a Lie group equipped with a bi-invariant Riemannian metric $h$ and $\mathfrak{g}$ denotes its Lie algebra. Then $\mathfrak{g}= Z(\mathfrak{g})\oplus\mathfrak{g}'$ is an orthogonal decomposition of $\mathfrak{g}$ (see \cite{Alexandrino-Bettiol}). We consider an $(\alpha,\beta)$-metric $F$ induced by $h$ and a vector field $X \in Z(\mathfrak{g})$ on $G$. The Riemannian metric $h$ and the vector field $X$ are bi-invariant and so the $(\alpha,\beta)$-metric $F$ is also bi-invariant. Based on the Theorem 5 in \cite{Latifi-Toomanian}, $F$ is a naturally reductive Finsler metric. Suppose that $h_{\lambda}$ is a Riemannian metric on $G$ which corresponds to the inner product $\langle ,\rangle _{\lambda} =\langle ,\rangle \vert _{\mathfrak{g}'}\oplus \lambda \langle , \rangle\vert _{Z(\mathfrak{g})}$ on $\mathfrak{g}$. Then, Theorem \ref{main theorem} shows that $G$ with $(\alpha ,\beta )$-metric $F_{\lambda }$ induced by the Riemannian metric $h_{\lambda}$ and the vector $X_{\lambda} \in Z(\mathfrak{g})$, is a Berwald space. So $F_{\lambda }$ and $h_{\lambda}$ have the same geodesics. Now, Theorem 2.3 of \cite{Arvanitoyeorgos-Panagiotis Souris} shows that $(G,F_{\lambda })$ is a two-step g.o. space. \end{example} The next theorem follow directly from Proposition 5.1 in \cite{Arvanitoyeorgos-Panagiotis Souris} and Theorem \ref{main theorem}. \begin{theorem} Let $G$ be a Lie group equipped with a bi-invariant Riemannian metric $h$ and let $H\subset K$ be connected closed subgroups of $G$. Let $\langle,\rangle$ be the $Ad$-invariant inner product on the Lie algebra $\mathfrak{g}$ corresponding to the bi-invariant Riemannian metric $h$. We consider $T_o(G/H)=\mathfrak{m}$ , $T_o(G/K)=\mathfrak{m}_1$ and $T_o(K/H)=\mathfrak{m}_2$ which $\mathfrak{m}$, $\mathfrak{m}_1$ and $\mathfrak{m}_2$ are subspaces of $\mathfrak{g}$ and $\mathfrak{m}=\mathfrak{m}_1\oplus \mathfrak{m}_2$. 
Let $F_{\lambda }$ be the $G$-invariant $(\alpha,\beta)$-metric on $G/H$ corresponding to the $Ad(H)$-invariant positive definite inner product $$\langle , \rangle _{\lambda}=\langle , \rangle \vert _{\mathfrak{m}_1} +\lambda \langle , \rangle \vert _{\mathfrak{m}_2} \qquad \lambda >0$$ on $\mathfrak{m} $ and a vector field $X\in \mathfrak{m}_2$ which is orthogonal to $[\mathfrak{m},\mathfrak{m}]_{\mathfrak{m}} $ with respect to $\langle ,\rangle$. Then, $(G/H ,F_{\lambda })$ is a two-step g.o. space. \end{theorem} \begin{proof} Let $\mathfrak{g}$, $\mathfrak{k}$ and $\mathfrak{h}$ denote the Lie algebras of the Lie groups $G$, $K$ and $H$, respectively. Since the inner product $\langle,\rangle$ is $Ad$-invariant, we can find the subspaces $\mathfrak{m} _1$ and $\mathfrak{m} _2$ such that $$\mathfrak{g}=\mathfrak{k}\oplus \mathfrak{m}_1\qquad \textit{and} \qquad \mathfrak{k}=\mathfrak{h} \oplus \mathfrak{m} _2$$ are orthogonal decompositions with respect to $\langle,\rangle$. Therefore, $\mathfrak{m}=\mathfrak{m}_1\oplus \mathfrak{m}_2$ is an orthogonal decomposition with respect to $\langle , \rangle\vert _{\mathfrak{m}}$ such that $[\mathfrak{m}_1,\mathfrak{m}_2]\subseteq \mathfrak{m}_1$. Let $h_{\lambda}$ be the Riemannian metric on $G/H$ arising from $\langle , \rangle _{\lambda} $. Then $(G/H,h_{\lambda })$ is a two-step g.o. space (see the proof of Proposition 5.1 in \cite{Arvanitoyeorgos-Panagiotis Souris}). Since $\langle,\rangle$ is an $Ad$-invariant inner product, if $X\in [\mathfrak{m},\mathfrak{m}]^{\perp }_{\mathfrak{m}}$ then we have $$\langle [y,X]_{\mathfrak{m} },z\rangle +\langle [z,X]_{\mathfrak{m} },y\rangle=0, \qquad \langle [y,z]_{\mathfrak{m} },X\rangle=0 \quad \forall y,z\in \mathfrak{m}.$$ If, in addition, $X\in \mathfrak{m}_2$, by a calculation similar to that in the proof of Theorem \ref{main theorem}, we obtain $$\langle [y,X]_{\mathfrak{m} },z\rangle _{\lambda } +\langle [z,X]_{\mathfrak{m} },y\rangle _{\lambda }=0, \qquad \langle [y,z]_{\mathfrak{m} },X\rangle _{\lambda }=0 \quad \forall y,z\in \mathfrak{m}.$$ Suppose that $F_{\lambda }$ is the $(\alpha,\beta)$-metric corresponding to the inner product $\langle,\rangle _{\lambda }$ and the vector field $X\in\mathfrak{m}_2\cap [\mathfrak{m} ,\mathfrak{m}]^{\perp }$. Hence, $F_{\lambda }$ is of Berwald type and the Finsler metric $F_{\lambda }$ and the Riemannian metric $h_{\lambda }$ have the same geodesics. \end{proof} Finally, we give a family of two-step geodesic orbit spaces by an extension of the notion of the navigation data of a Randers metric (see \cite{Huang}). Suppose that $(M,F)$ is a Finsler space and $W$ is a vector field such that, for all $x\in M$, $F(x,W(x))<1$. $\tilde{F}$ is called a Finsler metric with the navigation data $(F,W)$ if $$\tilde{F}(x,y)=1 \qquad \Longleftrightarrow \qquad F(x,y-W(x))=1.$$ \begin{theorem} Suppose that $(M=G/H,F)$ is a two-step geodesic orbit space and $W$ is a $G$-invariant Killing vector field on $M$. Let $(M,\tilde{F})$ be the Finsler space with navigation data $(F,W)$. Then every geodesic of $(M,\tilde{F})$ is a two-step homogeneous geodesic. \end{theorem} \begin{proof} Assume that $\phi_t$ denotes the flow of the Killing vector field $W$. Suppose that $G'$ is the group generated by $\phi _t$ and $G$. If $H'$ is the isotropy subgroup of $G'$ at $o=eH$, then $H\subset H'$ and $\mathfrak{g'}=\mathfrak{h'}\oplus \mathfrak{m}$ is a reductive decomposition of $G'/H'$. 
Let $W_0$ be the element corresponding to $\phi _t$ in $\mathfrak{g'}$ , then based on the Theorem 6.12 in \cite{Yan-Deng}, every geodesic $\gamma $ of $(M,\tilde{F})$ passing $p$ has the following form $$\gamma (t)=\exp(tW_0).\rho (t)$$ where $\rho(t)= \exp (tx)\exp (ty).p$ is a geodesic of $(M,F)$. On the other hand, since $W$ is $G$-invariant, $W_0$ belongs to the center of $\mathfrak{g'}$. Hence $$\gamma (t)=\exp(tW_0)\exp (tx)\exp (ty).p=\exp(t(W_0+x))\exp (ty).p$$ and the proof is complete. \end{proof} \begin{cor} Suppose that $(M=G/H,h)$ is a Riemannian two-step geodesic orbit space and $W$ is a $G$-invariant Killing vector field on $M$, where $\Vert W\Vert _h<1$. If $F$ is a Randers metric with navigation data $(h,W)$, then $(M,F)$ is a two-step geodesic orbit space. \end{cor} \begin{cor} Suppose that $(M=G/H,h)$ is a Riemannian two-step geodesic orbit space and $W$ is a $G$-invariant Killing vector field on $M$. If $F$ is a Kropina metric with navigation data $(h,\dfrac{W}{\Vert W\Vert_h})$, then $(M,F)$ is a two-step geodesic orbit space. \end{cor} \end{document}
arXiv
What is the difference between metric spaces and vector spaces? Does a metric space have an origin? That is, does it have $(0,0)$? Does a vector space have an origin? It seems whatever you can do in a metric space can also be done in a vector space. Is this true? vector-spaces metric-spaces Ramanujan

They are entirely different concepts. As to an origin, a general metric space does not have anything that behaves like the ordinary number zero does. A vector space does have a unique "zero-like" object, that is, a vector, that we often call $0$, such that $0+v=v+0=v$ for any vector $v$ in the space. Although the concepts of vector space and metric space are entirely different, some familiar spaces, such as $\mathbb{R^3}$, are simultaneously vector spaces and metric spaces, and there is interaction between the vector structure and the metric structure. – André Nicolas

The answers below tell the tale, but I am interested to know what you mean by the penultimate sentence in your question. What sort of things have you seen done in metric spaces that can (apparently) also be done in vector spaces?

No, a metric space does not have any particular distinguished point called "the origin". A vector space does: it is defined by the property $0 + x = x$ for every $x$. In general, in a metric space you don't have the operations of addition and scalar multiplication that you have in a vector space. On the other hand, in general a vector space does not have a notion of "distance". Robert Israel

Normed linear vector spaces may have distance functions. For example $d(\vec x, \vec y)=||\vec x - \vec y||$.

Vector spaces necessarily have a vector called the "zero vector"; in the special case of the vector space $k^n$ (where $k$ is a field), this vector is often called "the origin", since $k^n$ also can be seen as a geometric object (the $n$-dimensional affine space). But vector spaces don't necessarily have something we call "the origin": the collection of all polynomials with real coefficients is a real vector space, but we don't normally refer to the zero polynomial as "the origin", even though it is the zero vector of this vector space. Metric spaces are sets with a metric defined on them. For example, the collection of all complex numbers with complex norm $1$, and with metric given by the usual distance between them as complex numbers, is a metric space. Any nonempty subset of the real numbers, with the usual distance function, is a metric space; and any nonempty set $X$, with distance defined by $d(x,y) = 0$ if $x=y$ and $d(x,y)=1$ if $x\neq y$, is a metric space. There need not be anything that we can reasonably call "the origin." Arturo Magidin

A metric space is a set with a distance. That's all you know. That means the set may not have an algebraic structure. For example, {chair, apple} is a set. Define d(apple, apple) = d(chair, chair) = 0 and d(chair, apple) = d(apple, chair) = 1. That's a metric space, and it doesn't look like a vector space at all. Rafael

I think the OP is confusing a vector space with a normed vector space, which indeed shares many properties of general metric spaces. And for a very simple reason: a norm induces a metric on the underlying set on which the map is defined. This is fairly simple to prove from the definitions and the questioner should try and do it. 
A general vector space does NOT necessarily have these properties, of course. It is not even necessary that a general vector space admits any notion of distance whatsoever. Mathemagician1234

Every vector space (over a subfield of $\mathbb{C}$) admits many different norms, and thus can be made into a metric space in multiple ways. – Adam Smith

You're kidding, right? Yes, there are many different norms that can be constructed on a vector space, leading to many possible metrics and their respective topologies. How does that make what I just posted incorrect and deserving of a -1? It does NOT follow from the definition of a vector space in and of itself that that's true. My point was that the definition of a vector space in and of itself - without a norm - does NOT admit any natural notion of distance and this was the point of confusion for the questioner. The notion of a norm is independent and needs to be considered separately. – Mathemagician1234

You made the following false statement: "It is not even necessary that a general vector space admits any notion of distance whatsoever." The word natural does not appear there. – Adam Smith

A metric space is a set with a notion of distance defined between points of that set. This notion of distance is a function known as the metric (which must satisfy a set of axioms pertaining to distance). This metric takes in any two points and maps them onto a real number which characterises the distance between those two points. A vector space is a set containing objects called vectors which interact in some pre-defined way determined by the axioms. Vectors are measured relative to some reference frame and thus have a notion of magnitude and direction from some origin. tinotino

This is an ancient question; it also has been correctly answered. But maybe I can provide a perspective from Applied Mathematics. A vector space with a finite number of dimensions $n$ must have a global coordinate structure (actually, many such structures) specified by $n$ vectors. For example, $\mathbb R^3$ has a global coordinate structure whose basis vectors are $$ X = (1, 0, 0) $$ $$ Y = (0, 1, 0) $$ $$ Z = (0, 0, 1) $$ What I mean by "coordinate structure" is that points of $\mathbb R^3$ can be specified as a linear combination of these three vectors: $$ v = a X + b Y + c Z. $$ For the canonical basis above this is easy to see; there are more complicated choices of basis, but at any rate, this works for every element of $\mathbb R^3$. Now take something like planet Earth. Obviously Earth isn't a mathematical object, and admits multiple mathematical models -- cosmologists see it as three-dimensional and who knows what contemporary physicists are up to. But we don't see Earth in three dimensions -- it's the literal 2D ground on which we walk. Still, it's notorious that a proper two-dimensional coordinate system for the Earth cannot be found. One way to put this is that this model of the Earth is a two-dimensional manifold embedded in $\mathbb R^3$. But technically manifolds don't need to be embedded in anything; they just need to have some local structure that's out of scope for us. But the point is that we can easily define distances on Earth: two possible choices are geodesic (shortest flyover) and shortest-possible walking distance. We don't care how high the Everest is in a putative third dimension, we care how far it is on foot. 
So that's a metric space (please check the axioms to convince yourself this is true); but not a vector space. My current research has to do with Nearest Neighbors methods for data science. A conceptual point that I stress is that NN methods are useful when the data can be given a distance structure, but not a (credible) vector space structure. Take something like underwear: we can clearly say that U1 is closer to U2 than to U3, but it's much harder to imagine that there are basis vectors such that all underwear is just combinations of them. I.e. often the domain problems that arrive at the data scientist's desk have definite similarity structure but not a clear global coordinate structure. (Of course, the Nearest Neighbor algorithm ultimately does calculations with computer arrays representing vectors; the entire challenge is to find "metric learning" algorithms that figure out the appropriate distance function near each point and map the data onto something on which the standard vector distances represent the abstract, ground-truth distances between objects. But this too is beside the point.)
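To make the Earth example concrete, here is a small illustrative sketch (assuming scikit-learn is installed; the city coordinates are rough, made-up-to-one-decimal values): a nearest-neighbour query driven purely by a distance function, here the haversine great-circle metric, rather than by the Euclidean structure of a vector space.

```python
# Nearest-neighbour search needs only a metric. Here the points are places on
# the sphere and the haversine (great-circle) distance plays the role of the
# "flyover" distance discussed above; no vector-space operations on the points are used.
import numpy as np
from sklearn.neighbors import NearestNeighbors

EARTH_RADIUS_KM = 6371.0
cities = {
    "London": (51.5, -0.13),
    "Paris": (48.9, 2.35),
    "New York": (40.7, -74.0),
    "Tokyo": (35.7, 139.7),
}
names = list(cities)
coords_rad = np.radians(list(cities.values()))   # haversine expects [lat, lon] in radians

nn = NearestNeighbors(metric="haversine").fit(coords_rad)
dist, idx = nn.kneighbors(np.radians([[51.5, -0.13]]), n_neighbors=2)

# idx[0][0] is London itself (distance 0); idx[0][1] is its nearest neighbour.
print(names[idx[0][1]], f"{dist[0][1] * EARTH_RADIUS_KM:.0f} km")
```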
CommonCrawl
A statistical method for single sample analysis of HumanMethylation450 array data: genome-wide methylation analysis of patients with imprinting disorders Faisal I Rezwan1, Louise E Docherty1,2, Rebecca L Poole1,2, Gabrielle A Lockett1, S Hasan Arshad3,4, John W Holloway1, I Karen Temple1,5 & Deborah JG Mackay1,2 Clinical Epigenetics volume 7, Article number: 48 (2015) Cite this article The Illumina Infinium HumanMethylation450 BeadChip is an array-based technology for analysing DNA methylation at approximately 475,000 differentially methylated cytosines across the human genome. Hitherto, the array has been used for case-control studies, where sample numbers can be sufficient to yield statistically robust data on a genome-wide basis. We recently reported an informatic pipeline capable of yielding statistically and biologically significant results using only five cases, which expanded the use of this technology to rare disease studies. However, the clinical application of these technologies requires the ability to perform robust analysis of individual patients. Here we report a novel informatic approach for methylation array analysis of single samples, using the Crawford-Howell t-test. We tested our approach on patients with ultra-rare imprinting disorders with aberrant DNA methylation at multiple locations across the genome, which was previously detected by targeted testing. However, array analysis outperformed targeted assays in three ways: it detected loci not normally analysed by targeted testing, detected methylation changes too subtle to detect by the targeted testing and reported broad and consistent methylation changes across genetic loci not captured by point testing. This method has potential clinical utility for human disorders where DNA methylation change may be a biomarker of disease. Epigenetic modulation of gene expression is responsible for tissue specific and temporal changes across growth and development. The most widely studied of these epigenetic modifications is DNA methylation of 5-methylcytosine at CpG dinucleotides. Aberrations of DNA methylation are associated with a range of diseases, including imprinting disorders and cancer [1]. Recent advances in technologies have made it possible to study the epigenetic changes associated with these diseases using robust genome-wide technologies including the Infinium HumanMethylation450 BeadChip (henceforward denoted the 450 k array; www.Illumina.com). The 450 k array measures the intensity of fluorescent signal from methylated and unmethylated probes at approximately 475,000 CpG dinucleotides across the genome, including CpG islands, promoters, gene bodies, intergenic regions and the majority of imprinted loci. These intensities are then used to calculate DNA methylation levels, with advantageous throughput, cost, coverage and technical consistency. To date, many studies, utilising the 450 k array, have used case-control designs [2-6]. The limitation to the majority of these studies is that the bioinformatic analysis used requires a large number of cases and controls to obtain statistically significant results. Recently, we developed a novel informatic pipeline yielding statistically and biologically significant results using small case number analysis (case = 5) [7], which expanded the use of this technology to rare disease studies. However, the clinical application of these technologies requires the ability to perform robust analysis of individual patients. 
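As a point of reference for the methylation measures used throughout this paper, the sketch below shows how per-probe methylation levels are conventionally derived from the methylated and unmethylated signal intensities of the array: the beta value is the proportion of methylated signal, and the M value is its logit transform, the scale later used for the intermediate-methylation filter. This is a generic illustration following common practice for Illumina arrays, not the code of the pipeline described here.

```python
# A minimal sketch (not the authors' code) of the standard beta- and M-value
# calculations from methylated (meth) and unmethylated (unmeth) probe intensities.
import numpy as np

def beta_value(meth, unmeth, offset=100):
    """Proportion of methylated signal; the offset stabilises low-intensity probes."""
    return meth / (meth + unmeth + offset)

def m_value(beta):
    """Logit-transformed beta value: M = log2(beta / (1 - beta))."""
    return np.log2(beta / (1 - beta))

# A hemimethylated (imprinted) probe with roughly equal signals sits near beta ~ 0.5, M ~ 0.
b = beta_value(3000, 2900)
print(b, m_value(b))
```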
Humans harbour approximately 100 known imprinted genes, characterised by the epigenetic control of gene expression, often through parent-of-origin-specific methylation that is applied in the germ line and conserved through subsequent development in all tissues. As yet, disruption of the methylation state at eight imprinted loci has been associated with imprinting disorders (Beckwith-Wiedemann syndrome (BWS; MIM #130659), Silver-Russell syndrome (SRS; MIM #180860), transient neonatal diabetes mellitus (TNDM; MIM #601410), Prader-Willi syndrome (PWS; MIM #176270), Angelman syndrome (AS; MIM #105830), matUPD14-like (Temple syndrome) and patUPD14-like (Wang-Kagami) syndromes and pseudohypoparathyroidism 1B (PHP-1B; MIM #103580)). Rare patients with multi-locus methylation disorders (MLMD) [8-11] form a uniquely informative group of samples that can be used to develop a sensitive and specific single sample 450 k array bioinformatic pipeline. Informatically, there are a number of approaches to single case analysis. A single normalised sample can be compared against a large sample group standardised in the same way [12]. However, collecting a large normative sample can be both time-consuming and challenging [13]. Another approach is to compare one or more tests to the performance of the same individuals by chi-square tests. However, the significant raw difference between different performances (or scores) can be diminished by comparison against control performance (or score) and vice versa. Alternatively, a single sample's performance (or score) can be compared to that of a matched control group. Whereas the standardised method requires a large number of samples and intra-individual comparisons require assessment of two or more independent variables, the single case-control method requires only a moderate number of controls [12]. In single case-control analysis, the most common means of detecting significant differences is to convert the case's score to a z-score using the control sample mean and standard deviation and referring the score to a table of areas under the normal curve [14]. However, this might not accurately estimate the parameters if the control sample is large enough to assume that the mean and the standard deviation are used as population parameter rather than sample statistics [15]. In many cases, the number of controls can be quite small (even smaller than 10). Therefore, it is logical to use a t-test method using the t-distribution. A number of studies used one-sample t-tests in their single case-control studies, and to date, several studies used Crawford and Howell's t-test methods as mentioned in [16]. The Weisberg t-test, for identifying outliers, is also capable of single sample analysis. The different t-tests will be briefly described in the following section. However, all of these studies involved neuropsychological rather than 450 k array data. Here we demonstrate the effectiveness of a single case-control method for analysing 450 k array data from patients with multi-locus and single-locus imprinting disorders. Using 450 k array data from patients with known regions and severity of DNA hypomethylation, we were able to optimise our informatic approach: firstly, by comparison of various t-test methods and secondly, by varying the control group size to identify the smallest control size required to detect biologically and statistically significant changes in methylation at known regions of hypomethylation specific to each patient. 
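To illustrate the single-case statistic discussed above, the following is a minimal sketch of the Crawford-Howell t-test, in which a single case value is compared with a small control sample using a t-distribution with n − 1 degrees of freedom. The function and variable names are ours and the example values are invented; this is a sketch of the published formula, not the pipeline's own code.

```python
# Crawford-Howell t-test for a single case against a small control group,
# applied per CpG probe in this setting. The one-tailed p value estimates the
# proportion of the control population expected to score below the case.
import numpy as np
from scipy import stats

def crawford_howell(case_score, control_scores):
    controls = np.asarray(control_scores, dtype=float)
    n = controls.size
    t = (case_score - controls.mean()) / (controls.std(ddof=1) * np.sqrt((n + 1) / n))
    p_lower = stats.t.cdf(t, df=n - 1)   # one-tailed: case lower than controls
    return t, p_lower

# Example: a case M value of -3.0 at a probe where 20 controls are hemimethylated (M ~ 0).
rng = np.random.default_rng(1)
controls = rng.normal(loc=0.0, scale=0.4, size=20)
t_stat, p = crawford_howell(-3.0, controls)
print(f"t = {t_stat:.2f}, one-tailed p = {p:.2e}")
```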
As mentioned earlier, we developed a pipeline for small sample size (n cases = 5) against large control groups, using patients with TND-MLMD and BWS-MLMD and broadly similar patterns of methylation change as determined by targeted testing. The pipeline applied a linear model as the statistical method, and CpGs were selected where they were hypomethylated compared with controls, with an adjusted P value < 1.33 × 10− 7 and M values between −1 and +1 (equivalent to 0.26 ≤ β ≤ 0.7) in normal controls, to enrich for the intermediate methylation consistent with the hemimethylation of genomic imprinting. We focused our attention on genes or DNA regions containing at minimum two CpGs within 2000 nucleotides. Using this approach, we detected 21 hypomethylated regions in the TND-MLMD and 34 regions in BWS-MLMD pooled samples [7], including regions of hypomethylation that were previously unknown and consistent with genomic imprinting. Targeted testing showed that some of these regions were not hypomethylated in all the samples. Therefore, though analysis of small case numbers vs. large control numbers could identify differential methylation robustly, it failed to identify patient-specific regions without targeted follow-up testing. We applied single sample t-tests (Crawford-Howell, Weisberg and one-sample t-tests) instead of linear regression as the statistical method and modified the filtration criteria: hypomethylated DNA sequences with characteristics consistent with imprinting were selected as those containing a minimum of three consecutive CpGs within 2000 nucleotides with M value between −1 and +1 in normal controls and P value <0.05. Selection of the CH t-test after comparative evaluation of t-test performance Here we compared three types of t-tests (Crawford-Howell, one-sample and Weisberg t-tests) for their ability to identify known regions of differential methylation by single sample analysis while predicting less variability using a randomly selected control group size of 50 which was batch-matched, that is derived from the same batch of 450 k analyses as the patient DNA. Using simulated data, it has already been shown that the Crawford-Howell t-test (denoted as CH t-test henceforth) works better than the one-sample t-test irrespective of the number of controls and the one-sample t-test has a high Type I error rate [16]. It is true that, in case of single sample t-test, using single value against a control group is highly unorthodox as this type of t-test is used to test whether a sample mean differs significantly from a known population mean. However, it has been used in a number of studies [17-19] in this manner. The CH and Weisberg t-tests are more efficient in identifying significant hypomethylation than the one-sample t-test. For example, in TND-MLMD patients both CH and Weisberg t-tests were able to identify a number of sites including the cardinal disease locus PLAGL1 in all patients. Though both the CH and Weisberg t-tests showed similar results for several loci, the Weisberg test generated slightly less significant P values (differences in P values ranging from 10−9 to 10−15 at the PLAGL1 locus). The difference in P values is attributed to the difference in minimum P value threshold of CH t-test and Weisberg t-tests. Conversely, the one-sample t-test did identify significant sites along with many more false positives. The results of different t-tests examining PLAGL1 in TND-MLMD 5 are presented in Figure 1 and Additional file 1: Table S7. 
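Returning to the filtration criteria described at the beginning of this section, the sketch below shows one way such a filter can be implemented: probes are retained when the control M value lies between −1 and +1 and the CH P value is below 0.05, and runs of at least three qualifying CpGs whose neighbouring positions lie within 2000 nucleotides of each other are reported as candidate regions. The DataFrame column names (chrom, pos, control_m, p_ch) are hypothetical, and the reading of "consecutive" as adjacent qualifying probes is our interpretation rather than the authors' implementation.

```python
# Illustrative region filter: intermediate control methylation, significant CH p value,
# and runs of >= 3 qualifying CpGs spaced <= 2000 nt apart.
import pandas as pd

def candidate_regions(probes: pd.DataFrame, max_gap=2000, min_probes=3):
    regions = []
    for chrom, chrom_df in probes.sort_values(["chrom", "pos"]).groupby("chrom"):
        qualifying = chrom_df[chrom_df.control_m.between(-1, 1) & (chrom_df.p_ch < 0.05)]
        run = []
        for _, row in qualifying.iterrows():
            if run and row.pos - run[-1].pos > max_gap:
                if len(run) >= min_probes:
                    regions.append((chrom, run[0].pos, run[-1].pos, len(run)))
                run = []
            run.append(row)
        if len(run) >= min_probes:
            regions.append((chrom, run[0].pos, run[-1].pos, len(run)))
    return pd.DataFrame(regions, columns=["chrom", "start", "end", "n_cpgs"])
```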
The point estimates (estimated percentage of the control population that would be expected to obtain lower score than the case) for both CH and Weisberg t-tests are well within the 95% confidence interval of the noncentrality parameter derived from the case scores. However, in case of one-sample t-test, there are a number of instances in which the point estimates do not fit in to that confidence interval, showing that one-sample t-test predicted large numbers of insignificant hypomethylation signals as significant. Likewise, the CH and Weisberg t-tests produced similar results for BWS-MLMD patients, identifying several regions of differential methylation, including the cardinal disease locus KCNQ1OT1, whereas the one-sample t-test again detected many false positive sites as significant differential methylation. The results of different t-tests around KCNQ1OT1 in BWS-MLMD 1 are presented in Additional file 1: Figure S1 and Additional file 1: Table S8, which shows the same type of outcome. The same samples with varying control size (5, 10, 20, 30, 40 and 50) showed the same trend in the efficiency of the t-tests. The performance of three t-tests (one-sample, Weisberg and Crawford-Howell t-tests) around the PLAGL1 region in TND-MLMD 1. The x-axis denotes the genomic location of PLAGL1 on chromosome 6. The y-axis represents the estimated percentage of the control population that would be expected to obtain lower score than the case (point estimate), which is calculated according to the one-sample (OS, red crossed line), Weisberg (WB, green line with green square markers) and Crawford-Howell (CH, blue line) methods. The blue shade represents 95% confidence interval of the point estimates from the noncentrality parameter from a noncentral t-distribution. Therefore, both CH and Weisberg t-tests are capable of identifying regions of differential methylation with low false positive rate in single sample case-controls analysis. However, the CH t-test has the advantage of calculating effect size for single sample case-control studies, which is absent from the Weisberg t-test; therefore, we selected CH t-test for further analysis due to this and the more significant P values generated, and all further tests described here were performed using this method. Detection of cardinal locations in MLMD patients using the single sample analysis Previous targeted testing of our samples identified several known regions of biologically significant differential methylation in addition to the cardinal disease loci. However, the magnitude of differential methylation varied at different imprinted loci, for example all TND cases have complete hypomethylation at the PLAGL1 locus, whereas individuals with BWS showed varying levels of hypomethylation at KCNQ1OT1 (see Figure 2). These data provided us with valuable information on the inter-individual differences of both the severity of differential methylation and the regions affected. Comparison of detection of methylation changes between targeted DNA methylation testing and single sample analysis. Column headers indicate the loci tested and their genomic locations. Rows denote targeted testing (TT) and single sample analysis (450 k) results of individual patients, grouped by their presenting disorder. The DNA methylation at differentially methylated loci was estimated by methylation-specific PCR (msPCR) in TT. 
A methylation ratio of 1 is equivalent to hemizygous methylation, as seen in normal controls; a ratio of 2 indicates two-fold excess of unmethylated over methylated template; 'Total' indicates no detectable methylated sequences. The intensity of blue shading reflects the severity of hypomethylation. A dash indicates no data, normally because insufficient DNA prevented completion of all testing. For 450 k, the P values have been determined by Fisher's combined P value method for independent tests. The ∞ symbol means no significant methylation changes were detected at that region and 0 is yielded while the P value is too small (<10−350). BWS-MLMD, Beckwith-Wiedemann syndrome-multi-locus methylation disorders; TND-MLMD, transient neonatal diabetes-multi-locus methylation disorders. Analysis of TND-MLMD and BWS-MLMD patients' 450 k data, using the single sample analysis pipeline and a randomly selected batch-matched control group of size 50, identified several regions of differential methylation in a number of cases. These regions are largely dependent on the magnitude of the differential methylation as predicted by targeted testing. In all patients, the cardinal disease loci were identified: that is, PLAGL1 in TND-MLMD patients and KCNQ1OT1 in BWS patients. Figure 3 (with Additional file 1 Table S9) illustrates the identification of hypomethylation at KCNQ1OT1 in BWS-MLMD 4 (data for other BWS-MLMD patients are in Additional file 1: Figure S2 and for TND-MLMD cases in Additional file 1: Figure S3). Identification of hypomethylation at the cardinal loci in an MLMD sample. Upper panel: Genomic location from the UCSC genome browser, illustrating the KCNQ1 gene and the imprinting control region. Lower panel: graphical presentation of 450 k DNA methylation data across the KCNQ1 gene in BWS-MLMD 4. The x-axis corresponds to the genomic location as illustrated in the upper panel. The primary y-axis (left) represents the CH P value (solid blue line); the secondary y-axis (right) represents the difference in M value between BWS-MLMD 4 and controls (dashed black line). BWS-MLMD, Beckwith-Wiedemann syndrome-multi-locus methylation disorders. Detection of methylation disturbance at multiple locations using single sample analysis Apart from the cardinal loci, the pipeline also detected additional significantly hypomethylated loci, including but not limited to those identified by targeted testing. Hypomethylation was detected at well-established imprinted loci including SNRPN, GNAS, MEST and GRB10, more recently identified loci including ZNF331, FAM50B, HM13, ERLIN2, LOC100130522, WRB and NHP2L1, and previously uninvestigated regions (such as SVOPL and MAFG; Additional file 1: Table S1). To assess the sensitivity of the pipeline, we focused on three imprinted loci: SNRPN (chr15: 25068738–25201732), GNAS (chr20:57,380,000-57,400,000) and WRB (chr21: 40752116–40752116). Significant hypomethylation at SNRPN was confirmed in TND-MLMD 5 and BWS-MLMD 4 by targeted testing (Figure 2) which is also detected by our pipeline (Figure 4, Additional file 1: Table S10). Moreover, TND-MLMD 2 was also found to show differential methylation in some SNRPN CpGs outside the differentially methylated region (DMR) - this was not detected by msPCR analysis (Additional file 1: Figure S4), and it shows a slightly different methylation pattern than that of TND-MLMD 5. 
Likewise, the mosaic hypomethylation of WRB detected in three TND-MLMD samples was confirmed by targeted testing methylome analysis (TND-MLMDs 2, 3, 4 and 5: Additional file 1: Figure S5). Identification of hypomethylation at the SNRPN locus in MLMD samples. Upper panel: Genomic location from the UCSC genome browser, illustrating the SNRPN gene and the imprinting control region. Lower panel: graphical presentation of 450 k DNA methylation data across the SNRPN gene in BWS-MLMD 4 (red) and TND-MLMD 5 (blue). The x-axis corresponds to the genomic location as illustrated in the upper panel. The primary y-axis (left) represents the CH P value (solid lines); the secondary y-axis (right) represents the difference in M value between the cases and controls (dashed lines). BWS-MLMD, Beckwith-Wiedemann syndrome-multi-locus methylation disorders; TND-MLMD, transient neonatal diabetes-multi-locus methylation disorders. The pipeline detected significant methylation changes across the GNAS locus in three out of five BWS-MLMD and one out of five TND-MLMD patients. Figure 5 (with Additional file 1: Table S11) illustrates the GNAS locus in BWS-MLMD 4 and TND-MLMD 2, showing that methylation changes are detected at consecutive probes across the locus, which is more informative than the point determinations of targeted testing. Identification of hypomethylation at the GNAS locus in MLMD samples. Upper panel: Genomic location from the UCSC genome browser, illustrating the GNAS locus and three regions of high CpG density harbouring differentially methylated regions. Lower panel: graphical presentation of 450 k DNA methylation data across the GNAS locus TND-MLMD 2 (red) and BWS-MLMD 4 (blue). The x-axis corresponds to the genomic location as illustrated in the upper panel. The primary y-axis (left) represents the CH P value (solid lines); the secondary y-axis (right) represents the difference in M values between cases and controls (dashed lines). Note the hypomethylation clearly visible at three locations in TND-MLMD 2, coinciding with the more subtle hypomethylation detectable in BWS-MLMD 4 primarily through significance of P value. BWS-MLMD, Beckwith-Wiedemann syndrome-multi-locus methylation disorders; TND-MLMD, transient neonatal diabetes-multi-locus methylation disorders. Use of the single sample approach on 'simple' patients In addition to the samples with MLMD, we analysed samples from patients with 'simple' imprinting disorders where targeted testing indicated that hypomethylation was restricted to the cardinal disease locus. In all four TND samples, the cardinal region of differential methylation was PLAGL1 as expected. For two samples, TND-SIMPLE 2 and 3, hypomethylation of five and one additional imprinted regions was identified respectively, characteristic of MLMDs (Additional file 1: Table S2). Additionally, one of the samples (TND-SIMPLE 2) had many novel regions of hypomethylation not previously associated with imprinted loci. One of these, GLP2R was also observed in TND-SIMPLE 3 as the only hypomethylated locus not associated with a known imprinting region. Likewise, all the BWS samples were hypomethylated at the cardinal locus KCNQ1OT1, but one sample (BWS-SIMPLE3) showed hypomethylation at multiple imprinting loci, characteristic of MLMD. This shows that 450 k-based analyses can detect methylation changes that may go undetected by the point determinations of targeted testing. (Additional file 1: Table S3). 
Applying the single sample analysis to biological replicates To assess whether the CH t-test detected false positives from control group variations, we processed one sample with two completely different batch-matched groups of 50 controls. The two tests respectively selected 205 and 184 hypomethylated CpG sites, with 170 sites in 12 regions in common (see Additional file 1: Table S4). Determining the minimum number of controls Significance test To assess the effect of control group size on detection of known regions of differential methylation, BWS-MLMD samples were analysed with varying numbers of controls (5, 10, 20, 30, 40 or 50) using the CH t-test. With 5 controls, no cardinal sites for BWS-MLMDs were detected. When control group size = 10, numerous regions of hypomethylation were identified (KCNQ1, PLAGL1, DIRAS3, MEST, GNAS, PEG3, NHP2L1, and PPIEL) though not WRB. With 20 controls all known regions of differential methylation were detected, the use of 30, 40 or 50 controls added little sensitivity. Similar results were obtained for TND-MLMD cases. In summary, 10 controls can produce statistically and biologically significant results, though 20 controls are preferable to obtain higher sensitivity. Additional file 1: Table S5 presents the sites found in TND-MLMD and BWS-MLMD samples using the varying control sizes using the CH t-test. No further significant improvement was observed using a larger control group (>50 controls), therefore, we restricted our maximum number of controls to 50. Effect size calculation In order to determine the effect of the CH t-test on the magnitude of differential methylation, the effect size for each sample was calculated against variable numbers of controls (5, 10, 20, 30, 40 or 50). For both TND-MLMD and BWS-MLMD samples, at the majority of differentially methylated regions, the effect sizes were similar irrespective of the number of controls. However, at some regions of differential methylation, the effect size was greater when control group size = 5 rather than ≥10. To determine the reliability of the effect size, point estimates and 95% confidence intervals of those effect sizes were calculated. In general for TND-MLMD samples, the confidence interval was strikingly wider with 5 than 10 controls. For example, at the PLAGL1 region in TND-MLMD3, with 5 controls the effect size was −24.650, but the confidence interval was wide (−41.162 to −8.549), indicating this effect size as unreliable. With 40 controls, the effect size was much smaller (−13.875), but its confidence interval (−16.953 to −10.789) was tighter (Additional file 1: Table S6). With 20 controls, the effect size was intermediate −21.309, with confidence interval −28.033 to −14.574. The width of the confidence interval is attributed to the extreme hypomethylation of the PLAGL1 locus, which is a typical biological finding in TND but detrimental to effect size. For subtle changes in methylation, the effect sizes were much smaller, and use of 10, 20 or 30 controls resulted in large effect sizes with tight confidence intervals in both TND-MLMD and BWS-MLMD. Therefore, 20 controls appeared optimal for single sample analyses. There are a number of motivations for developing this single sample case-control method for analysing Illumina 450 k methylation data. Firstly, the study population - patients with imprinting disorders - is small, and classic case-control studies would not yield statistically significant results. 
Secondly, individual patients have unique clinical features and unique epimutations and therefore require individual analysis to yield relevant epigenetic data with clinical utility. Thirdly, our former small-sample analysis approach [7] requires large control numbers to attain high statistical robustness, which is not always feasible for analysing single patients. Fourthly, use of large control batches would be prohibitively expensive if epigenomic array analysis is to be adopted as a pragmatic tool for epigenetic diagnosis of patients. Technical replication of the same sample in different batches, with different controls, clearly confirmed that our approach robustly detected statistically significant sites. The method also identified outlier samples, which is not possible for grouped case-control studies. The pipeline clearly identified one BWS-HIL patient with large abnormal DNA methylation variations at multiple locations, though these may be due to technical variation (Additional file 1: Table S1). It should be noted that the threshold P value of 0.05 is not as stringent as that used for case-control analyses (<1.33 × 10−7) but nonetheless does robustly identify imprinted loci. The CH t-test method has the advantage of reporting not only the probability of significant methylation changes but also the magnitude of the change, through its effect size point estimate and confidence intervals. The confidence interval from the power calculation shows the uncertainty of the point estimate of the effect size and its variation with the number of controls [20]. Using this metric gave a concrete indication of the number of controls required to yield significant results. The optimal number of controls for this approach was determined empirically, as the number of controls for which known imprinted loci were robustly detected. In broad terms, fewer than ten controls gave unreliable effect sizes (large confidence intervals), whereas control sizes of 10 and 20 gave improvements in confidence, and a modest additional improvement was achieved for >20 controls. We therefore suggest that 20 controls in the same batch are optimal for this approach, and that using 10 controls is feasible in statistical terms. However, a requirement for large numbers of controls is not ideal for use in a diagnostic setting where cost is a consideration. We are currently attempting to identify robust methods for identifying methylation changes without the need for batch-matched controls. Though it is true that use of smaller numbers of controls runs the risk of violating the normality assumption, the effect of departure from normality is modest in the case of the CH t-test, as it is capable of controlling the Type I error rate [21,22]. While using large numbers of controls assures the normality of the distribution from the controls, in our empirical tests we observed only incremental increases in statistical power as the control number rose above 20. We found the 450 k array to have unexpected benefits compared with targeted testing. Firstly, 450 k analysis is by definition an epigenome-wide approach and therefore detected DNA methylation variation at other loci not normally assessed in targeted testing for imprinting disorders. This expands the scope of differentially methylated regions for future analysis. Secondly, 450 k data analysis was sensitive to subtle methylation changes at differentially methylated regions, to the point where it detected variations that were undetected in targeted testing.
Two cases that appeared by targeted testing to show 'simple' methylation changes (one TND and one BWS) were shown by 450 k array to have MLMD with subtle variations at several imprinted loci, which may be relevant to the clinical presentation of these individuals. This sensitivity probably stems from the fact that differentially methylated regions of imprinted genes frequently span tens or hundreds of CpG dinucleotides. Targeted testing is a single-point analysis, so a subtle variation may not be distinguishable from the normal range, whereas on the 450 k array a subtle variation may be reiterated many times sequentially, increasing its statistical robustness. Thirdly, differentially methylated regions are typified by dense clustering of CpG dinucleotides, and 450 k analysis reports on multiple CpGs in any given locus; it therefore gives information about the extent of methylation anomalies across a locus. This may offer novel information about the extent and effects of methylation changes across gene clusters. An obvious limitation of 450 k-based analysis is that the array targets only a small percentage of potentially methylated cytosines in the genome; therefore, additional loci affected in these patients may remain undetected by this method. However, the disadvantage of incomplete coverage is offset by the advantages of cost and technical consistency. 450 k array-based analysis has not previously been used on patients with imprinting disorders, because their rarity and heterogeneity precluded the use of established case-control cohort studies. This is potentially very important for imprinting disorders, where standard diagnostic testing is fragmented, time-consuming and variably sensitive, and where clinically heterogeneous and overlapping features (for example pre- and post-natal growth dysregulation) can be associated with multiple epigenetic mutations, many of which are not included in current testing regimes. 450 k analysis offers potential for diagnosis of known imprinting disorders and for detection of novel patterns of methylation anomalies. This may lead to substantial improvements in the diagnostic rate and translational research for imprinting disorders, in the same way that genome-wide array analysis has advanced the clinical genetics of common diseases over the last fifteen years [23-25]. Intriguingly, methylation variation may also act as a biomarker of underlying genetic anomalies. It is well known that some deleterious genetic/genomic variations can be detected by means of consequent methylation changes: for example, FRAX triplet-repeat expansions cause promoter methylation and inactivation of the FRAX gene and Fragile X mental retardation [26], deletions and rearrangements of the IGF2 enhancer attenuate IGF2 expression with co-ordinate hypomethylation of promoter sequences [27], and genetic rearrangements in Lynch syndrome are detectable as epigenetic inactivation of MSH2 [28]. We suggest that epigenome-wide DNA methylation analysis may be a powerful adjunct to genomic analysis, since it may indirectly indicate genomic variations that do not alter coding sequence but do alter gene expression. Using the Crawford-Howell t-test in single sample case-control studies is a novel approach for analysing Illumina 450 k array methylation data. By this method, we identified statistically and biologically significant hypomethylation in individuals at both known and novel sites.
We suggest that single sample analysis makes possible the use of the 450 k array as a translational research or diagnostic tool for human disorders associated with disturbance of DNA methylation.

Study and control populations
For this study, we selected patients with two imprinting disorders, Transient Neonatal Diabetes (TND) and Beckwith-Wiedemann Syndrome (BWS). These patients have been described previously, and their methylation levels determined at several imprinted loci by targeted testing [7,9,29]. In our recent study [7], five multi-locus methylation disorder patient samples from each clinically classified group (TND or BWS) were processed in separate batches with 245 and 221 anonymous healthy controls, respectively, from an unrelated study. In this study, we additionally included four TND and three BWS patients where targeted testing detected DNA hypomethylation only at the cardinal disease loci, with no known involvement of any other imprinted locus (denoted 'simple' BWS and TND cases). These samples were processed in a third batch with 63 anonymous healthy controls from an unrelated study. Batch-matched controls were chosen as the control group and randomly selected for each sample analysed in the single sample analysis pipeline. To assess the methylation level in each sample, a standard workflow was followed. The DNA in each sample was extracted from whole blood by the standard procedure described in [30], and DNA concentration was determined using the PicoGreen dsDNA Quantitation Kit (Molecular Probes, Inc., OR, USA). One microgram of DNA was bisulfite-treated, converting unmethylated cytosine to uracil (subsequently read as thymine), using the EZ 96-DNA Methylation Kit (Zymo Research, CA, USA). Genome-wide DNA methylation was estimated with the Illumina Infinium HumanMethylation450 BeadChip (Illumina, Inc., CA, USA), processed following the standard protocol [31]. Multiple identical control samples were assigned to each batch to assess assay variability and control batch effects. The BeadChips were scanned by the BeadStation and the methylation levels, as beta (β) values, were extracted using the Methylation Module of GenomeStudio (version 2011.1). The methylation data were then pre-processed further, as described in the following section.

Single sample analysis pipeline
The single sample analysis pipeline was developed by combining the Illumina Methylation Analyzer (IMA) package [32] with an implementation of single sample t-tests within the R statistical analysis environment (http://www.r-project.org). In the first stage, the IMA package is used for pre-processing and quality control, and the output data are used for single sample analysis. The workflow of the pipeline is shown in Figure 6, and the steps are described as follows.

Workflow of the single case-control pipeline. Each single case was pre-processed with the controls using the IMA package, and then the Crawford-Howell t-test method was implemented to identify differentially methylated sites. To reduce the rate of false positives, filtration criteria were set to obtain filtered results.

Pre-processing of the 450 k data first removes any CpG sites with missing values, followed by removal of any sample where >90% of CpG sites have detection P value >0.05, and of any CpG sites where >75% of samples have detection P value >10−5. Probes on the X and Y chromosomes were removed to discard any sex bias within the samples.
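A minimal sketch of these filtering steps in Python (the published pipeline itself uses the IMA package in R); the DataFrame names, the sex-chromosome probe list argument and the toy demonstration data are placeholders, not part of the original implementation, and the normalisation steps that follow are described next:

import pandas as pd
import numpy as np

def filter_450k(betas: pd.DataFrame, det_p: pd.DataFrame,
                sample_frac=0.90, sample_p=0.05,
                site_frac=0.75, site_p=1e-5,
                sex_chrom_probes=None) -> pd.DataFrame:
    # Filter a probes x samples matrix of beta values, mirroring the steps in the text.
    # 1. drop CpG sites with any missing beta value
    betas = betas.dropna(axis=0, how="any")
    det_p = det_p.loc[betas.index, betas.columns]
    # 2. drop samples in which >90% of CpG sites have detection P > 0.05
    bad_sample = (det_p > sample_p).mean(axis=0) > sample_frac
    betas = betas.loc[:, ~bad_sample]
    det_p = det_p.loc[:, ~bad_sample]
    # 3. drop CpG sites for which >75% of samples have detection P > 1e-5
    bad_site = (det_p > site_p).mean(axis=1) > site_frac
    betas = betas.loc[~bad_site]
    # 4. drop probes on the X and Y chromosomes (probe list taken from the array
    #    annotation; hypothetical argument here)
    if sex_chrom_probes is not None:
        betas = betas.drop(index=[p for p in sex_chrom_probes if p in betas.index])
    return betas

# toy demonstration with random data
rng = np.random.RandomState(0)
betas = pd.DataFrame(rng.uniform(0, 1, size=(100, 8)))
det_p = pd.DataFrame(rng.uniform(0, 0.01, size=(100, 8)))
print(filter_450k(betas, det_p).shape)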
The beta-values were converted to logit-transformed M values, and quantile normalisation was used to normalise signal intensities to reduce inter-array variation [33]. Peak correction [34] was applied to correct differences between Infinium I and Infinium II type assays. No batch correction was required as each case and its corresponding controls were drawn from the same batch. Statistically significant differences between the pre-processed M values of cases and controls were determined using single sample t-tests.

Statistical tests for identifying significant differential methylation
In our single sample studies, we mainly used the CH t-test method (described in [12] and [35]) for statistical analysis of the pre-processed data. The reasons for this selection are presented in the Results and Discussion sections. This method is an alternative t-test, which treats the control sample statistics as sample statistics rather than as population parameters. The CH t-test is described by
$$ {t}_{CH}=\frac{x^{*}-\overline{x}}{s\;\sqrt{\frac{n+1}{n}}} $$
where x* is the single sample score, \( \overline{x} \) and s are the mean and standard deviation of scores in the control samples, respectively, and n is the size of the control sample. If the t-value (t_CH) falls below the one-tailed 5% critical value for t on n−1 degrees of freedom (df), then it can be said that the case score differs sufficiently from the control population to refute the null hypothesis. For example, suppose a control sample of 10 samples (n = 10) has a mean of 0.5 (\( \overline{x}=0.5 \)) and standard deviation of 0.1 (s = 0.1). If the case score is 0.4 (x* = 0.4), the t-value from the CH t-test is −0.954 with 9 df, and the one-tailed probability using the Student t-distribution is 0.365. Therefore, the case score is not low enough to reject the null hypothesis that the case score is drawn from the control population. To establish the optimal test for our single sample analysis, we compared the CH t-test method to two other t-tests, namely the one-sample and Weisberg t-tests. The one-sample t-test draws inferences regarding significant differences between a single case and control scores. It compares the known control sample mean with the score of a single case, which is hypothesised as a population mean. The formula for the one-sample t-test is
$$ {t}_{OS}=\frac{\overline{x}-{x}^{*}}{s/\sqrt{n}} $$
However, the one-sample t-test exhibits a high Type I error. For example, if we use the same measures as above (\( n=10;\;\overline{x}=0.5;s=0.1;{x}^{*}=0.4 \)), we obtain a t-value of 3.162 with 9 df and a one-tailed probability of 0.012, which incorrectly rejects the null hypothesis. On the other hand, the Weisberg t-test for outliers (described in [36]) can also detect abnormal scores of a single sample against a limited number of control samples. The formula for the Weisberg t-test is
$$ {t}_{WB}=\frac{x^{*}-\overline{x}}{s\;\sqrt{\frac{n}{n-1}}} $$
If we apply the same example in the case of the Weisberg t-test, we obtain a t-value of −0.949 with 8 df, and a one-tailed probability of 0.371.
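The three tests can be sketched in a few lines of Python (the published pipeline itself is implemented in R); the one-tailed P values are read from the Student t-distribution with the stated degrees of freedom, and the final lines reproduce the worked example above:

import numpy as np
from scipy import stats

def crawford_howell(case, mean, sd, n):
    # Crawford-Howell t-test: compares one case score with the summary statistics
    # of a small control sample, treated as statistics rather than parameters.
    t = (case - mean) / (sd * np.sqrt((n + 1) / n))
    return t, stats.t.cdf(t, df=n - 1)   # one-tailed P(T <= t); low scores = hypomethylation

def one_sample_t(case, mean, sd, n):
    # Classical one-sample t-test, treating the case score as a hypothesised
    # population mean; prone to a high Type I error in the single-case setting.
    t = (mean - case) / (sd / np.sqrt(n))
    return t, stats.t.sf(t, df=n - 1)

def weisberg_t(case, mean, sd, n):
    # Weisberg t-test for outliers, evaluated on n - 2 degrees of freedom.
    t = (case - mean) / (sd * np.sqrt(n / (n - 1)))
    return t, stats.t.cdf(t, df=n - 2)

# Worked example from the text: 10 controls with mean 0.5 and sd 0.1, case score 0.4
for test in (crawford_howell, one_sample_t, weisberg_t):
    t, p = test(0.4, 0.5, 0.1, 10)
    print(test.__name__, round(t, 3))    # t = -0.954, 3.162 and -0.949, respectively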
Power calculations for single sample analysis
In order to determine the magnitude of loss of methylation at significant sites/regions for single sample case-control analysis, application of a significance test alone is not ideal [37]. Therefore, we applied a power calculation for the CH t-test to generate an effect size estimate using the P value from [20]. This is similar to Cohen's d, which is the difference between the means of case and control samples in standardised units, divided by the pooled standard deviation of the two samples [20]. Similarly for the CH t-test, the effect size index is calculated as the difference between the single case score (x*) and the mean of the controls (\( \overline{x} \)) divided by the standard deviation of the controls (\( s_x \)):
$$ {z}_{cc}=\frac{x^{*}-\overline{x}}{s_x} $$
For example, a control group of 10 samples (n = 10) has a mean of 0.5 (\( \overline{x}=0.5 \)) and standard deviation of 0.1 (s = 0.1). If the case score is 0.4 (x* = 0.4), \( z_{cc} = -1.0 \), and \( z_{cc}\sqrt{n} = -3.162 \). The noncentrality parameter of the t-distribution having −3.162 as its 0.975 percentile point with 9 df is −5.538. Therefore, the lower limit is \( -5.538/\sqrt{n} = -1.751 \). In the same way, the upper limit of \( z_{cc} \) can be calculated as −0.214. The t-value from the CH t-test shows the statistical significance of the difference between case and controls, whereas the effect size index shows the magnitude of the difference between them. Along with the point estimate of the effect size, an interval estimate should also be presented in the single sample analysis. The procedure used in this paper to measure the confidence interval of the point estimate of the effect size has been described previously in [35], and the calculation is further explained in [20].

Filtering criteria
To reduce false positive calls, we further filtered the significant differences between case and control groups obtained from the CH t-test and power calculation. To define sites that were hypomethylated in cases, we initially set the same stringent criteria as in [7]: one-tailed P value (adjusted using false discovery rate) <10−7 and M value between −1 and +1 in normal controls, with beta-differences smaller than zero (to select only hypomethylated loci). Genes containing at least three CpGs meeting these criteria within <2000 bp (base pairs) were selected as candidate DMRs consistent with imprinting. However, when applied to single sample analysis, these criteria were too stringent to detect known differentially methylated regions. For single sample analyses, we therefore used a less stringent P value, calculated as described in Statistical tests for identifying significant differential methylation: significant methylation changes were selected as those containing a minimum of three consecutive CpGs within 2000 nucleotides, with M values between −1 and +1 in normal controls and P value <0.05.

Minimum number of controls
Using varying numbers of controls (5, 10, 20, 30, 40 or 50), we assessed the impact of control group size on detecting known regions of hypomethylation and on changes in effect size and confidence interval in our single sample analysis.

Feinberg AP. Phenotypic plasticity and the epigenetics of human disease. Nature. 2007;447:433–40. Yang BZ, Zhang H, Ge W, Weder N, Douglas-Palumberi H, Perepletchikova F, et al. Child abuse and epigenetic mechanisms of disease risk. Am J Prev Med. 2013;44:101–7. Shenker NS, Polidoro S, van Veldhoven K, Sacerdote C, Ricceri F, Birrell MA, et al. Epigenome-wide association study in the European Prospective Investigation into Cancer and Nutrition (EPIC-Turin) identifies novel genetic loci associated with smoking. Hum Mol Genet. 2013;22:843–51. Moore K, McKnight AJ, Craig D, O'Neill F. Epigenome-wide association study for Parkinson's disease.
Neuromolecular Med. 2014;16:845–55. Seow WJ, Kile ML, Baccarelli AA, Pan WC, Byun HM, Mostofa G, et al. Epigenome-wide DNA methylation changes with development of arsenic-induced skin lesions in Bangladesh: a case-control follow-up study. Environ Mol Mutagen. 2014;55:449–56. Abdolmaleky HM, Nohesara S, Ghadirivasfi M, Lambert AW, Ahmadkhaniha H, Ozturk S, et al. DNA hypermethylation of serotonin transporter gene promoter in drug naive patients with schizophrenia. Schizophr Res. 2014;152:373–80. Docherty LE, Rezwan FI, Poole RL, Jagoe H, Lake H, Lockett GA, et al. Genome-wide DNA methylation analysis of patients with imprinting disorders identifies differentially methylated regions associated with novel candidate imprinted genes. J Med Genet. 2014;51:229–38. Mackay DJ, Callaway JL, Marks SM, White HE, Acerini CL, Boonen SE, et al. Hypomethylation of multiple imprinted loci in individuals with transient neonatal diabetes is associated with mutations in ZFP57. Nat Genet. 2008;40:949–51. Bliek J, Verde G, Callaway J, Maas SM, De Crescenzo A, Sparago A, et al. Hypomethylation at multiple maternally methylated imprinted regions including PLAGL1 and GNAS loci in Beckwith-Wiedemann syndrome. Eur J Hum Genet. 2009;17:611–9. Azzi S, Rossignol S, Steunou V, Sas T, Thibaud N, Danton F, et al. Multilocus methylation analysis in a large cohort of 11p15-related foetal growth disorders (Russell Silver and Beckwith Wiedemann syndromes) reveals simultaneous loss of methylation at paternal and maternal imprinted loci. Hum Mol Genet. 2009;18:4724–33. Eggermann T. Russell-Silver syndrome Imprinted genes and human disease. Am J Med Genet Part C Semin Med Genet. 2010;154c(3):355–64. doi:10.1002/ajmg.c.30274. Crawford JR, Howell DC. Comparing an individual's test score against norms derived from small samples. Clin Neuropsychol. 1998;12:5. Crawford JR. Psychometric foundations of neuropsychological assessment. In: Laura H, Goldstein JE, editors. Clinical Neuropsychology: a practical guide to assessment and management for clinicians. 2nd ed. Chichester: Wiley; 2004. Howell DC. Statistical methods for psychology. 5th ed. Belmont, CA: Duxbury Press; 2002. Crawford JR, Garthwaite PH. Comparison of a single case to a control or normative sample in neuropsychology: development of a Bayesian approach. Cogn Neuropsychol. 2007;24:343–72. Crawford JR, Garthwaite PH. Single-case research in neuropsychology: a comparison of five forms of t-test for comparing a case to controls. Cortex. 2012;48:1009–16. Reinhold N, Markowitsch HJ. Emotion and consciousness in adolescent psychogenic amnesia. J Neuropsychol. 2007;1:53–64. Vecera SP, Rizzo M. What are you looking at? Impaired 'social attention' following frontal-lobe damage. Neuropsychologia. 2004;42:1657–65. Brand M, Kalbe E, Kracht LW, Riebel U, Munch J, Kessler J, et al. Organic and psychogenic factors leading to executive dysfunctions in a patient suffering from surgery of a colloid cyst of the Foramen of Monro. Neurocase. 2004;10:420–5. Crawford JR, Garthwaite PH, Porter S. Point and interval estimates of effect sizes for the case-controls design in neuropsychology: rationale, methods, implementations, and proposed reporting standards. Cogn Neuropsychol. 2010;27:245–60. Crawford JR, Garthwaite PH. Testing for suspected impairments and dissociations in single-case studies in neuropsychology: evaluation of alternatives using monte carlo simulations and revised tests for dissociations. Neuropsychology. 2005;19:318–31. Crawford JR, Garthwaite PH, Azzalini A, Howell DC, Laws KR. 
Testing for a deficit in single-case studies: effects of departures from normality. Neuropsychologia. 2006;44:666–77. de Vries BB, Pfundt R, Leisink M, Koolen DA, Vissers LE, Janssen IM, et al. Diagnostic genome profiling in mental retardation. Am J Hum Genet. 2005;77:606–16. Menten B, Maas N, Thienpont B, Buysse K, Vandesompele J, Melotte C, et al. Emerging patterns of cryptic chromosomal imbalance in patients with idiopathic mental retardation and multiple congenital anomalies: a new series of 140 patients and review of published reports. J Med Genet. 2006;43:625–33. Stankiewicz P, Beaudet AL. Use of array CGH in the evaluation of dysmorphology, malformations, developmental delay, and idiopathic mental retardation. Curr Opin Genet Dev. 2007;17:182–92. Gerhardt J, Zaninovic N, Zhan Q, Madireddy A, Nolin SL, Ersalesi N, et al. Cis-acting DNA sequence at a replication origin promotes repeat expansion to fragile X full mutation. J Cell Biol. 2014;206:599–607. Gronskov K, Poole RL, Hahnemann JM, Thomson J, Tumer Z, Brondum-Nielsen K, et al. Deletions and rearrangements of the H19/IGF2 enhancer region in patients with Silver-Russell syndrome and growth retardation. J Med Genet. 2011;48:308–11. Ligtenberg MJ, Kuiper RP, Chan TL, Goossens M, Hebeda KM, Voorendt M, et al. Heritable somatic methylation and inactivation of MSH2 in families with Lynch syndrome due to deletion of the 3′ exons of TACSTD1. Nat Genet. 2009;41:112–7. Mackay DJ, Boonen SE, Clayton-Smith J, Goodship J, Hahnemann JM, Kant SG, et al. A maternal hypomethylation syndrome presenting as transient neonatal diabetes mellitus. Hum Genet. 2006;120:262–9. Miller SA, Dykes DD, Polesky HF. A simple salting out procedure for extracting DNA from human nucleated cells. Nucleic Acids Res. 1988;16:1215. Bibikova M, Fan JB. GoldenGate assay for DNA methylation profiling. Methods Mol Biol. 2009;507:149–63. Wang D, Yan L, Hu Q, Sucheston LE, Higgins MJ, Ambrosone CB, et al. IMA: an R package for high-throughput analysis of Illumina's 450 K Infinium methylation data. Bioinformatics. 2012;28:729–30. Dempster EL, Pidsley R, Schalkwyk LC, Owens S, Georgiades A, Kane F, et al. Disease-associated epigenetic changes in monozygotic twins discordant for schizophrenia and bipolar disorder. Hum Mol Genet. 2011;20:4786–96. Dedeurwaerder S, Defrance M, Calonne E, Denis H, Sotiriou C, Fuks F. Evaluation of the Infinium Methylation 450 K technology. Epigenomics. 2011;3:771–84. Crawford JR, Garthwaite PH. Investigation of the single case in neuropsychology: confidence limits on the abnormality of test scores and test score differences. Neuropsychologia. 2002;40:1196–208. Weisberg S. Probability and mathematical statistics: applied linear regression. 2nd ed. New York: John Wiley & Sons; 1985. Sullivan GM, Feinn R. Using effect size-or why the p value is not enough. J Grad Med Educ. 2012;4:279–82. The authors would like to thank Peter Henneman for his valuable discussions that improved the manuscript. We thank the High-Throughput Genomics Group at the Wellcome Trust Centre for Human Genetics (funded by Wellcome Trust grant reference 090532/Z/09/Z and MRC Hub grant G0900747 91070) for the generation of the methylation data. This work was supported by the Medical Research Council, UK (MR/J000329/1 to FIR and LED). Funding for open access charge was supported by the Research Councils UK. 
The generation of the normal control population methylation data was supported by the National Institute of Allergy and Infectious Diseases under Award Number R01 AI091905-01 (PI: Wilfried Karmaus).
Human Development and Health, Faculty of Medicine, University of Southampton, Tremona Road, Southampton, Hampshire, SO16 6YD, UK: Faisal I Rezwan, Louise E Docherty, Rebecca L Poole, Gabrielle A Lockett, John W Holloway, I Karen Temple & Deborah JG Mackay.
Wessex Regional Genetics Laboratory, Salisbury NHS Foundation Trust, Salisbury District Hospital, Salisbury, Wilts, SO2 8BJ, UK: Louise E Docherty.
The David Hide Asthma and Allergy Research Centre, St Mary's Hospital, Newport, Isle of Wight, PO30 5TG, UK: S Hasan Arshad.
Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, Tremona Road, Southampton, Hampshire, SO16 6YD, UK.
Wessex Clinical Genetics Service, Princess Anne Hospital, University Hospital Southampton NHS Foundation Trust, Southampton, UK: I Karen Temple.
Correspondence to Faisal I Rezwan. FIR developed the bioinformatic pipeline and performed all the analyses. LED performed laboratory work, supported by RLP. GAL, SHA and JWH provided control cohorts and data derived therefrom. IKT accrued the patient cohort, and DJGM was the PI on the project. All authors read and approved the final manuscript.
Supplementary figures and tables.
Rezwan, F.I., Docherty, L.E., Poole, R.L. et al. A statistical method for single sample analysis of HumanMethylation450 array data: genome-wide methylation analysis of patients with imprinting disorders. Clin Epigenet 7, 48 (2015). doi:10.1186/s13148-015-0081-5
Keywords: Illumina HumanMethylation450 array; Single case-control analysis; Crawford-Howell t-test
Profiling the EU lobby organizations in Banking and Finance Borut Sluban1 na1, Mojca Mikac2 na1, Petra Kralj Novak2, Stefano Battiston1 & Igor Mozetič ORCID: orcid.org/0000-0002-5466-06082 Creating a map of actors and their leanings is important for policy makers and stakeholders in the European Commission's 'Better Regulation Agenda'. We explore publicly available information about the European lobby organizations from the Transparency Register, and from the open public consultations in the area of Banking and Finance. We consider three complementary types of information about lobbying organizations: (i) their formal categorization in the Transparency Register, (ii) their responses to the public consultations, and (iii) their self-declared goals and activities. We consider responses to the consultations as the most relevant indicator of the actual leaning of an individual lobbyist. We partition and cluster the organizations according to their demonstrated interests and the similarities among their responses. Thus each lobby organization is assigned a profile which shows its prevailing interest in consultations' topics, similar organizations in interests and responses, and a prototypical question and answer. We combine methods from network analysis, clustering, and text mining to obtain these profiles. Due to the non-homogeneous consultations, we find that it is crucial to first construct a response network based on interests in consultations topics, and only then proceed with more detailed analysis of the actual answers to consultations. The results provide a first step in the understanding of how lobby organizations engage in the policy making process. Policy changes and initiatives are often triggered by the stakeholders that are going to be affected by those future policies, e.g. a specific sector of the industry. In democratic countries policy makers typically consult a limited number of experts and the largest stakeholders directly involved before issuing a new policy proposal. However, this process leaves citizens and smaller stakeholders underrepresented in the process of policy-shaping. Therefore in many countries, governments are working on improving the communication with citizens and stakeholders to increase their involvement in the law-making process. As an example, the European Commission (EC) has been making a significant effort to engage an increasing number of citizens in the EU law-making process by means of open public consultations (European Political Strategy Centre 2018). This was formerly known as the "Your Voice in Europe" initiative where citizens and stakeholders were encouraged to provide feedback to policy proposals by means of responses to the open public consultations. Typically, the responses are limited to a few hundred, mostly coming from the lobbying organizations that are active in the specific policy areas. There are several empirical studies on interest group mobilization in the EU examining the number and type of interest groups politically active in the EU. The EC lobbying register was inspected in (Coen and Katsaitis 2013) to assess the density and diversity of the interest group population per policy domain. The density of interest organizations per economic sector in the EU is explained in (Berkhout et al. 2015) on the basis of political and economic institutional factors. Based on an analysis of the EC online consultations, in (Rasmussen et al. 
2014) it was found that organized interests can potentially act as a transmission belt between the public and the decision makers. Higher mobilization rates were found on those issues that fall within policy areas that are regarded as salient by the general public and those with consequences for budgetary spending. However, little research has been carried out so far on the structure of networks in which lobbies operate (Wolf et al. 2014). In (Zeng and Battiston 2016) data from the EU Transparency Register (TR) and the Orbis database was combined to construct a multiplex lobby network consisting of the affiliation, shareholding, interlocking and client relations between lobby organizations. No simple relation was found between the network centrality of the organizations and their size, for instance in terms of funds deployed in lobbying. However, each network layer was found to provide complementary information to characterize the influence of the organizations. Regarding inter-layer influence, a comprehensive review of multilayer networks can be found in (Kivelä et al. 2014). Other related previous works looked at the community structure of networks of corporations arising from ownership relations (Vitali and Battiston 2014) and interlocking directors (i.e., common members in board of directors, (Piccardi et al. 2010; Heemskerk and Takes 2016)). However, to the best of our knowledge, the authors did not consider the links arising specifically among lobbying organizations in the context of their activity in the policy making process. They have all used variants of the Louvain algorithm (Blondel et al. 2008) to detect the communities. A method to generate statistically validated networks as a projection of a bipartite graph is given in (Tumminello et al. 2011), and can be applied to bipartite lobbyist-policy, lobbyist-consultation or lobbyist-position networks. In our closely related work (Sluban et al. 2017) we already analyzed how lobby organizations respond to the EC's public consultations in the area of Banking and Finance. We considered 363 lobby organizations from the Transparency Register, their responses to 12 consultations, their formal categorization into organization types, and their self-described areas of interest and activities. We constructed a network of organizations which showed similarities between their policy positions raised in the consultation. We compared the communities of the preference patterns network with predefined organization types and organization clusters calculated from their textual descriptions. We found relatively low values of the comparison measures, and concluded that the declared goals and activities do not align well with the preference patterns as demonstrated in responses to consultations. This motivated the current study where we re-focus our research on profiling the lobby organizations with respect to their responses to consultations. In this study we extend the set of consultations to 21 and the number of lobby organizations to 565. We also shift the focus of the analysis. Previously we focused on the themes of consultations in which the organizations participated by comparing three data sources (categorizations, self-descriptions and responses to consultations) pairwise. In this paper we focus on the analysis of profiles of the lobby organizations themselves. As we concluded in our previous work, we consider responses to consultations the most relevant indicator of the actual leanings of individual lobby organizations. 
We refine our profiling method by focusing on the answers to the consultations. Information whether an organization participated in a particular consultation is, of course, interesting, however, analyzing the actual answers sheds more light on the viewpoint of a certain organization with regard to the questions. Thus, the profiles of lobby organizations are characterized by the clusters of organizations with similar interests and actual responses (co-voting) to consultations. Additionally, we characterize each co-voting cluster with prototypical organization and with questions/answers with the highest agreement in a cluster. As in our previous work, we re-analyze their self-described areas of interest and activities in order to get yet another view on the organizations. The paper is structured as follows. In "Data and preprocessing" section we provide details about data sources, in particular the Transparency Register and the 21 public consultations. "Profiling lobby organizations" section describes main methods used and the results. In "Topic communities of responding organizations" subsection we create a response network between the organizations, and detect communities with similar interests. "Clusters of co-voting organizations" subsection describes how to further partition the communities into clusters of organizations with similar answers to the same consultation questions. In "Characterizing clusters by typical organizations and questions/answers" subsection we additionally characterize clusters by their medoid organizations and most typical questions and answers. In "Clustering of descriptions" subsection we show how to process textual data to create tag clouds of the similar lobby organizations according to their self-descriptions. "Interactive exploration of the lobby profiles" section gives an overview of and a link to the Lobby Profile Explorer. This is an openly accessible web application which supports interactive exploration of lobby organizations and their responses to public consultations. We conclude the paper in "Conclusions" section with lessons learned. Data and preprocessing We focus on lobby organizations registered in the EU Transparency Register (2018) and active in the area of Banking and Finance (Consultations (banking and finance) 2018). We analyze and compare three aspects of these organizations: their formal categorization, their responses to public consultations and their self-described goals and activities. The study covers 565 organizations that responded to multiple choice questions in 21 public consultations, from June 2014 to November 2017. The transparency register was set up by the European Parliament and the EC to increase open access to information about "what interests are being pursued, by whom and with what budgets". The Transparency Register provides information about a main category and subcategory in which an organization is registered (Transparency Register Data 2018). Distribution of the organizations over the categories and subcategories is shown in Table 1. The majority (74%) of the 565 organizations are in the "II - In-house lobbyists and trade/business/professional associations" main category. Therefore, in subsequent analyzes, these organizations are further categorized in more specific subcategories of "II". 
Table 1 Transparency Register categories and subcategories, and the distribution of the 565 lobby organizations (Org) analyzed in this study Public consultations (Public Consultations 2018) are used by the EC to involve citizens and stakeholders in the law-making process. From June 2014 to November 2017, there were 21 relevant consultations in the area of Banking and Finance. The list of analyzed consultations is shown in Table 2. On average, there are 44 questions per consultation, but the number varies from 3 to 151. There are typically 3 or 4 possible answers to a question. The actual number of questions and possible answers per consultation are also in Table 2. Table 2 Public consultations analyzed in this study and the number of lobby organizations (Org) which responded to them We extracted the data from the consultation questionnaires for organizations which provided at least one answer to a multiple choice question. This allows us to find exact matches of their responses in contrast to open ended questions where comparison of two answers is more involved. Each response to a consultation is transformed into a binary vector denoting which of the answers to the multiple choice questions are provided. For each organization participating in at least one consultation, a joint vector from all 21 consultations is created. We omit the non-informative answers, such as "No Answer", "Don't know", "No opinion", "Not relevant", etc. The result is a 3,295-dimensional binary vector for each organization called voting vector. The voting vector is subsequently used to compare similarities and differences in answers between different organizations that have similar interests. In particular, the analysis consists of two steps. First we identify organizations with similar interests, i.e., those that responded to the same consultations. We construct a response network and compute topic communities, combining organizations with similar interests. Then we analyze each topic community separately, by comparing the voting vectors of member organizations. We thus combine two views on the consultations: (i) interest in the topic, where we ignore the complexity of questions and answers, and (ii) actual answers, where voting vectors are compared. Goals and activities. During the registration in the Transparency Register, an organization itself describes its goals and main activities. We extract all these descriptions and merge them into a single text document for each organization. We remove any URLs as we consider only the content, and do not inspect links to other sources. We take into account only English documents. Each document is split into sentences; this is necessary since some documents contain text written in more than one language. Eventually, we consider only organizations which have English descriptions that are longer than 50 characters. As a consequence, from the initial 618 organizations which responded to consultations we eliminated 53, thus considering 565 organizations in further analyzes. The language detection and text processing is implemented in the LATINO text mining library (2018). Profiling lobby organizations This section presents methods applied and the main results. In "Topic communities of responding organizations" subsection we start from the list of consultations and organizations which responded to them. We create a response network which links organizations responding to the same consultations, and detect communities in it. 
A community corresponds to a set of organizations which are interested in consultations about similar topics, and are therefore named topic communities. In "Clusters of co-voting organizations" subsection we further refine the analysis, and inspect the actual answers to the consultation questions. Based on the similarity of answers, each topic community is partitioned into clusters, named co-voting clusters. Note that the size and complexity of an individual consultation is irrelevant to detect topic communities, but it is crucial when computing co-voting clusters within topic communities. Thus, both aspects are taken into consideration in a balanced way: interests in consultations and topics, and the actual answers via voting. Each co-voting cluster is additionally characterized in "Characterizing clusters by typical organizations and questions/answers" subsection by its medoid organization, and a question and answer most agreed upon. Finally, we apply text mining to self-descriptions of the lobby organizations, cluster them and produce descriptive tag clouds ("Clustering of descriptions" subsection). Topic communities of responding organizations The goal of this subsection is to group lobby organizations into communities with similar interests with regard to the consultations. We start with a bipartite graph comprised of 565 organizations and 21 consultations, where there is an edge if an organization responded to a consultation. The majority of organizations (51%) responded to one consultation only, 16% responded to two consultations, 8.5% to three, 5.3% to four, and so forth. From Table 2 we can see that consultations with the highest response rate are consultations #3, #9, #16 and #8. As our goal is to profile and describe activities of the organizations, we project the bipartite graph to a weighted response network. Nodes in the network (N=565) represent the organizations, and edges (M=90,954) reflect their participation in the same consultation. Two nodes are linked by an edge if the organizations responded to the same consultation where the weight is the number of the same consultations. The network is constructed and analyzed using Gephi (Bastian et al. 2009). The response network has a density score of 0.285. The degree distribution is as follows: share of nodes with a degree ≤ 100 is 32.6%, between 100 and 200 is 35.2%, between 200 and 300 is 20.5%, between 300 and 400 is 10.1%, and there are 9 nodes with a degree > 400 (1.6%). The highest node degree is 451, and the average weighted node degree is 115.4. The response network has a clustering coefficient of 0.867, relatively high. In the response network, we identify communities of organizations which exhibit similar interests, i.e., they respond to the same consultations. We detect the communities by applying the Louvain method (Blondel et al. 2008; Lambiotte et al. 2009). The method was applied several times with different parameter values, and eventually the default parameters, resulting in maximum modularity, were used: randomize=On, use edge weights=On, resolution=1.0. The community detection yields five non-overlapping communities with the modularity value of 0.227. The Louvain method is non-deterministic and running it multiple times results in slightly different community partitions. We check the robustness of the results by applying the method 50 times with random seed and the same parameters. The similarity of the 50 resulting partitions is then compared by the Rand index (Rand 1971). 
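A sketch of this projection and robustness check, using networkx (version 2.8 or later for Louvain) rather than Gephi, with a toy bipartite edge list standing in for the real response data; the partitions from repeated runs are compared with the Rand index, whose definition follows below:

import itertools
import networkx as nx
from networkx.algorithms import bipartite, community

# Bipartite graph: organizations vs. consultations (edge = organization responded).
B = nx.Graph()
orgs = ["org_A", "org_B", "org_C", "org_D"]          # hypothetical identifiers
B.add_nodes_from(orgs, bipartite=0)
B.add_nodes_from(["c3", "c9", "c16"], bipartite=1)
B.add_edges_from([("org_A", "c3"), ("org_A", "c16"), ("org_B", "c3"),
                  ("org_B", "c9"), ("org_C", "c9"), ("org_D", "c3"), ("org_D", "c16")])

# Weighted one-mode projection: edge weight = number of shared consultations.
G = bipartite.weighted_projected_graph(B, orgs)

def louvain_labels(graph, seed):
    # Louvain community detection; returns a node -> community id mapping.
    comms = community.louvain_communities(graph, weight="weight", resolution=1.0, seed=seed)
    return {n: i for i, c in enumerate(comms) for n in c}

def rand_index(p, q, nodes):
    # Plain Rand index: fraction of node pairs on which the two partitions agree.
    pairs = list(itertools.combinations(nodes, 2))
    agree = sum((p[a] == p[b]) == (q[a] == q[b]) for a, b in pairs)
    return agree / len(pairs)

# Robustness check: repeat the non-deterministic detection and compare pairwise.
partitions = [louvain_labels(G, seed) for seed in range(50)]
scores = [rand_index(p, q, orgs) for p, q in itertools.combinations(partitions, 2)]
print(sum(scores) / len(scores))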
It has a value between 0 and 1, with 0 indicating that the two community partitions are totally different and 1 indicating that two partitions are the same. We calculate the Rand index pairwise and get relatively high values (average Rand index = 0.892 with 95% CI = [0.89, 0.90] and p-value < 2.2·10−16). This indicates that the partitioning in the five detected communities is relatively stable. The response network with the detected communities is depicted in Fig. 1. Different node colors correspond to the five detected communities. The network visualization is produced using the ForceAtlas2 layout in Gephi. Response network of the 565 lobby organizations. Two organizations are linked if they respond to the same consultation. Different colors denote the five detected communities. The communities are labeled by the prevailing topics in common consultations. Node size is proportional to the number of consultations to which the organization responded The detected communities partition the set of lobby organizations into five non-overlapping sets. Each community represents participation in common consultations and engagement in certain topics. However, organizations also respond to some consultations outside their core community, therefore the correspondence between the communities and consultations is not one-to-one. We argue that computing the non-overlapping communities in the first phase, and then showing explicit overlaps across consultations provides better insight than detection of overlapping communities. In Fig. 2 we show for each community the distribution of its members' responses to the individual consultations. The communities are labeled according to their main topics of engagement, therefore they are called topic communities. The correspondence between the communities and consultations can be intuitively presented with the Sankey diagram (Sankey Diagram 2018). The proportional flow diagram shows how many organizations from different communities responded to individual consultations. Relations between the topic communities and consultations. The Sankey diagram links detected topic communities (left-hand side) to the consultations (right-hand side). The thickness of a link corresponds to the number of organizations that responded to a consultation. The diagram clearly shows the overlaps between the topic communities We observe that the organizations comprising the first and the largest community are mainly focused on two consultations: #3 (Public consultation Building a Capital Markets Union) and #16 (Capital Markets Union mid-term review 2017). We labelled this as the Capital Market Union community. In the second largest community, the main topic of interest is consultation #9 (Public consultation on non-financial reporting guidelines), therefore we labelled this community as Non-Financial Reporting, and so forth. Detailed results for all the communities are in Table 3 where we also show the top consultations to which a large number of organizations responded. We can observe that communities form around consultations that are represented by higher degree nodes (#3, #9, #16, #8) in the bipartite graph. Table 3 Detected topic communities of the lobby organizations, their number and share in each community, and top consultations which received the most responses (the number of responding organizations is in parentheses) Clusters of co-voting organizations Topic communities are groups of lobby organizations which respond to common consultations. 
In this subsection we analyze their actual answers to questions in the consultations. We use the high-dimensional voting vectors to compute co-voting similarities between organizations. Within each topic community we form clusters of organizations with similar responses to consultations, i.e., similar voting vectors. Let a and b denote voting vectors of organizations A and B, respectively. We define co-voting similarity between A and B as the cosine similarity between vectors a and b: $$\cos (\angle(\mathbf{a},\mathbf{b})) = \frac{\mathbf{a}\cdot \mathbf{b}}{|\mathbf{a}|\cdot|\mathbf{b}|}. $$ Cosine similarity is calculated as the normalized dot product of a and b. It ranges between 0 and 1, where 0 indicates complete dissimilarity, and 1 complete agreement. For clustering, we define the distance between two voting vectors as: $$\text{distance}(\mathbf{a}, \mathbf{b}) = 1 - \cos (\angle(\mathbf{a},\mathbf{b})). $$ We apply Ward's method (Ward 1963) with agglomerative hierarchical clustering over the voting vectors. Ward's method, precisely called minimum variance method, minimizes the total within-cluster variance. The resulting hierarchy of clusters can be represented by a dendrogram, where any level of agglomeration can be selected. We decided to uniformly split each topic community into three co-voting clusters. It turns out that after partitioning into three clusters, at least one cluster has considerably higher co-voting agreement than the original community. Clusters could be further partitioned, but one should avoid too small clusters. The resulting clusters are shown in Table 4. Table 4 Clusters within topic communities The level of selected agglomeration is validated by a network analysis. We construct yet another network with organizations as nodes and values of cosine similarity as weights on the edges. We apply the Louvain method on each individual community. The Capital Market Union community is partitioned in seven subcommunities with modularity of 0.181, the Non-Financial Reporting, Corporate Tax Transparency, and Retail Financial Services community are partitioned into three subcommunities with modularity levels of 0.114, 0.316, and 0.189, respectively. Connecting Europe Facility is partitioned in two subcommunities with modularity of 0.335. We compared each community partitioning to the co-voting clusters by Rand index. The values of Rand index are 0.587, 0.860, 0.737, 0.802, 0.803, respectively, relatively high for all the communities except for the first and the largest Capital Market Union community. We can conclude that a uniform agglomeration into three co-voting clusters is a sensible choice for all the communities, except for the first community. However, for the sake of uniformity and to avoid too many clusters with a small number of members in each, we settled for three co-voting clusters also in this case. This is not an optimal choice and in the future a better criterion to select an appropriate number of co-voting clusters should be devised. We analyze several properties of the co-voting clusters: level of agreement between the organizations, distribution of the Transparency Register categories, and the dominant category in each cluster. The co-voting agreement between organizations in a co-voting cluster is computed by Krippendorff's Alpha agreement measure (Krippendorff 2013). Alpha is typically used as a measure to quantify the extent of agreement among human raters. 
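Before turning to the agreement measures, the clustering step just described can be sketched as follows (a Python stand-in using scipy, with random binary voting vectors as placeholder data; note that Ward linkage formally assumes Euclidean distances and is applied here to the cosine distances exactly as in the text). Krippendorff's Alpha, described next, then quantifies agreement within each resulting cluster.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Placeholder voting vectors: one row per organization in a topic community,
# one column per possible answer across all 21 consultations.
rng = np.random.RandomState(0)
votes = rng.randint(0, 2, size=(40, 3295)).astype(float)

# Pairwise co-voting distance: 1 - cosine similarity of the voting vectors.
dist = pdist(votes, metric="cosine")

# Agglomerative clustering with Ward's minimum-variance criterion.
Z = linkage(dist, method="ward")

# Cut the dendrogram into three co-voting clusters, as done for each community.
labels = fcluster(Z, t=3, criterion="maxclust")
print(np.bincount(labels)[1:])     # sizes of clusters 1, 2 and 3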
When raters agree perfectly, Alpha=1, and when the level of agreement equals the agreement by chance, Alpha=0. Besides its typical applications, Alpha was already used to quantify the agreement between annotators in machine learning (Mozetič et al. 2016), and co-voting agreements and disagreements in the European Parliament (Cherepnalkoski et al. 2016). In our case, Alpha measures the level of agreement between answers to consultations (see Table 4). In general, we observe fairly low values of Alpha, in comparison to other domains. In the case of public consultations, the questionnaires are thematically very broad, and we are applying the Krippendorff's Alpha to non-typical data. In some clusters, the degree of agreement remains at the level of their respective communities, while in others the agreement increases. In particular, in clusters 1.2, 3.2, and 4.3 Alpha considerably increases as a community is partitioned into co-voting clusters. In certain topic communities (e.g., Non-Financial Reporting) the agreement is already high, and there is no significant difference between the clusters and the overall community agreement. We can infer that such topics are sufficiently noncontroversial, and that the responding organizations have a common view on the subject. Another interesting property of the co-voting clusters is the distribution of the Transparency Register (TR) categories within them. The last two columns in Table 4 show the prevailing TR category and its share in each cluster. We also compare the distribution of the TR categories within each cluster to their overall distribution in TR. We measure the similarity between the two distributions (P, Q) by Jensen-Shannon divergence (JSD) (Lin 1991): $$\mathit{JSD}(P,Q) = H\left(\frac{1}{2} P + \frac{1}{2} Q\right) - \frac{1}{2}(H(P) + H(Q)) $$ where H(P) is the Shannon entropy of a discrete distribution P. JSD ranges between 0 and 1, where 0 indicates identical distributions, and 1 completely different distributions. We note that some clusters, e.g., 2.3 and 4.2, have very different distribution of the TR categories in comparison to the prior. Characterizing clusters by typical organizations and questions/answers In this subsection we additionally characterize the co-voting clusters. Table 5 shows representative organizations for each cluster. Technically, an organization is a medoid of a cluster if it has minimal average co-voting distance to all other organizations in the cluster. Note that medoids do not always belong to the dominant TR category in the cluster (see Table 4). Table 5 The medoid organizations of each co-voting cluster Another interesting characterization are the questions and answers with the highest agreements per each cluster. When the majority (at least 75%) of the organizations in the cluster responded to a consultation, we extracted the question/answer that was the most unanimous. The results are given in Table 6. Table 6 Prototypical questions and answers for each co-voting cluster We can draw some overall conclusions about the five topic communities and their further refinements into the co-voting clusters. The largest community, Capital Market Union, is composed of a wide, not clearly differentiated interests, comprised of various associations and companies. Organizations in the Capital Market Union community, are relatively active — on average they responded to 3.4 consultations. 
The level of agreement is low in this community, except in cluster 1.2, where a somewhat higher agreement can be attributed to the cluster's small size (only 18 members). As already noted, the partitioning of this community into three co-voting clusters is not optimal, and cluster 1.3 should probably be further partitioned into sub-clusters. The second community, Non-Financial Reporting, is homogeneous, with a high degree of agreement between its member organizations. All organizations in this community participated in consultation #9 (Public consultation on non-financial reporting guidelines). Most of the organizations in co-voting cluster 2.1 are of the opinion that the most important non-financial aspect of disclosure should be relevance/materiality. In this cluster, organizations participated on average in 2.2 consultations. The cluster is mainly comprised of associations. Cluster 2.2, mainly comprised of companies, participated in 2.6 consultations on average. Cluster 2.3 is very small (only 6 organizations, mainly trade unions); its members agree that companies should have a better understanding of non-financial risks. The third community, Corporate Tax Transparency, is the most interesting one. Two of its clusters (3.1 and 3.2) comprise organizations with almost directly opposing responses to consultations. In cluster 3.1 we observe opposition to tax transparency, whereas cluster 3.2 argues for responsible taxation wherever enterprises make profit. The specific question that highlights these differences is whether there is a risk that tax transparency towards the public carries unintended negative consequences. The majority of lobbyists in cluster 3.1 are associations, and their answer is unanimously positive, i.e., tax transparency may have unintended consequences. The majority in cluster 3.2 are NGOs, and they answer the same question negatively. The third cluster, 3.3, is not as distinctive. Organizations that form the two smallest communities, Retail Financial Services and Connecting Europe Facility, have very specific profiles, with narrowly expressed interests. The Retail Financial Services community is comprised of 64 organizations. In this community, organizations participated in only 1.7 consultations on average; most of them (89%) participated in consultation #8 (Green Paper on retail financial services), and their overall agreement is relatively low. Organizations in co-voting cluster 4.1, mainly comprised of associations, are of the opinion that the main barriers preventing firms from providing cross-border financial services are language, differences in national legislation, and additional requirements imposed by national regulators. Organizations in co-voting cluster 4.2, mostly NGOs, believe that customers do not have access to safe, simple and understandable financial products throughout the EU. All companies in co-voting cluster 4.3, with a relatively high agreement level, also participated in consultation #12 (Public consultation on a potential EU personal pension framework-stakeholders). They agree that the level of protection during the lifetime of a product is most relevant to individual savers. In the Connecting Europe Facility community, all organizations but three participated in only one consultation.
The organizations in co-voting clusters 5.1 and 5.2 participated in consultation #15 (Mid-term evaluation of the Connecting Europe Facility (CEF) - technical questionnaire), and those in co-voting cluster 5.3 in consultation #14 (Mid-term evaluation of the Connecting Europe Facility (CEF) - general questionnaire). This is a very narrow and specific theme which seems to be of no interest to a wider range of organizations. The level of agreement is low in every co-voting cluster, but the members mostly agree on the following. In cluster 5.1, which, surprisingly, consists mainly of public authorities, the organizations are engaged in developing the physical transportation, energy and telecommunications infrastructure. In cluster 5.2 the organizations are of the opinion that there is still a need to continue financial support from the EU budget for developing trans-European networks. Organizations in cluster 5.3 believe that investing in the fields of transport, energy and telecommunications should be the EU priority.

From this analysis, it emerges that the co-voting patterns across communities are heterogeneous. In some cases, as for the third community, Corporate Tax Transparency, there is a clear difference in voting between groups identified ex-ante based on their TR category (i.e., NGOs versus business associations). In other cases, as for the second community, Non-Financial Reporting, the same ex-ante categories do not display significantly different co-voting behaviour. This heterogeneity can be explained in part by the level of controversy of the consultation topics. For instance, the topic of tax transparency is known to create opposing views between civic society and corporate lobbyists. In contrast, the topic of corporate social responsibility is known to find support among many stakeholders of the corporate sector, because the idea that firms should disclose non-financial information, relevant to social and environmental aspects and sustainability, is perceived as an opportunity for building a reputation among consumers and customers. However, the level of controversy is not fully known ex-ante by the policy makers. Therefore, consultations provide a useful indication to policy makers on which points exactly the controversies arise. On the other hand, the heterogeneity of patterns can also be explained by the fact that both NGOs and corporations have different purposes and strategies in the policy making process which cannot be simply classified ex-ante.

Clustering of descriptions

The goal of this subsection is to get yet another view on the properties of the analyzed lobby organizations. We apply text mining tools to extract typical features from the descriptions of goals and activities that the organizations themselves provided in the Transparency Register. In particular, we apply K-means clustering (Hartigan 1975), which partitions all the provided descriptions into K clusters. Organizations with similar goals and activities are then grouped in the same cluster.

First, the textual descriptions are preprocessed by standard text preprocessing methods (Feldman and Sanger 2006). For each description (only parts in English are considered), the text is tokenized and stemmed, stop words are removed, unigrams and bigrams are formed, and feature vectors are constructed by the TF-IDF weighting scheme and normalization. The resulting bag-of-words vectors are the input to the K-means clustering algorithm. We then apply the KMeansClusteringFast algorithm from the LATINO library (LATINO text mining library 2018). The value of K is set to 5.
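As a concrete illustration of this pipeline, the following minimal sketch reproduces its main steps (unigrams and bigrams, stop-word removal, normalized TF-IDF vectors, K-means, and a silhouette-based quality check). The study itself used the KMeansClusteringFast algorithm from the LATINO library; scikit-learn is substituted here purely for illustration, stemming is omitted, and the toy descriptions are invented for the example rather than taken from the Transparency Register.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Invented toy self-descriptions; the real input is the set of 565 English descriptions.
descriptions = [
    "association of retail banks promoting capital markets",
    "federation of insurers and pension funds",
    "non-governmental organisation campaigning for financial transparency",
    "trade union representing bank employees",
    "company providing asset management services",
    "association of stock exchanges and market infrastructure operators",
    "civil society network monitoring corporate taxation",
    "consultancy advising on personal pension products",
]

# Unigrams and bigrams, stop-word removal, TF-IDF weighting with L2 normalization.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english", norm="l2")
X = vectorizer.fit_transform(descriptions)

# K-means with K=5 and several random restarts; the silhouette estimates cluster quality.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
print("cluster labels:", kmeans.labels_)
print("silhouette score:", round(silhouette_score(X, kmeans.labels_), 3))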
We tested different values of K in the range between 2 and 10, with ten different seeds for the initial clustering setup. The quality of the resulting clusters was estimated by the Silhouette coefficient (Rousseeuw 1987). Since there was no significant difference between the quality of clusters for K between 2 and 6, we selected K=5 to match the number of detected topic communities (see the "Topic communities of responding organizations" subsection).

The clustering results for K=5 are shown in Table 7 and in Fig. 3. Table 7 shows, for each cluster, its short name, the number of organizations covered, and the top ten centroid terms with their weights. Figure 3 shows the tag clouds, with the fifty most important centroid terms for each cluster, and size approximately proportional to the number of organizations.

Fig. 3 Tag clouds of the five clusters of organizations. The tag clouds are constructed from the self-described goals and activities of the 565 lobby organizations. Size of the clouds is proportional to the number of organizations in them.

Table 7 Results of clustering (K=5) applied to textual self-descriptions of organizations

The relation between the detected topic communities and the textual descriptions, encapsulated in the tag clouds, can be intuitively presented with a Sankey diagram. The diagram in Fig. 4 shows the proportions and distribution of the 565 organizations in the topic communities and in the clusters of their descriptions. Thickness of links corresponds to the number of organizations that are present in both partitions.

Fig. 4 Relations between the topic communities and tag clouds. The Sankey diagram links the detected topic communities (left-hand side) to the clusters of the self-described goals and activities (right-hand side). We observe no significant correspondence between the topic communities and clusters, also confirmed by quantitative measures.

The correspondence between the two partitionings can be assessed by the B3 measure (Bagga and Baldwin 1998). B3 is considered the most appropriate measure for extrinsic evaluation of clustering (Amigó et al. 2009). It is similar to the Rand index, counting pairs of nodes in clusters, but is more sensitive in distinguishing small errors in big clusters from a large number of small errors in small clusters. The B3 measure decomposes the evaluation into calculating the precision and recall associated with each node in the two groupings. The correspondence between the two groupings is measured as the average value over all nodes, i.e., in our case all 565 organizations.

Let N be the set of all nodes in the two groupings, say groupings 1 and 2. For each node n∈N, we denote with L(n) the set of nodes with the same group label as n, i.e., members of the same group (community or cluster, in our case) in grouping 1. With C(n), we denote the set of all nodes which are members of the same group as n in grouping 2. The B3 precision of a node n, P(n), is computed as the fraction of nodes which have the same label as n in both groupings, out of all the nodes which are in the same group as n in grouping 2. Similarly, the B3 recall of a node n, R(n), is computed as the fraction of nodes with the same label in both groupings, out of all the nodes with the same label as n in grouping 1. The precision and recall are then combined into the F1 score, the harmonic mean of the two: $$ P(n) = \frac{\left|L(n)\cap C(n)\right|}{\left|C(n)\right|},\;\;\; R(n) = \frac{\left|L(n)\cap C(n)\right|}{\left|L(n)\right|},\;\;\; F_{1}(n) = 2\,\frac{P(n)\,R(n)}{P(n) + R(n)}. $$

The F1 score is a special case of Van Rijsbergen's effectiveness measure (Van Rijsbergen 1979), where precision and recall can be combined with different weights. The precision, recall, and F1 score of a grouping are micro averages of the scores of all the nodes. The resulting scores between the detected topic communities and the clusters of descriptions are P=0.315 and R=0.342, yielding F1=0.328. All the measures have relatively low values, and we can conclude that there is no significant matching, inclusion, or containment between the two groupings. This indicates that there might be considerable differences between the declared interests of the lobby organizations and their actual manifestation as captured by their answers to consultations. This result, which confirms our previous analysis (Sluban et al. 2017), is the main reason why in the current paper we focus on co-voting and profiling of the lobby organizations.

Interactive exploration of the lobby profiles

We implemented the Lobby Profile Explorer, an interactive web application that supports exploration of the 565 lobby organizations. It presents a response network of the lobby organizations that responded to the 21 public consultations in the area of Banking and Finance. The implemented visualization has a variety of features, supporting in-depth exploration of the lobby network and pairwise comparison of the lobby profiles. A screenshot of the Lobby Profile Explorer interface is shown in Fig. 5. The web application and all the data are publicly accessible at https://simpolproject.eu/tools/lobby-profile-explorer/ and at https://kt.ijs.si/lobby/.

Fig. 5 A screenshot of the Lobby Profile Explorer. On the right-hand side, the user can select consultation topics of interest for further exploration.

The response network is constructed from the responses to public consultations in terms of pairwise cosine similarities between the lobby organizations. The Lobby Profile Explorer supports selection of a range of similarity links to display in the network. Furthermore, the scope of the network, i.e., the set of lobby organizations shown, can be refined by selecting specific consultations or individual topic communities with shared predominant interests, i.e., common consultations, as described in the "Topic communities of responding organizations" subsection. In addition to the zoom and pan features, the visualization allows the user to explore and compare specific lobby organization responses. By hovering over or clicking on a lobby node, a panel with the organization information and answers to specific questions is displayed. While the panel is open, a selection (click) of another lobby node in the network will show a comparison of the answers the two organizations provided and highlight the matches. Such an in-depth comparison of the responses of two lobby organizations (Finance Watch and BlackRock) to a selected consultation is illustrated in Fig. 6.

Fig. 6 Interactive exploration and comparison of the lobby organizations. On the left-hand side is a network of organizations, linked by similar responses to the same consultations. On the right-hand side is a selected questionnaire, comparing answers by two selected lobby organizations.

We present how publicly accessible information can be used to assess the positions and leanings of major lobby organizations in the policy creation process. We focus on policy reforms in the area of Banking and Finance, and use data from the EU Transparency Register and the EC public consultations.
By combining methods from information retrieval, text mining, and network analysis, we study different aspects of the lobby organizations which engage in policy shaping. Our analysis shows that the categories representing the organization type do not align well with the clusters based on their declared goals and activities. Instead, responses to common consultations and similar answers to questions better characterize the true standings and leanings of the lobby organizations.

From the organizations' consultation responses we construct a response network representing inter-organization policy preferences. The community structure of this network reveals information about organizations' activities and similarities that cannot be obtained from the organizations' self-descriptions of their goals and activities. This implies that the network analysis adds an important aspect that is complementary to text analysis in the understanding of how lobby organizations engage in the policy making process.

Our findings suggest that if we want to build a map of the policy making arena, we should categorize lobby organizations based on their responses to policy issues via the consultations, rather than based on their general self-declared goals and activities, or based on their formal organization type categorization. Indeed, modeling the similarities of organizations' positions in the consultations by means of networks not only enables the discovery of the community structure, revealing actual common fields of engagement and interest, but also offers an intuitive representation of the lobbying ecosystem.

Building a consensus among stakeholders and a perception of transparency on stakeholders' roles are crucial for a stable policy making process, as highlighted by the EU Better Regulation Agenda. However, as we show here, understanding stakeholders' positions cannot simply rely on their static ex-ante categorizations. In contrast, it requires taking into account the actual positions of stakeholders, embedded in the context of the topic. Our work therefore makes a contribution to this issue by providing a new methodology to carry out such an analysis.

This work represents only the first step of a novel approach towards building maps of the policy arena. Future work will analyze how the design of the consultations could be improved in order to better identify the positions of the stakeholders with respect to the policy issues. The insights from this type of analysis and its future development can support the current EU policy agenda on increasing the transparency of the policy making process, by enabling stakeholders and citizens to better understand which interests the various organizations represent and how they are influencing the policy debates.

Abbreviations
Alpha: Krippendorff's Alpha agreement measure
EC: European Commission
TR: Transparency Register
JSD: Jensen-Shannon divergence

References
Amigó, E, Gonzalo J, Artiles J, Verdejo F (2009) A comparison of extrinsic clustering evaluation metrics based on formal constraints. Inf Retr 12(4):461–486.
Bagga, A, Baldwin B (1998) Entity-based cross-document coreferencing using the vector space model In: Proc. 17th Intl. Conf. on Comput. Linguistics (COLING), 79–85. ACL, Montreal.
Bastian, M, Heymann S, Jacomy M (2009) Gephi: An open source software for exploring and manipulating networks In: Proc. Intl. AAAI Conf. on Weblogs and Social Media, 361–361. AAAI, San Jose. https://gephi.org/.
Berkhout, J, Carroll BJ, Braun C, Chalmers AW, Destrooper T, Lowery D, Otjes S, Rasmussen A (2015) Interest organizations across economic sectors: explaining interest group density in the European Union. J Eur Public Policy 22(4):462–480. Blondel, VD, Guillaume J-L, Lambiotte R, Lefebvre E (2008) Fast unfolding of communities in large networks. J Stat Mech Theory Exp 2008(10):10008. Cherepnalkoski, D., Karpf A., Mozetič I, Grčar M (2016) Cohesion and coalition formation in the European Parliament: Roll-call votes and Twitter activities. PLoS ONE 11(11):0166586. https://doi.org/10.1371/journal.pone.0166586. Coen, D, Katsaitis A (2013) Chameleon pluralism in the EU: an empirical study of the European Commission interest group density and diversity across policy domains. J Eur Public Policy 20(8):1104–1119. Consultations (banking and finance) (2018). https://ec.europa.eu/info/consultations-banking-and-finance_en. Accessed 23 Apr 2018. European Political Strategy Centre (2018). http://ec.europa.eu/assets/epsc/pages/60-years. Accessed 23 Apr 2018. Feldman, R, Sanger J (2006) Text Mining Handbook: Advanced Approaches in Analyzing Unstructured Data. Cambridge University Press, New York. Hartigan, JA (1975) Clustering Algorithms. Wiley, New York. Heemskerk, EM, Takes FW (2016) The corporate elite community structure of global capitalism. New Polit Econ 21(1):90–118. https://doi.org/10.1080/13563467.2015.1041483. Kivelä, M, Arenas A, Barthelemy M, Gleeson JP, Moreno Y, Porter MA (2014) Multilayer networks. J Compl Netw 2(3):203–271. https://doi.org/10.1093/comnet/cnu016. Krippendorff, K (2013) Content Analysis, An Introduction to Its Methodology. 3rd edn. Sage Publications, Thousand Oaks. Lambiotte, R, Delvenne J-C, Barahona M (2009) Laplacian dynamics and multiscale modular structure in networks. https://arxiv.org/abs/0812.1770. LATINO text mining library (2018). https://github.com/LatinoLib/LATINO. Accessed 23 Apr 2018. Lin, J (1991) Divergence measures based on the Shannon entropy. IEEE Trans Inf Theory 37(1):145–151. Mozetič, I, Grčar M, Smailović J (2016) Multilingual Twitter sentiment classification: The role of human annotators. PLoS ONE 11(5):0155036. https://doi.org/10.1371/journal.pone.0155036. Piccardi, C, Calatroni L, Bertoni F (2010) Communities in Italian corporate networks. Phys A Stat Mech Appl 389(22):5247–5258. Public Consultations (2018). https://ec.europa.eu/info/consultations_en. Accessed 23 Apr 2018. Rand, WM (1971) Objective criteria for the evaluation of clustering methods. J Am Stat Assoc 66(336):846–850. https://doi.org/10.2307/2284239. Rasmussen, A, Carroll BJ, Lowery D (2014) Representatives of the public? Public opinion and interest group activity. Eur J Polit Res 53(2):250–268. Rousseeuw, PJ (1987) Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J Comput Appl Math 20:53–65. Sankey Diagram (2018). https://developers.google.com/chart/interactive/docs/gallery/sankey. Accessed 23 Apr 2018. Sluban, B, Smailović J, Novak PK, Mozetič I, Battiston S (2017) Mapping organizations' goals and leanings in the lobbyist network in banking and finance In: Proc. Complex Networks and Their Applications VI, 1149–1161.. Springer, Cham. https://doi.org/10.1007/978-3-319-72150-7_93. Transparency Register (2018). http://ec.europa.eu/transparencyregister. Accessed 23 Apr 2018. Transparency Register Data (2018). https://data.europa.eu/euodp/en/data/dataset/transparency-register. Accessed 23 Apr 2018. 
Tumminello, M, Miccichè S, Lillo F, Piilo J, Mantegna RN (2011) Statistically validated networks in bipartite complex systems. PLoS ONE 6(3):17994. https://doi.org/10.1371/journal.pone.0017994.
Van Rijsbergen, CJ (1979) Information Retrieval. Butterworth, London.
Vitali, S, Battiston S (2014) The community structure of the global corporate network. PLoS ONE 9(8):104655. https://doi.org/10.1371/journal.pone.0104655.
Ward, JH (1963) Hierarchical grouping to optimize an objective function. J Am Stat Assoc 58:236–244.
Wolf, M, Haar K, Hoedeman O (2014) The fire power of the financial lobby: A survey of the size of the financial lobby at the EU level. Corporate Europe Observatory, The Austrian Federal Chamber of Labour and The Austrian Trade Union Federation. https://corporateeurope.org/sites/default/files/attachments/financial_lobby_report.pdf.
Zeng, A, Battiston S (2016) The multiplex network of EU lobby organizations. PLoS ONE 11(10):0158062. https://doi.org/10.1371/journal.pone.0158062.

We thank Jasmina Smailović for her initial contributions to this study. The authors acknowledge financial support from the EU H2020 FET projects DOLFINS (grant no. 640772) and OpenMaker (grant no. 687941), and from the Slovenian Research Agency (research core funding no. P2-103). All the data are publicly available at https://kt.ijs.si/lobby/.

Borut Sluban and Mojca Mikac contributed equally to this work.

Department of Banking and Finance, University of Zurich, Andreasstrasse 15, Zürich, Switzerland: Borut Sluban & Stefano Battiston
Department of Knowledge Technologies, Jožef Stefan Institute, Jamova 39, Ljubljana, Slovenia: Mojca Mikac, Petra Kralj Novak & Igor Mozetič

BS and MM collected the data and performed the experiments. BS implemented the Lobby Profile Explorer. All the authors analyzed the results and wrote the paper. All authors read and approved the final manuscript.

Correspondence to Igor Mozetič.

Sluban, B., Mikac, M., Kralj Novak, P. et al. Profiling the EU lobby organizations in Banking and Finance. Appl Netw Sci 3, 44 (2018). https://doi.org/10.1007/s41109-018-0099-7

Keywords: Lobby organizations, Co-voting agreement
CommonCrawl
Classical Studies (2) Film, Media, Mass Communication (1) Politics and International Relations (1) Canadian Journal of Neurological Sciences (2) Journal of Social Policy (1) The Journal of Navigation (1) Bristol University Press (18) Liverpool University Press (13) Canadian Neurological Sciences Federation (2) RIN (1) Social Policy Association (1) Theories of Institutional Design (1) Cambridge Histories (2) Cambridge Histories - Ancient History & Classics (2) Cambridge Companions (1) The Cambridge Companions to Philosophy and Religion (1) The Evolutionary Map of the Universe Pilot Survey – ADDENDUM Ray P. Norris, Joshua Marvil, J. D. Collier, Anna D. Kapińska, Andrew N. O'Brien, L. Rudnick, Heinz Andernach, Jacobo Asorey, Michael J. I. Brown, Marcus Brüggen, Evan Crawford, Jayanne English, Syed Faisal ur Rahman, Miroslav D. Filipović, Yjan Gordon, Gülay Gürkan, Catherine Hale, Andrew M. Hopkins, Minh T. Huynh, Kim HyeongHan, M. James Jee, Bärbel S. Koribalski, Emil Lenc, Kieran Luken, David Parkinson, Isabella Prandoni, Wasim Raja, Thomas H. Reiprich, Christopher J. Riseley, Stanislav S. Shabala, Jaimie R. Sheil, Tessa Vernstrom, Matthew T. Whiting, James R. Allison, C. S. Anderson, Lewis Ball, Martin Bell, John Bunton, T. J. Galvin, Neeraj Gupta, Aidan Hotan, Colin Jacka, Peter J. Macgregor, Elizabeth K. Mahony, Umberto Maio, Vanessa Moss, M. Pandey-Pommier, Maxim A. Voronkov Journal: Publications of the Astronomical Society of Australia / Volume 39 / 2022 Published online by Cambridge University Press: 02 November 2022, e055 The Evolutionary Map of the Universe pilot survey Australian SKA Pathfinder Published online by Cambridge University Press: 07 September 2021, e046 We present the data and initial results from the first pilot survey of the Evolutionary Map of the Universe (EMU), observed at 944 MHz with the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. The survey covers $270 \,\mathrm{deg}^2$ of an area covered by the Dark Energy Survey, reaching a depth of 25–30 $\mu\mathrm{Jy\ beam}^{-1}$ rms at a spatial resolution of $\sim$ 11–18 arcsec, resulting in a catalogue of $\sim$ 220 000 sources, of which $\sim$ 180 000 are single-component sources. Here we present the catalogue of single-component sources, together with (where available) optical and infrared cross-identifications, classifications, and redshifts. This survey explores a new region of parameter space compared to previous surveys. Specifically, the EMU Pilot Survey has a high density of sources, and also a high sensitivity to low surface brightness emission. These properties result in the detection of types of sources that were rarely seen in or absent from previous surveys. We present some of these new results here. Cosmology with Phase 1 of the Square Kilometre Array Red Book 2018: Technical specifications and performance forecasts Square Kilometre Array Square Kilometre Array Cosmology Science Working Group:, David J. Bacon, Richard A. Battye, Philip Bull, Stefano Camera, Pedro G. Ferreira, Ian Harrison, David Parkinson, Alkistis Pourtsidou, Mário G. Santos, Laura Wolz, Filipe Abdalla, Yashar Akrami, David Alonso, Sambatra Andrianomena, Mario Ballardini, José Luis Bernal, Daniele Bertacca, Carlos A. P. Bengaly, Anna Bonaldi, Camille Bonvin, Michael L. Brown, Emma Chapman, Song Chen, Xuelei Chen, Steven Cunnington, Tamara M. Davis, Clive Dickinson, José Fonseca, Keith Grainge, Stuart Harper, Matt J. Jarvis, Roy Maartens, Natasha Maddox, Hamsa Padmanabhan, Jonathan R. 
Pritchard, Alvise Raccanelli, Marzia Rivi, Sambit Roychowdhury, Martin Sahlén, Dominik J. Schwarz, Thilo M. Siewert, Matteo Viel, Francisco Villaescusa-Navarro, Yidong Xu, Daisuke Yamauchi, Joe Zuntz Published online by Cambridge University Press: 06 March 2020, e007 We present a detailed overview of the cosmological surveys that we aim to carry out with Phase 1 of the Square Kilometre Array (SKA1) and the science that they will enable. We highlight three main surveys: a medium-deep continuum weak lensing and low-redshift spectroscopic HI galaxy survey over 5 000 deg2; a wide and deep continuum galaxy and HI intensity mapping (IM) survey over 20 000 deg2 from $z = 0.35$ to 3; and a deep, high-redshift HI IM survey over 100 deg2 from $z = 3$ to 6. Taken together, these surveys will achieve an array of important scientific goals: measuring the equation of state of dark energy out to $z \sim 3$ with percent-level precision measurements of the cosmic expansion rate; constraining possible deviations from General Relativity on cosmological scales by measuring the growth rate of structure through multiple independent methods; mapping the structure of the Universe on the largest accessible scales, thus constraining fundamental properties such as isotropy, homogeneity, and non-Gaussianity; and measuring the HI density and bias out to $z = 6$ . These surveys will also provide highly complementary clustering and weak lensing measurements that have independent systematic uncertainties to those of optical and near-infrared (NIR) surveys like Euclid, LSST, and WFIRST leading to a multitude of synergies that can improve constraints significantly beyond what optical or radio surveys can achieve on their own. This document, the 2018 Red Book, provides reference technical specifications, cosmological parameter forecasts, and an overview of relevant systematic effects for the three key surveys and will be regularly updated by the Cosmology Science Working Group in the run up to start of operations and the Key Science Programme of SKA1. 4 - The rise of the aspiring premier European city, 1998–2010 Michael Parkinson, University of Liverpool Book: Liverpool Beyond the Brink Published by: Liverpool University Press Print publication: 31 May 2019, pp 77-98 By the late 1990s Liverpool was closer to its leadership's aspiration to be a normal city. At this time a number of factors combined to improve its prospects of further progress. First, a New Labour government was elected in 1997 with a commitment and the capacity generated by an economic boom to increase public expenditure, tackle social deprivation in UK cities and address regional inequalities. Secondly, Liverpool's politics changed. The Labour group that had run the city for fifteen years was dramatically thrown out of office in 1998, just a year after a Labour government had been elected. The Labour group and leadership had run out of ideas and energy. The new Liberal Democrat administration took office with a very different, ambitious plan for Liverpool, to make it a premier European city, with the city centre as a key economic driver. Thirdly, there was a significant attempt to reorganise and make the local authority a modern efficient administration. Fourthly, the focus on the city centre was strengthened by the creation in 2001 of a city centre regeneration agency, Liverpool Vision. Finally, in 2003 Liverpool won and successfully delivered in 2008 the European Capital of Culture. These factors all underpinned Liverpool's renaissance. 
During this 'golden age' for the city a Liberal Democrat administration successfully exploited the New Labour government's commitment to cities, the revived private-sector interest in city centre investment and large amounts of national and especially European funding to make dramatic physical, economic and cultural changes that dragged Liverpool into the mainstream of national and European cities. At that time in 2001 I wrote that even for a place that had turned many corners, there was evidence that Liverpool was putting its bad old ways behind it as it attempted to become a leading European city.1 In a Celtic city that could legendarily provoke an argument in an empty room, there were signs that peace had broken out. The politics were much improved. The public sector was talking to the private sector. Even the government no longer treated the city as a pariah. Changing local authority performance and purpose The first step in the process was that the city council began to get its act together. The continuity provided by a substantial majority for the Liberal Democrats and a leader with a clear vision of where he wanted to take the city helped. Print publication: 31 May 2019, pp i-iv 2 - Liverpool goes on – but pulls back from – the brink, 1973–88 A changing economy and a changing polity This chapter looks at the way in which, during the 1970s and 1980s, Liverpool came to the brink of economic and political collapse but managed to pull back from it. In this period the rapid decline of the city's traditional port and manufacturing industries, the election of a Conservative government determined to cut public expenditure, and the peculiarities of the city's social structure and politics combined to throw Liverpool into confrontation and near chaos. The chapter outlines how economic decline and its impact upon social problems led a Militant Tendency controlled Labour council to self-destruct in a very public confrontation with national government. It also shows how that experience alarmed and frightened many inside the city and led to a gradual change in its politics, culture and policies. The period started with confusion and confrontation and ended in a degree of public and political consensus that a new approach to the future of the city was needed. But the consensus was fragile and the city still faced many economic, institutional and political challenges. There were three major phases of political life in the city in this period, which produced three different local economic strategies. The period 1973–83 witnessed a dramatic escalation of the city's economic problems, combined with a period of political paralysis because none of the city's three political parties could achieve the necessary support to get a majority on the council and develop a coherent response to economic decline. The period 1983–87 was marked by the rise of a powerful Labour majority on the city council which regarded a major public spending programme on the physical infrastructure of its working-class heartland as the only way to regenerate Liverpool's economy. Labour's strategy during this period alienated the Conservative government and the local private sector and ended in political and legal defeat for Labour. After 1987, as the failure of Labour's strategy became apparent to all political actors in the city, an alternative development strategy began to emerge. 
During this crucial period, changes of political leadership and strategies in the public and private sectors meant that the city's politics began to change from municipal socialism to urban entrepreneurialism. Print publication: 31 May 2019, pp v-vi Liverpool Beyond the Brink The Remaking of a Post Imperial City Print publication: 31 May 2019 Liverpool Beyond the Brink is a fascinating commentary on the economic decline that caused the physical, social and political fragmentation of the imperial city during the1970s and the efforts since then to revive and reconnect it. It charts Liverpool's fall in the 1980s, its gradual normalisation in the 1990s, its staggering achievements and, as a European city in the first part of this century, its efforts to be ambitious in an age of austerity. This thought-provoking work asks: how far has Liverpool come and where does it now stand in comparison with thirty years ago and alongside other cities in the UK? What were the most important forces driving change? Who helped the most and who helped the least? Who and where gained the most and who and where gained the least? Finally, the author asks what is next for Liverpool: what are the current challenges for the city? Liverpool Beyond the Brink identifies the key economic, social and political challenges facing the city today to ensure there is increased productivity, future development is high quality and that the benefits of the city's renaissance are experienced by all the people in Liverpool in all parts of the city. [Cover image: Liverpool Waterfront 2017? McCoy Wynne (mccoywynne.co.uk)] Print publication: 31 May 2019, pp viii-x Print publication: 31 May 2019, pp 175-178 7 - Liverpool beyond the brink: what are the lessons and what is to be done? This book has tried to show how and why Liverpool has taken the path it has during the past thirty years. It has argued that the city has gone from being a bad news story on the newspapers' front pages for all the wrong reasons to a good news story in the public eye for the right reasons. Given where the city was in the 1980s, it is extraordinary how far it has come and how much it has changed. The city, its leaders and people are more confident, more optimistic, more ambitious and more positive. The city has improved its performance on some of the economic fundamentals. It has seriously exploited its cultural and heritage assets to present a very different picture to the world, nationally and internationally. People are voting with their feet. After a dramatic decline, the city's population is growing again – up 50,000 in this century, faster than the nation in recent years and predicted to be half a million by 2020. It is also becoming younger and more ethnically diverse – a real break with the recent past. The mood music is far better than at any time since the 1960s and Beatlemania. Political relations have improved, particularly during the past decade. The different parts of the city have been gradually put together again physically. The post-imperial city is being remade. 'We have turned the liner.' – Max Steinberg, former chief executive, Liverpool Vision But it's still not perfect. Too many places and people have not shared in the city's renaissance. The social challenges are still big. Political relationships are much better but still need strengthening. The city's capacity to deliver regeneration is impressive, but its capacity to deliver economic competitiveness needs to be increased. 
And despite improvements, the city needs to do better on some of the key drivers of success. This final chapter pulls these threads together and does three things. First, it identifies some of the lessons for Liverpool, national government and a wider international audience about the city's renaissance. Second, it reflects on the contribution of the key players who raised the aspirations and ambition of the city. Finally, it identifies three key strategic challenges for the city as it tries to build on the progress it has made in the past two decades: productivity, people and place. 3 - Liverpool begins to become normal, 1988–98 The 1990s were a crucial bridge between the confusion and chaos of the 1980s and the European ambitions of the 2000s. Several key features marked the period. First, the Liverpool Labour Party moved further away from the Militant legacy towards the political centre as it tried to distance itself from the bad old days, although there remained two wings within the party. Secondly, there was a significant effort to deal with the weaknesses of the local authority – even though this was not wholly successful. Thirdly, there was a growing recognition of the need for partnership working between the public and private sectors, which was encouraged by both national government and the European Commission. Fourthly, there was a growing recognition, encouraged by Europe, of the importance of scale and the need for Liverpool to work at the wider city region level. Finally and most significantly, there was a growing recognition of the importance of the city centre economy to the future of Liverpool and increased efforts to improve its performance. Putting Humpty Dumpty together again – four big city initiatives A key feature of the Liverpool renaissance story is how the economic, physical and social disintegration of the city was gradually tackled, if not resolved, in the 1990s and 2000s. Slowly during this period the different parts of the city, which had lost their economic rationale and had been fragmented by the loss of population during the 1970s and 1980s, were gradually tied together into a more coherent place physically.1 During the 1990s the city's political, administrative and business leadership began to recognise the need to regenerate the declining city centre. Given the scale of collapse in the manufacturing and port sectors, the potential of business, professional, financial, retail, and tourism and knowledge sectors was an obvious – but previously neglected – area to exploit. This growing awareness can best be illustrated by four major initiatives that helped to transform the debate about Liverpool's economy and to significantly improve the performance of the city centre in particular. They were Liverpool City Challenge, the Merseyside Development Corporation, Speke Garston Development Company and Regeneration Partnership, and the European Commission's Objective 1 Programme. They were the keys to the successes of the 1990s. 5 - Continuing ambition in an age of austerity, 2010–19 99 Print publication: 31 May 2019, pp 99-116 Liverpool had a very good boom during the first decade of the twenty-first century. It improved its own economic performance and closed the gap on some other UK cities. But even though the city also had a relatively decent bust, austerity and the policies of the Coalition and subsequently the Conservative government had an impact. 
In fact, in 2010 Liverpool again underlined the peculiarities of its politics when, just as in 1998, city voters threw out of office the party that had just done very well in the national elections. Liverpool returned decisively to Labour in 2010. The subsequent period was marked by three features. The first was the creation in 2012 of the office of an elected mayor in Liverpool. The second was austerity, with a Labour council increasingly hit by cuts in national government resources trying to sustain the economic development of the city that had taken place in the golden age. During this time the Labour leadership and the mayor had to straddle two different horses – trying to maintain economic development while at the same time carrying out the government's austerity programme, which hit hardest the poorest people in the poorest parts of the city. And they were attempting to do so without losing political support in the city or returning to the failed 1980s politics of confrontation. The third feature of this period was devolution and the move by national government to give responsibilities, if not always resources, to local government, with a raft of institutional changes including City Deals and elected city and later city regional mayors and Combined Authorities. This move changed and challenged the face of decision making in Liverpool as in other British cities. This chapter looks at the way in which those forces played out. It does three things. It assesses the performance and impact of Liverpool's elected mayor as he tried to sustain economic development with a commitment to pragmatic rather than gesture politics. It judges how the move to city regional government, including the election of a city regional mayor, worked out in Liverpool. And it assesses the impact of austerity and cuts in government resources to local councils on the city's finances and capacity. 1 - What is the Liverpool story and why does it matter? Print publication: 31 May 2019, pp 1-20 Beyond the brink – but where is Liverpool going? Liverpool is an endlessly fascinating, challenging city. It has a grip on people's imaginations in a way few other cities do – nationally or internationally. Everybody – business, government, policy makers, the media, pundits and punters – wants to know what is happening in Liverpool. Is it up, is it down? Is it winning, is it losing? Is it at peace or is it at war? But the irony of Liverpool is that you never quite know where the story is going. Just when you thought a path had been laid out, it changes direction all over again. During the past century the city went from being the second city of the greatest empire the world had ever seen into a post-imperial period of economic decline and political despair. But it emerged phoenix-like as one of the most significant examples of urban renaissance in the UK. Thirty years ago few would have predicted its metamorphosis or even believed it was possible. Its story has many lessons for the external world and even more for Liverpool itself. Liverpool is different The constantly offside city' – Sir Simon Rattle 'Liverpool – Threshold to the Ends of the Earth' – Michael O'Mahoney 'Liverpool is the pool of life, it makes to live' – Carl Jung Liverpool may not be better than, but it is different from, other cities. It is not an English but a Celtic city. As the iconic banner on the Kop at Anfield proclaims, 'We're not English, we're Scousers.' 
Its cultural blend of poets, philosophers, storytellers, flâneurs, comedians and musicians; its wide river and seaport; its history of immigrants and emigrants; its combination of global aspiration and intense local chauvinism – all make it different. It is an aggravating, cosmopolitan, self-regarding, expansive place. Its people are simultaneously big-hearted, open-minded, generous, literate, argumentative and querulous. Liverpool is ever so slightly surreal. That's what makes it interesting – and important. It will always have ups and downs. Economic crashes, buildings and people will come and go. But Liverpool will always be the same. It will always attract the curious and the interesting. They come because they never know where the story is going. That's why it will always be there. That's why, if it didn't exist, we would have to invent it. Print publication: 31 May 2019, pp vii-vii 6 - The state of Liverpool's economy today This book has looked at the renaissance of Liverpool during the past thirty years as its leaders responded to its demise as a globally connected imperial city. It has shown how they tried to deal with the impact of post-imperial economic decline on its political, physical, social and financial infrastructure and behaviour. It has assessed the impact of a range of initiatives designed to regenerate Liverpool city centre and to underpin it with a more sustainable economy. It has shown how those initiatives, which were often externally generated by European or national government, gradually combined to regenerate and reconnect parts of the city into a more coherent place. Liverpool is going through a successful if still incomplete process of renaissance that has better equipped it to survive in a globally challenging economy. But this book has also shown that not all places or people have shared in the fruits of the city's success. This book has focused on the physical renewal of the city because this has been the primary focus of Liverpool's leaders since the collapse of the Militant Tendency regime in the late 1980s. The scale of the renewal challenge meant that this was inevitable. Many regeneration initiatives were impressive, well regarded and often nationally significant. The approach worked. Nevertheless, this approach will not be enough to ensure that in future Liverpool is a serious economic player at a European let alone global level. That will need a clearer focus on economic competitiveness beyond the regeneration of particular parts of the city, however important they are currently. The city will have to develop its assets and the drivers of a modern successful city region. It will also need a change of scale. Until relatively recently the effort to regenerate Liverpool has focused mainly though not exclusively on the city, in particular its centre. But economic logic as well as national policy means that Liverpool cannot operate at that scale in future. Its challenges will have to be met at the level of the city region – economically, socially, environmentally and politically. Given these economic realities, this chapter moves beyond the story of the historic renaissance of the city of Liverpool to examine the fundamental position of the Liverpool city region economy. It looks at some of the hard evidence about – but also some perceptions of – its performance and prospects. 
Giant Cystic Craniopharyngioma: Case Report Dwight Parkinson, Michael West Journal: Canadian Journal of Neurological Sciences / Volume 6 / Issue 3 / August 1979 This is a report of a giant cystic craniopharyngioma which escaped diagnosis for nine years in spite of seemingly thorough neuroradiological investigation, prior to the advent of the CT scan. By Christoph Bachhuber, Maria Carme Belarte, Anna Maria Bietti Sestieri, Emma Blake, Helena Bonet-Rosado, Shlomo Bunimovitz, Despina Catapoti, John F. Cherry, Derek B. Counts, Mariassunta Cuozzo, Marian H. Feldman, Kevin D. Fisher, Lin Foxhall, Michael L. Galaty, Raphael Greenberg, Alessandro Guidi, Yannis Hamilakis, Ömür Harmanşah, Tamar Hodos, Sarah Janes, Morag M. Kersel, Carl Knappett, Zvi Lederman, Thomas P. Leppard, Katina T. Lillios, Consuelo Mata-Parreño, Sandra Montón Subías, Irene Nikolakopoulou, Massimo Osanna, Giulio Palumbi, John K. Papadopoulos, William A. Parkinson, Mieke Prent, Damià Ramis, Corinna Riva, R. Gareth Roberts, Alonso Rodríguez Díaz, Marisa Ruiz-Gálvez, Joan Sanmartí, Davide Tanasi, Helena Tomas, Carlo Tronchetti, Nicholas C. Vella, Jaime Vives-Ferrándiz Sánchez, Jennifer M. Webb, Yuval Yekutieli Edited by A. Bernard Knapp, University of Glasgow, Peter van Dommelen, Brown University, Rhode Island Book: The Cambridge Prehistory of the Bronze and Iron Age Mediterranean Print publication: 12 January 2015, pp xiii-xvi 9 - Bronze Age European Elites: From the Aegean to the Adriatic and Back Again from Mobility, Migration and Colonisation By Michael L. Galaty, Helena Tomas, William A. Parkinson Print publication: 12 January 2015, pp 157-177 This chapter deals with the specific forms of Sicily's interaction with Aegean and eastern Mediterranean groups who were consistently present and active in the central Mediterranean throughout the second millennium BC. The focus is on Sicily and the Aeolian islands. The chapter discusses the cultural differences between the main island of Sicily and the minor islands of the Aeolian group and Ustica throughout the Early Bronze Age. The Sicilian Middle Bronze Age is characterized by a formally homogeneous archaeological culture, the so-called Thapsos-Milazzese facies that was shared by Sicily and the Aeolian islands and that is also documented at Ustica, Pantelleria and on the Poro promontory of the Calabria coast. The label 'Ausonian I' was first used by Bernabo Brea to refer to the Late Bronze Age facies at Lipari. Throughout the Late Bronze Age, the Pantalica culture continued the local, long-established tradition of integration with Aegean groups who were still present and active in Sicily. Selection Bias Introduced by Neuropsychological Assessments Robert Olson, Maureen Parkinson, Michael McKenzie Journal: Canadian Journal of Neurological Sciences / Volume 37 / Issue 2 / March 2010 Two prospective studies in patient with brain tumours were performed comparing the Mini Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA). The first assessed their feasibility and the second compared their diagnostic accuracy against a four-hour neuropsychological assessment (NPA). The introduction of the NPA decreased accrual and retention rates. We were therefore concerned regarding potential selection bias. Ninety-two patients were prospectively accrued and subsequently divided into three categories: a) no NPA required b) withdrew consent to NPA c) completed NPA. 
In order to quantify any potential bias introduced by the NPA, patient demographics and cognitive test scores were compared between the three groups. There were significant differences in age (p<0.001), education (p=0.034), dexamethasone use (p=0.002), MMSE (p=0.005), and MoCA scores (p<0.001) across the different study groups. Furthermore, with increasing involvement of the NPA, patients' cognitive scores and educational status increased, while their age, dexamethasone use, and opioid use all decreased. Individuals who completed the NPA had higher MoCA scores than individuals who were not asked to complete the NPA (24.7 vs. 20.5; p < 0.001). In addition, this relationship held when restricting the analyses to individuals with brain metastases (p < 0.001). In this study, the lengthy NPA chosen introduced a statistically and clinically significant source of selection bias. These results highlight the importance of selecting brief and well tolerated assessments when possible. However, researchers are challenged by weighing the improved selection bias associated with brief assessments at the cost of reduced diagnostic accuracy.
CommonCrawl
Peter L. Antonelli Peter Louis Antonelli (March 5, 1941 – February 15, 2020) was an American mathematician known for his work on mathematical biology, Finsler geometry, and their connections. Overview Antonelli was born on March 5, 1941, in Syracuse, New York, and became a student at Syracuse University, graduating in 1963.[1] He completed a PhD at Syracuse University, with the 1966 dissertation Structure Theory for Montgomery-Samelson Fiberings Between Manifolds supervised by Erik Hemmingsen.[2][3] After a short period as assistant professor at the University of Tennessee, Knoxville from 1967 to 1968, and an NSF post-doctoral fellowship at the Institute for Advanced Study in Princeton, New Jersey, from 1968 to 1970, he took a faculty position at the University of Alberta, Canada, where he stayed for the remainder of his career.[1][4] In 2006, he moved to Brazil with his wife and colleague S.F. Rutz, where he was a visiting professor at Federal University of Pernambuco, Recife.[5] He died in 2020.[6] Contributions to mathematics In his early years, Peter L. Antonelli's interests were focused on physics, especially general relativity. As a Ph.D. student, he studied mathematical objects such as special groups of diffeomorphisms and exotic spheres. After 1970, his interests shifted towards applied mathematics, especially applications of differential geometry to developmental biology, ecology, and genetics. As a visiting professor in the biology department at the University of Sussex in the early 1970s, he pursued interests that had developed from his work in the early 1960s as a United States Public Health Service Fellow in mathematical biology at the University of Chicago.[1] During the course of his career, Peter L. Antonelli published over 120 research papers in a variety of domains including non-linear mechanics, Hamiltonian systems, diffusion theory, stochastic calculus and stochastic geometry, geometric probability, differential game theory, bifurcation theory, geometry of paths, and Riemannian, Finslerian and Lagrangian geometries. The geometry of certain non-Riemannian metrics now bear his name.[1] Along with his extensive work on the mathematical ecology of the Great Barrier Reef,[7][8] he also showed that all living plants and animals are likely derived from two primitive species of bacteria, through the process of endosymbiosis.[9][10][11][12] Books Antonelli was the coauthor of books including: • Antonelli, Peter L.; Burghelea, Dan; Kahn, Peter J. (1971). The concordance-homotopy groups of geometric automorphism groups. Lecture Notes in Mathematics. Vol. 215. Berlin & New York: Springer-Verlag. doi:10.1007/BFb0061176. ISBN 978-3-540-05560-0.[13] • Antonelli, P. L.; Ingarden, R. S.; Matsumoto, M. (1993). The theory of sprays and Finsler spaces with applications in physics and biology. Fundamental Theories of Physics. Vol. 58. Dordrecht: Kluwer Academic Publishers. doi:10.1007/978-94-015-8194-3. ISBN 0-7923-2577-X.[14] • Antonelli, P. L.; Bradbury, R. H. (1996). Volterra–Hamilton models in the ecology and evolution of colonial organisms. Singapore: World Scientific Publishing. ISBN 981-02-2450-8.[15] • Antonelli, P. L.; Zastawniak, T. J. (1999). Fundamentals of Finslerian diffusion with applications. Fundamental Theories of Physics. Vol. 101. Dordrecht: Kluwer Academic Publishers. doi:10.1007/978-94-011-4824-5. ISBN 0-7923-5511-3.[16] His edited volumes include: • Antonelli, P. L., ed. (1985). Mathematical essays on growth and the emergence of form. 
University of Alberta Press.[17] • Antonelli, P. L., ed. (2000). Finslerian geometries: a meeting of minds. Fundamental Theories of Physics. Vol. 109. Dordrecht: Kluwer Academic Publishers. ISBN 0-7923-6115-6.[18] • Antonelli, P. L., ed. (2003). Handbook of Finsler geometry, Vols. I & II. Dordrecht: Kluwer Academic Publishers. doi:10.1007/978-94-007-0942-3. ISBN 1-4020-1557-7. MR 2067663.[19] Recognition In 1987, Antonelli was awarded a McCalla Professorship at the University of Alberta for research excellence. In 2001, he was awarded the degree of Honorary Professor from Alexandru Ioan Cuza University in Romania. Papers from a conference held there in honor of his 60th birthday were later published as a festschrift.[1] References 1. Anastasiei, M.; Antonelli, P. L. (2003). Anastasiei, M; Antonelli, P. L (eds.). Finsler and Lagrange Geometries: Proceedings of a Conference held on August 26–31, Iaşi, Romania. Springer Netherlands. doi:10.1007/978-94-017-0405-2. ISBN 978-90-481-6325-0. See especially Radi Miron and Mihai Anastasiei, "Professor Dr. Peter Louis Antonelli at Sixty", pp. xi–xiii. 2. "Alumni". College of Arts & Sciences at Syracuse University. Retrieved 2022-01-16. 3. Peter L. Antonelli at the Mathematics Genealogy Project 4. "Retired Faculty | Mathematical and Statistical Sciences". www.ualberta.ca. Retrieved 2022-01-14. 5. "Peter Louis Antonelli". Escavador (in Brazilian Portuguese). Retrieved 2022-01-16. 6. Gielis, Johan; Goemans, Wendy (2020). "Editorial". Growth and Form. 1 (1): 44. doi:10.2991/gaf.k.200131.001. S2CID 243398589. 7. Sapp, Jan (2003). What is Natural?: Coral Reef Crisis. Oxford University Press. ISBN 978-0-19-516178-6. 8. Bradbury, R. H. (1991). "UNDERSTANDING "ACANTHÁSTER"". Coenoses. 6 (3): 121–126. ISSN 0393-9154. JSTOR 43461274. 9. "Researchers Show Evolutionary Theory Adds Up". ScienceDaily. Retrieved 2022-10-26. 10. "Researchers show evolutionary theory adds up". EurekAlert!. Retrieved 2022-10-26. 11. Holmes, Bob (24 January 2004). "Early life wouldn't stand a chance in a commune". New Scientist. Retrieved 2022-10-26. 12. Whitfield, John (2004-02-19). "Born in a watery commune". Nature. 427 (6976): 674–677. Bibcode:2004Natur.427..674W. doi:10.1038/427674a. PMID 14973452. S2CID 28739859. 13. Reviews of The concordance-homotopy groups of geometric automorphism groups: E. C. Turner, MR0358834; M. V. Mielke, Zbl 0222.57001 14. Reviews of The theory of sprays and Finsler spaces with applications in physics and biology: Howard E. Brandt (1994), Foundations of Physics, doi:10.1007/BF02054792; Sorin Dragomir, MR1273129; R. Miron, Zbl 0821.53001 15. Review of Volterra-Hamilton models in the ecology and evolution of colonial organisms: J. M. Cushing, Zbl 0930.92031 16. Review of Fundamentals of Finslerian diffusion with applications: David Bao, MR1743065 17. Reviews of Mathematical essays on growth and the emergence of form: Robert Rosen (1987), American Scientist, JSTOR 27854502; R.M.Shymko (1987), Mathematical Biosciences, doi:10.1016/0025-5564(87)90012-5; René Thom (1986), Quarterly Review of Biology, doi:10.1086/415262, JSTOR 2827862 18. Review of Finslerian geometries: a meeting of minds: G. Yu. Bogoslovsky (2001), General Relativity and Gravitation, doi:10.1023/a:1012097704400 19. Review of Handbook of Finsler geometry: Lajos Tamássy, Zbl 1057.53001 Further reading • Anastasiei, Mihai (2003). "On P. L. Antonelli works in mathematical biology". Scientific Annals of University of Agricultural Sciences and Veterinary Medicine. 46 (2): 3–8. MR 2149028. 
Wikipedia
\begin{document} \begin{frontmatter} \title{Taylor expansion based fast Multipole Methods for 3-D Helmholtz equations in Layered Media} \author[add1,add2]{Bo Wang} \author[add3]{Duan Chen} \author[add4]{Bo Zhang} \author[add2]{Wenzhong Zhang} \author[add5]{Min Hyung Cho} \author[add2]{Wei Cai\corref{mycorrespondingauthor}} \cortext[mycorrespondingauthor]{Corresponding author} \ead{[email protected]} \address[add1]{LCSM, Ministry of Education, School of Mathematics and Statistics, Hunan Normal University, Changsha, Hunan 410081, P. R. China} \address[add2]{Department of Mathematics, Southern Methodist University, Dallas, TX 75275, USA} \address[add3]{Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223, USA} \address[add4]{Department of Computer Science, Indiana University, IN 47408, USA} \address[add5]{Department of Mathematical Science, University of Massachusetts Lowell, Lowell, MA 01854, USA} \begin{abstract} In this paper, we develop fast multipole methods for 3D Helmholtz kernel in layered media. Two algorithms based on different forms of Taylor expansion of layered media Green's function are developed. A key component of the first algorithm is an efficient algorithm based on discrete complex image approximation and recurrence formula for the calculation of the layered media Green's function and its derivatives, which are given in terms of Sommerfeld integrals. The second algorithm uses symmetric derivatives in the Taylor expansion to reduce the size of precomputed tables for the derivatives of layered media Green's function. Numerical tests in layered media have validated the accuracy and $O(N)$ complexity of the proposed algorithms. \end{abstract} \begin{keyword} Fast multipole method, layered media, Helmholtz equation, Taylor expansion \end{keyword} \end{frontmatter} \section{Introduction} Wave scattering of objects embedded in layered media can be computed by integral equation (IE) methods using domain Green's functions which satisfy the transmission conditions at material layer interfaces and the Sommerfeld radiation condition at infinity (cf. \cite{michalski1990electromagnetic,millard2003fast, chen2018accurate,lai2014fast}). As a result, IE methods based on domain Green's functions only require solution unknowns to be given on the scatterer's surface for a surface IE formulation or over the scatterer body for a volume IE. This is different from formulations based on free space Green's functions, which will need additional unknowns on the infinite material layer interfaces (cf. \cite{cho2018spectrally,cho2015robust,lai2015fast}). For the solution of the resulting linear system from the discretized IEs, iterative solvers such as GMRES are usually used, which require the product of a full matrix, from the discretization of the integral operator, and a solution vector. A direct product will generate an $O(N^{2})$ cost where $N$ is the size of the matrix. Therefore, the main computational issue for IE methods is to develop fast solvers to speed up such a matrix-vector product. The popular fast method is the fast multipole method (FMM) developed by Greengard and Rokhlin using multipole expansions for the free space Green's functions \cite{greengard1987fast,greengard1997new}. However, extending FMMs to layered media has been a long outstanding challenge for IE methods. 
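As a simple numerical illustration of why fast solvers can improve on the $O(N^{2})$ matrix-vector product at all, note that the interaction block between two well-separated clusters of points is numerically low-rank for the 3-D Helmholtz kernel $e^{\ri kr}/(4\pi r)$, and can therefore be compressed by truncated expansions (multipole or Taylor). The following short Python sketch (the point counts, wavenumber and box placement are arbitrary choices made only for this illustration, and the snippet is not part of the algorithms developed in this paper) checks this by computing the singular values of such a block:
\begin{verbatim}
# Numerical rank of a far-field interaction block for the 3-D Helmholtz kernel.
import numpy as np

rng = np.random.default_rng(0)
k = 2.0                                   # wavenumber (arbitrary choice)
n = 200                                   # points per cluster (arbitrary choice)
src = rng.uniform(0.0, 1.0, size=(n, 3))                              # sources in the unit box
trg = rng.uniform(0.0, 1.0, size=(n, 3)) + np.array([4.0, 0.0, 0.0])  # well-separated targets

r = np.linalg.norm(trg[:, None, :] - src[None, :, :], axis=-1)
A = np.exp(1j * k * r) / (4.0 * np.pi * r)    # dense interaction block, n*n entries

s = np.linalg.svd(A, compute_uv=False)
for tol in (1e-3, 1e-6, 1e-9):
    print(tol, int(np.sum(s > tol * s[0])))   # numerical rank at each tolerance, far below n
\end{verbatim}
The rapid decay of the singular values is precisely what a truncated multipole or Taylor expansion exploits, replacing the dense $O(N^{2})$ product by low-rank far-field approximations plus near-field corrections.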
Numerical algorithms for layered-media problems have traditionally been carried out in the Fourier spectral domain due to the availability of the closed-form Green's functions (CFGF's) for layered media in the spectral domain. Since a series of techniques have been developed to obtain approximated CFGF's for layered media in the spatial domain (cf. \cite{chow1991closed,aksun1996robust, alparslan2010closed}), extensions of FMMs to layered media problems were proposed by applying spherical harmonic expansions to the approximated CFGF's (see \cite{jandhyala1995multipole,gurel1996electromagnetic, geng2001fast} for Laplace, Helmholtz and Maxwell's equations, respectively). For the Laplace equation, only real images were used in approximating the CFGF's, and traditional FMMs can then be applied directly to the approximated CFGF's. However, for Helmholtz and Maxwell's equations in layered media, complex images are required to obtain approximated CFGF's. Thus, addition theorems used for the free space FMMs need to be modified for wave functions with complex arguments; so far, no rigorous mathematical formulation or numerical implementation has been obtained. Other efforts to speed up the computation of integral operators for layered media Green's functions include the inhomogeneous plane wave method \cite{huchew2000}, the windowed Green's function method for layered media \cite{bruno2016windowed}, and the cylindrical wave decomposition of the Green's function in 3-D and 2-D FMM \cite{cho2012}. In this paper, we will develop FMMs for the 3D Helmholtz equation in layered media based on Taylor expansion instead of the multipole expansion. In addition to the original FMMs \cite{greengard1987fast, greengard1997new} using spherical harmonic expansion, FMMs based on Taylor expansion (TE) have already been developed and investigated for free space Green's functions and other kernels \cite{tausch2004variable, ying2004kernel,fong2009black,darve2004efficient} and have been shown to have similar error estimates as the multipole expansion using spherical harmonics \cite{tausch2004variable}. Since the analytical form of the Green's functions in layered media usually can only be obtained in the spectral domain using Sommerfeld integrals, it will be more convenient to develop FMMs based on Taylor expansions in multi-layer media. We will start with the derivation of the analytical form of the Green's function in the spectral domain for Helmholtz equations in multi-layer media. Two different versions of Taylor expansion based FMMs (TE-FMMs) will be proposed to compute the interaction between sources and targets located in different layers. The two versions come from using Taylor expansions with nonsymmetric derivatives or symmetric derivatives, respectively. For the two algorithms, different strategies are introduced for efficient computation of the translation operator from the far-field expansion centered at a source box to the local expansion centered at a target box. In the case of Taylor expansion with nonsymmetric derivatives, we propose an efficient and low memory algorithm based on the discrete complex image method (DCIM) approximation of the Green's functions in the spectral domain together with recurrence formulas for derivatives of the free space Green's function. Meanwhile, for the case of Taylor expansion with symmetric derivatives, precomputed tables for the translation operators will be used instead. 
With these Taylor expansion based FMMs, fast computation is achieved for interactions among particles in multi-layer media, as shown in numerical examples for two-layer and three-layer cases. The rest of this paper is organized as follows. In section 2, a general formulation for the Green's function of the Helmholtz equation in multi-layer media is derived. Unlike the derivation presented in \cite{cho2012parallel}, the derivation here shows source and target information in separate parts of the formulas for general multi-layered media. In section 3, the first version of the TE-FMM using non-symmetric derivatives is proposed for multi-layer media. Using the DCIM approximation and recurrence formulas for derivatives, a fast algorithm for the computation of the translation operator from a TE in a source box to a TE in a target box is given. Then, a fast algorithm for interactions among particles in multi-layer media is presented. The second TE-FMM using symmetric derivatives is developed in Section 4. There, we first introduce the TE-FMM using symmetric derivatives for the free space case, and then extend it to the case of multi-layer media. Numerical results using both versions of the TE-FMMs are given for two- and three-layer media in Section 5. Various efficiency comparison results are given to show the performance of the proposed TE-FMMs. \section{Spectral form of Green's function in multi-layer media} In this section, we briefly summarize the derivation of the Green's function of Helmholtz equations in multi-layer media \cite{cho2012parallel} with source and target coordinates separated in the Fourier spectral form. \subsection{General formula} Consider a layered medium consisting of $L$ interfaces located at $z=d_{\ell},\ell=0,1,\cdots,L-1$, as shown in Fig. \ref{layerstructure}. Suppose we have a point source at $\boldsymbol{r}^{\prime}=(x^{\prime},y^{\prime},z^{\prime})$ in the $\ell^{\prime}$th layer ($d_{\ell^{\prime}}<z^{\prime}<d_{\ell^{\prime}-1}$). Then, the layered media Green's function for the Helmholtz equation satisfies \begin{equation} \boldsymbol{\Delta}u_{\ell\ell^{\prime}}(\boldsymbol{r},\boldsymbol{r}^{\prime })+k_{\ell}^{2}u_{\ell\ell^{\prime}}(\boldsymbol{r},\boldsymbol{r}^{\prime })=-\delta(\boldsymbol{r},\boldsymbol{r}^{\prime}), \end{equation} at a field point $\boldsymbol{r}=(x,y,z)$ in the $\ell$th layer ($d_{\ell}<z<d_{\ell-1}$), where $\delta(\boldsymbol{r},\boldsymbol{r}^{\prime})$ is the Dirac delta function and $k_{\ell}$ is the wave number in the $\ell$th layer. \begin{figure} \caption{Sketch of the layer structure for general multi-layer media.} \label{layerstructure} \end{figure} Define the partial Fourier transform along the $x-$ and $y-$directions for $u_{\ell\ell^{\prime}}(x,y,z)$ as \[ \widehat{u}_{\ell\ell^{\prime}}(k_{x},k_{y} ,z)=\mathscr{F}[u_{\ell\ell'}(\bs r, \bs r')](k_{x},k_{y},z):=\int_{-\infty }^{\infty}\int_{-\infty}^{\infty}u_{\ell\ell^{\prime}} (\boldsymbol{r},\boldsymbol{r}^{\prime})e^{-\ri(k_{x}x+k_{y}y)}dxdy. \] Then, $\widehat{u}_{\ell\ell^{\prime}}(k_{x},k_{y},z)$ satisfies the second order ordinary differential equations \[ \frac{d^{2}\widehat{u}_{\ell\ell^{\prime}}(k_{x},k_{y},z)}{dz^{2}}+(k_{\ell }^{2}-k_{\rho}^{2})\widehat{u}_{\ell\ell^{\prime}}(k_{x},k_{y} ,z)=-e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})}\delta(z,z^{\prime}), \] where $k_{\rho}^{2}=k_{x}^{2}+k_{y}^{2}$. 
The system of ordinary differential equations can be solved analytically for each layer in $z$ by imposing transmission conditions at the interface between $\ell$th and $(\ell-1)$th layer ($z=d_{\ell-1})$, i.e., \[ \widehat{u}_{\ell-1,\ell^{\prime}}(k_{x},k_{y},z)=\widehat{u}_{\ell \ell^{\prime}}(k_{x},k_{y},z),\quad k_{\ell-1}\frac{d\widehat{u}_{\ell -1,\ell^{\prime}}(k_{x},k_{y},z)}{dz}=k_{\ell}\frac{d\widehat{u}_{\ell \ell^{\prime}}(k_{x},k_{y},z)}{dz}, \] as well as decay conditions in the top and bottom-most layers for $z\rightarrow\pm\infty$. Generally, an analytic solution has the following form \begin{equation} \begin{cases} \displaystyle\widehat{u}_{\ell\ell^{\prime}}(k_{x},k_{y},z)=A_{\ell \ell^{\prime}}\cosh(\ri k_{\ell z}z_{\ell})+B_{\ell\ell^{\prime}}\sinh(\ri k_{\ell z}z_{\ell}),\quad\ell\neq\ell^{\prime},L,\\[10pt] \displaystyle\widehat{u}_{\ell^{\prime}\ell^{\prime}}(k_{x},k_{y} ,z)=A_{\ell^{\prime}\ell^{\prime}}\cosh(\ri k_{\ell^{\prime}z}z_{\ell^{\prime }})+B_{\ell^{\prime}\ell^{\prime}}\sinh(\ri k_{\ell^{\prime}z}z_{\ell^{\prime }})+\widehat{G}(k_{\ell^{\prime}z},z-z^{\prime}),\\[10pt] \displaystyle\widehat{u}_{L\ell^{\prime}}(k_{x},k_{y},z)=A_{L\ell^{\prime} }\cosh(\ri k_{\ell z}z)+B_{L\ell^{\prime}}\sinh(\ri k_{\ell z}z), \end{cases} \label{solutionformula} \end{equation} where \[ \widehat{G}(k_{\ell^{\prime}z},z-z^{\prime})=\vartheta\frac{e^{\ri k_{\ell^{\prime}z}|z-z^{\prime}|}}{k_{\ell^{\prime}z}},\quad\mathrm{with} \;\;\vartheta=\frac{\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})}}{2} \] is the Fourier transform of free space Green's function with wave number $k_{\ell^{\prime}}$. Notations $k_{\ell z}=\sqrt{k_{\ell}^{2}-k_{\rho}^{2}}$, $k_{\ell^{\prime}z}=\sqrt{k_{\ell^{\prime}}^{2}-k_{\rho}^{2}}$ and local coordinate $z_{\ell}:=z-d_{\ell}$ are used here (see Fig. \ref{layerstructure} ). 
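For completeness, we recall why $\widehat{G}$ has the stated form; this is a standard one-dimensional fact rather than part of the layered-media derivation. The outgoing solution of $w''(z)+k_{\ell^{\prime}z}^{2}w(z)=-c\,\delta(z-z^{\prime})$ is $w(z)=C e^{\ri k_{\ell^{\prime}z}|z-z^{\prime}|}$, whose derivative jumps by $2\ri k_{\ell^{\prime}z}C$ across $z=z^{\prime}$; matching the jump to $-c$ gives $C=\ri c/(2k_{\ell^{\prime}z})$, i.e. $w(z)=\frac{\ri c}{2}\frac{e^{\ri k_{\ell^{\prime}z}|z-z^{\prime}|}}{k_{\ell^{\prime}z}}$. Taking $c=e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})}$ yields exactly $\widehat{G}(k_{\ell^{\prime}z},z-z^{\prime})=\vartheta\, e^{\ri k_{\ell^{\prime}z}|z-z^{\prime}|}/k_{\ell^{\prime}z}$ with $\vartheta$ as above.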
The interface conditions imply that the coefficients in the $\ell$th layer $V_{\ell\ell^{\prime}}=(A_{\ell\ell^{\prime}},B_{\ell\ell^{\prime} })^{\mathrm{T}}$ can be recursively determined as follows \begin{equation} \begin{split} & V_{\ell\ell^{\prime}}=\prod\limits_{k=\ell+1}^{L}\mathbb{T}_{k-1,k} V_{L\ell^{\prime}},\quad\ell^{\prime}<\ell<L,\quad V_{\ell^{\prime}\ell^{\prime}} =\prod\limits_{k=\ell^{\prime}+1}^{L}\mathbb{T}_{k-1,k}V_{L\ell^{\prime} }+\boldsymbol{S}_{\ell^{\prime},\ell^{\prime}+1},\\ & V_{\ell^{\prime}-1,\ell^{\prime}}=\prod\limits_{k=\ell^{\prime}} ^{L}\mathbb{T}_{k-1,k}V_{L\ell^{\prime}}+\boldsymbol{S}_{\ell^{\prime}-1,\ell^{\prime} }+\mathbb{T}_{\ell^{\prime}-1,\ell^{\prime}}\boldsymbol{S}_{\ell^{\prime} ,\ell^{\prime}+1},\\ & V_{\ell\ell^{\prime}}=\prod\limits_{k=\ell+1}^{\ell^{\prime}-1} \mathbb{T}_{k-1,k}\Big(\prod\limits_{k=\ell^{\prime}}^{L}\mathbb{T} _{k-1,k}V_{L\ell^{\prime}}+\mathbb{T}_{\ell^{\prime}-1,\ell^{\prime} }\boldsymbol{S}_{\ell^{\prime},\ell^{\prime}+1}+\boldsymbol{S}_{\ell^{\prime }-1,\ell^{\prime}}\Big),\quad0<\ell<\ell^{\prime}-1, \end{split} \label{recursion} \end{equation} where the transfer matrices $\mathbb{T}_{\ell-1,\ell}$ are given by \[ \begin{split} \mathbb T_{\ell-1,\ell}= & \begin{pmatrix} \displaystyle\cosh(\ri k_{\ell z}D_{\ell}) & \displaystyle\sinh(\ri k_{\ell z}D_{\ell})\\[6pt] \displaystyle\frac{k_{\ell}k_{\ell z}\sinh(\ri k_{\ell z}D_{\ell})}{k_{\ell -1}k_{\ell-1,z}} & \displaystyle\frac{k_{\ell}k_{\ell z}\cosh(\ri k_{\ell z}D_{\ell})}{k_{\ell-1}k_{\ell-1,z}} \end{pmatrix} ,\quad\ell=1,2,\cdots,L-1,\\ \mathbb T_{L-1,L}= & \begin{pmatrix} \displaystyle\cosh(\ri k_{Lz}d_{L-1}) & \displaystyle\sinh(\ri k_{Lz} d_{L-1})\\[6pt] \displaystyle\frac{k_{L}k_{Lz}\sinh(\ri k_{Lz}d_{L-1})}{k_{L-1}k_{L-1,z}} & \displaystyle\frac{k_{L}k_{Lz}\cosh(\ri k_{Lz}d_{L-1})}{k_{L-1}k_{L-1,z}} \end{pmatrix} , \end{split} \] and the source vectors are defined as follows: \begin{equation} \boldsymbol{S}_{\ell^{\prime}-1,\ell^{\prime}}= \begin{pmatrix} \displaystyle1\\ \displaystyle\frac{k_{\ell^{\prime}}k_{\ell^{\prime}z}}{k_{\ell^{\prime} -1}k_{\ell^{\prime}-1,z}} \end{pmatrix} \frac{\vartheta e^{\ri k_{\ell^{\prime}z}(d_{\ell^{\prime}-1}-z^{\prime})} }{k_{\ell^{\prime}z}},\quad\boldsymbol{S}_{\ell^{\prime},\ell^{\prime} +1}=\left( \begin{array} [c]{r} \displaystyle-1\\ \displaystyle1 \end{array} \right) \frac{\vartheta e^{\ri k_{\ell^{\prime}z}(z^{\prime}-d_{\ell^{\prime }})}}{k_{\ell^{\prime}z}}. \end{equation} The decay conditions on the top and bottom layers yield initial values for the recursion \eqref{recursion} \begin{equation} A_{0\ell^{\prime}}=B_{0\ell^{\prime}},\quad A_{L\ell^{\prime}}=-B_{L\ell ^{\prime}}. 
\end{equation} Therefore, the system of algebraic equations between $V_{0\ell^{\prime}}$ and $V_{L\ell^{\prime}}$ can be found from \eqref{recursion} as \begin{equation} \begin{split} \begin{pmatrix} A_{0\ell^{\prime}}\\ A_{0\ell^{\prime}} \end{pmatrix} = & \prod\limits_{k=1}^{L}\mathbb{T}_{k-1,k}V_{L\ell^{\prime}}+\prod \limits_{k=1}^{\ell^{\prime}-1}\mathbb{T}_{k-1,k}\boldsymbol{S}_{\ell^{\prime }-1,\ell^{\prime}}+\prod\limits_{k=1}^{\ell^{\prime}}\mathbb{T}_{k-1,k} \boldsymbol{S}_{\ell^{\prime},\ell^{\prime}+1}\\ = & \begin{pmatrix} \alpha_{11} & \alpha_{12}\\ \alpha_{21} & \alpha_{22} \end{pmatrix} \left( \begin{array} [c]{r} A_{L\ell^{\prime}}\\ -A_{L\ell^{\prime}} \end{array} \right) + \begin{pmatrix} \beta_{11}\\ \beta_{21} \end{pmatrix} \vartheta e^{\ri k_{\ell^{\prime}z}(d_{\ell^{\prime}-1}-z^{\prime})}+ \begin{pmatrix} \beta_{12}\\ \beta_{22} \end{pmatrix} \vartheta e^{\ri k_{\ell^{\prime}z}(z^{\prime}-d_{\ell^{\prime}})}. \end{split} \end{equation} It is important to point out that $\alpha_{ij}$ and $\beta_{ij}$ are independent of the source location $(x^{\prime},y^{\prime},z^{\prime})$, which only depend on $\{k_{\ell},k_{\ell z}\}_{\ell=0}^{L}$ and $\{D_{\ell} \}_{\ell=1}^{L-1}$. Therefore \begin{equation} \begin{split} A_{0\ell^{\prime}}=B_{0\ell^{\prime}}= & \frac{[(\alpha_{22}-\alpha _{21})\beta_{11}+(\alpha_{11}-\alpha_{12})\beta_{21}]}{[(\alpha_{11} -\alpha_{12})-(\alpha_{21}-\alpha_{22})]}\frac{\ri e^{-\ri(k_{x}x^{\prime }+k_{y}y^{\prime})}}{2}\frac{e^{\ri k_{\ell^{\prime}z}(d_{\ell^{\prime} -1}-z^{\prime})}}{k_{\ell^{\prime}z}}\\ & +\frac{[(\alpha_{22}-\alpha_{21})\beta_{12}+(\alpha_{11}-\alpha_{12} )\beta_{22}]}{[(\alpha_{11}-\alpha_{12})-(\alpha_{21}-\alpha_{22})]}\frac{\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})}}{2}\frac{e^{\ri k_{\ell^{\prime} z}(z^{\prime}-d_{\ell^{\prime}})}}{k_{\ell^{\prime}z}},\\ A_{L\ell^{\prime}}=-B_{L\ell^{\prime}}= & \frac{\beta_{21}-\beta_{11} }{[(\alpha_{11}-\alpha_{12})-(\alpha_{21}-\alpha_{22})]}\frac{\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})}}{2}\frac{e^{\ri k_{\ell^{\prime} z}(d_{\ell^{\prime}-1}-z^{\prime})}}{k_{\ell^{\prime}z}}\\ & +\frac{\beta_{22}-\beta_{12}}{[(\alpha_{11}-\alpha_{12})-(\alpha _{21}-\alpha_{22})]}\frac{\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})}} {2}\frac{e^{\ri k_{\ell^{\prime}z}(z^{\prime}-d_{\ell^{\prime}})}} {k_{\ell^{\prime}z}}. \end{split} \label{coefficientsolver} \end{equation} Together with recursions \eqref{recursion}, any coefficients $\{A_{\ell \ell^{\prime}},B_{\ell\ell^{\prime}}\}_{\ell=0}^{L}$ can be represented by \begin{equation} \begin{split} & A_{\ell\ell^{\prime}}=\big(A_{\ell\ell^{\prime}}^{1}e^{\ri k_{\ell^{\prime }z}(d_{\ell^{\prime}-1}-z^{\prime})}+A_{\ell\ell^{\prime}}^{2}e^{\ri k_{\ell^{\prime}z}(z^{\prime}-d_{\ell^{\prime}})}\big)\frac{\ri e^{-\ri(k_{x} x^{\prime}+k_{y}y^{\prime})}}{2k_{\ell^{\prime}z}},\\ & B_{\ell\ell^{\prime}}=\big(B_{\ell\ell^{\prime}}^{1}e^{\ri k_{\ell^{\prime }z}(d_{\ell^{\prime}-1}-z^{\prime})}+B_{\ell\ell^{\prime}}^{2}e^{\ri k_{\ell^{\prime}z}(z^{\prime}-d_{\ell^{\prime}})}\big)\frac{\ri e^{-\ri(k_{x} x^{\prime}+k_{y}y^{\prime})}}{2k_{\ell^{\prime}z}}, \end{split} \end{equation} where coefficients $\{A_{\ell\ell^{\prime}}^{1},A_{\ell\ell^{\prime}} ^{2};B_{\ell\ell^{\prime}}^{1},B_{\ell\ell^{\prime}}^{2}\}_{\ell,\ell^{\prime }=0}^{L}$ only depend on $\{k_{\ell},k_{\ell z}\}_{\ell=0}^{L}$ and $\{D_{\ell}\}_{\ell=1}^{L-1}$. Expressions given by \eqref{solutionformula} have upgoing and downgoing wave mixed. 
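Before rewriting \eqref{solutionformula} in terms of upgoing and downgoing components, we note, as a purely illustrative sketch (ours, not part of the derivation), how the $k_{\rho}$-dependent factors in \eqref{coefficientsolver} could be assembled numerically from the transfer matrices; the use of NumPy and the helper names below are assumptions for illustration only.
\begin{verbatim}
import numpy as np

def transfer_matrix(k_prev, kz_prev, k_cur, kz_cur, D):
    # 2x2 transfer matrix T_{l-1,l} for an interior interface, cf. the
    # formulas below eq. (recursion); D plays the role of D_l.
    c, s = np.cosh(1j * kz_cur * D), np.sinh(1j * kz_cur * D)
    r = (k_cur * kz_cur) / (k_prev * kz_prev)
    return np.array([[c, s], [r * s, r * c]])

def top_bottom_coefficients(alpha, beta):
    # Given the 2x2 arrays (alpha_ij) and (beta_ij) of eq. (coefficientsolver),
    # return the factors multiplying theta*exp(i kz'(d_{l'-1}-z'))/kz' and
    # theta*exp(i kz'(z'-d_{l'}))/kz' in A_{0 l'} and A_{L l'}, respectively.
    den = (alpha[0, 0] - alpha[0, 1]) - (alpha[1, 0] - alpha[1, 1])
    A0 = [((alpha[1, 1] - alpha[1, 0]) * beta[0, 0]
           + (alpha[0, 0] - alpha[0, 1]) * beta[1, 0]) / den,
          ((alpha[1, 1] - alpha[1, 0]) * beta[0, 1]
           + (alpha[0, 0] - alpha[0, 1]) * beta[1, 1]) / den]
    AL = [(beta[1, 0] - beta[0, 0]) / den,
          (beta[1, 1] - beta[0, 1]) / den]
    return A0, AL
\end{verbatim}
A full implementation would multiply such matrices according to \eqref{recursion} to obtain $\alpha_{ij}$ and $\beta_{ij}$ for each quadrature node $k_{\rho}$.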
It is usually more convenient to rewrite those as upgoing and downgoing components \begin{equation} \begin{cases} \displaystyle\widehat{u}_{0\ell^{\prime}}(k_{x},k_{y},z)=b_{0\ell^{\prime} }\vartheta\frac{e^{\ri k_{0z}z}}{k_{0z}},\\[10pt] \displaystyle\widehat{u}_{\ell\ell^{\prime}}(k_{x},k_{y},z)=a_{\ell \ell^{\prime}}\vartheta\frac{e^{-\ri k_{\ell z}(z-d_{\ell})}}{k_{\ell z} }+b_{\ell\ell^{\prime}}\vartheta\frac{e^{\ri k_{\ell z}(z-d_{\ell})}}{k_{\ell z}},\quad\ell\neq0,\ell^{\prime},L,\\[10pt] \displaystyle\widehat{u}_{\ell^{\prime}\ell^{\prime}}(k_{x},k_{y} ,z)=a_{\ell^{\prime}\ell^{\prime}}\vartheta\frac{e^{-\ri k_{\ell^{\prime} z}(z-d_{\ell^{\prime}})}}{k_{\ell^{\prime}z}}+b_{\ell^{\prime}\ell^{\prime}} \vartheta\frac{e^{\ri k_{\ell^{\prime}z}(z-d_{\ell^{\prime}})}}{k_{\ell^{\prime}z} }+\widehat{G}(k_{\ell^{\prime}z},z-z^{\prime}),\\[10pt] \displaystyle\widehat{u}_{L\ell^{\prime}}(k_{x},k_{y},z)=a_{L\ell^{\prime} }\vartheta\frac{e^{-\ri k_{Lz}z}}{k_{Lz}}, \end{cases} \label{updownspectraldoamin} \end{equation} where \begin{equation} \begin{split} a_{\ell\ell^{\prime}}= & \frac{k_{\ell z}}{2k_{\ell^{\prime}z}}(A_{\ell \ell^{\prime}}^{1}-B_{\ell\ell^{\prime}}^{1})e^{\ri k_{\ell^{\prime}z} (d_{\ell^{\prime}-1}-z^{\prime})}+\frac{k_{\ell z}}{2k_{\ell^{\prime}z} }(A_{\ell\ell^{\prime}}^{2}-B_{\ell\ell^{\prime}}^{2})e^{\ri k_{\ell^{\prime }z}(z^{\prime}-d_{\ell^{\prime}})},\\ b_{\ell\ell^{\prime}}= & \frac{k_{\ell z}}{2k_{\ell^{\prime}z}}(A_{\ell \ell^{\prime}}^{1}+B_{\ell\ell^{\prime}}^{1})e^{\ri k_{\ell^{\prime}z} (d_{\ell^{\prime}-1}-z^{\prime})}+\frac{k_{\ell z}}{2k_{\ell^{\prime}z} }(A_{\ell\ell^{\prime}}^{2}+B_{\ell\ell^{\prime}}^{2})e^{\ri k_{\ell^{\prime }z}(z^{\prime}-d_{\ell^{\prime}})}. \end{split} \end{equation} It is important to note that $z^{\prime}$ only appears in the exponentials. In general, these can be written in the form of \begin{equation} \begin{split} a_{\ell\ell^{\prime}}= & \sigma_{\ell\ell^{\prime}}^{\downarrow\downarrow }(k_{\rho})e^{\ri k_{\ell^{\prime}z}(d_{\ell^{\prime}-1}-z^{\prime})} +\sigma_{\ell\ell^{\prime}}^{\downarrow\uparrow}(k_{\rho})e^{\ri k_{\ell^{\prime}z}(z^{\prime}-d_{\ell^{\prime}})},\\ b_{\ell\ell^{\prime}}= & \sigma_{\ell\ell^{\prime}}^{\uparrow\downarrow }(k_{\rho})e^{\ri k_{\ell^{\prime}z}(d_{\ell^{\prime}-1}-z^{\prime})} +\sigma_{\ell\ell^{\prime}}^{\uparrow\uparrow}(k_{\rho})e^{\ri k_{\ell ^{\prime}z}(z^{\prime}-d_{\ell^{\prime}})}, \end{split} \end{equation} where \begin{equation} \begin{split} \sigma_{\ell\ell^{\prime}}^{\downarrow\downarrow}(k_{\rho}) & =\frac{k_{\ell z}}{2k_{\ell^{\prime}z}}(A_{\ell\ell^{\prime}}^{1}-B_{\ell\ell^{\prime}} ^{1}),\quad\sigma_{\ell\ell^{\prime}}^{\downarrow\uparrow}(k_{\rho} )=\frac{k_{\ell z}}{2k_{\ell^{\prime}z}}(A_{\ell\ell^{\prime}}^{2}-B_{\ell \ell^{\prime}}^{2}),\\ \sigma_{\ell\ell^{\prime}}^{\uparrow\downarrow}(k_{\rho}) & =\frac{k_{\ell z}}{2k_{\ell^{\prime}z}}(A_{\ell\ell^{\prime}}^{1}+B_{\ell\ell^{\prime}} ^{1}),\quad\sigma_{\ell\ell^{\prime}}^{\uparrow\uparrow}(k_{\rho} )=\frac{k_{\ell z}}{2k_{\ell^{\prime}z}}(A_{\ell\ell^{\prime}}^{2}+B_{\ell \ell^{\prime}}^{2}). 
\end{split} \end{equation} Therefore, taking inverse Fourier transform in \eqref{updownspectraldoamin} gives expression of Green's function in the physical domain using Sommerfeld integrals as follows: \begin{equation} \begin{cases} \displaystyle u_{\ell\ell^{\prime}}^{\uparrow} (\boldsymbol{r},\boldsymbol{r}^{\prime})=\frac{\ri}{4\pi}\int_{0}^{\infty }k_{\rho}J_{0}(k_{\rho}\rho)\frac{e^{\ri k_{\ell z}(z-d_{\ell})}}{k_{\ell z} }\tilde{\sigma}_{\ell\ell^{\prime}}^{\uparrow}(k_{\rho},z^{\prime})dk_{\rho },\quad\ell<L,\\[8pt] \displaystyle u_{\ell\ell^{\prime}}^{\downarrow} (\boldsymbol{r},\boldsymbol{r}^{\prime})=\frac{\ri}{4\pi}\int_{0}^{\infty }k_{\rho}J_{0}(k_{\rho}\rho)\frac{e^{-\ri k_{\ell z}(z-d_{\ell})}}{k_{\ell z} }\tilde{\sigma}_{\ell\ell^{\prime}}^{\downarrow}(k_{\rho},z^{\prime})dk_{\rho },\quad0<\ell<L,\\[8pt] \displaystyle u_{L\ell^{\prime}}^{\downarrow} (\boldsymbol{r},\boldsymbol{r}^{\prime})=\frac{\ri}{4\pi}\int_{0}^{\infty }k_{\rho}J_{0}(k_{\rho}\rho)\frac{e^{-\ri k_{\ell z}z}}{k_{\ell z}} \tilde{\sigma}_{L\ell^{\prime}}^{\downarrow}(k_{\rho},z^{\prime})dk_{\rho}, \end{cases} \label{greenfuncomponent} \end{equation} where \begin{equation} \begin{cases} \displaystyle\tilde{\sigma}_{\ell0}^{\uparrow}(k_{\rho},z^{\prime})=e^{\ri k_{0z}z^{\prime}}\sigma_{\ell0}^{\uparrow\uparrow}(k_{\rho }),\\[8pt] \displaystyle\tilde{\sigma}_{\ell\ell^{\prime}}^{\uparrow}(k_{\rho},z^{\prime })=e^{\ri k_{\ell^{\prime}z}(z^{\prime}-d_{\ell^{\prime}})}\sigma_{\ell \ell^{\prime}}^{\uparrow\uparrow}(k_{\rho})+e^{-\ri k_{\ell^{\prime} z}(z^{\prime}-d_{\ell^{\prime}-1})}\sigma_{\ell\ell^{\prime}}^{\uparrow \downarrow}(k_{\rho}),\quad0<\ell^{\prime}<L,\\[8pt] \displaystyle\tilde{\sigma}_{\ell\ell^{\prime}}^{\downarrow}(k_{\rho },z^{\prime})=e^{\ri k_{\ell^{\prime}z}(z^{\prime}-d_{\ell^{\prime}})} \sigma_{\ell\ell^{\prime}}^{\downarrow\uparrow}(k_{\rho})+e^{-\ri k_{\ell^{\prime}z}(z^{\prime}-d_{\ell^{\prime}-1})}\sigma_{\ell\ell^{\prime} }^{\downarrow\downarrow}(k_{\rho}),\quad0<\ell^{\prime}<L,\\[8pt] \displaystyle\tilde{\sigma}_{\ell L}^{\downarrow}(k_{\rho},z^{\prime})=e^{-\ri k_{\ell^{\prime}z}(z^{\prime}-d_{L-1})}\sigma_{\ell L}^{\downarrow\downarrow }(k_{\rho}). \end{cases} \label{totaldensity} \end{equation} Note that the Green's function in the interior layers are given by \[ u_{\ell\ell^{\prime}}(\boldsymbol{r},\boldsymbol{r}^{\prime})= \begin{cases} \displaystyle u_{\ell\ell^{\prime}}^{\uparrow} (\boldsymbol{r},\boldsymbol{r}^{\prime})+u_{\ell\ell^{\prime}}^{\downarrow }(\boldsymbol{r},\boldsymbol{r}^{\prime}), & \ell\neq\ell^{\prime},\\ \displaystyle u_{\ell\ell^{\prime}}^{\uparrow} (\boldsymbol{r},\boldsymbol{r}^{\prime})+u_{\ell\ell^{\prime}}^{\downarrow }(\boldsymbol{r},\boldsymbol{r}^{\prime})+\frac{\ri k_{\ell^{\prime}}}{4\pi }h_{0}^{(1)}(k_{\ell^{\prime}}|\boldsymbol{r}-\boldsymbol{r}^{\prime}|), & \ell=\ell^{\prime}. \end{cases} \] The derivation above is applicable to multi-layered media, in Appendix A, we give explicit formulas (see \eqref{densitytwolayer1}, \eqref{densitytwolayer2} and \eqref{densitythreelayer1},\eqref{densitythreelayer2}, \eqref{densitythreelayer3}) for the cases of two and three layers for numerical tests of the fast algorithms as these cases cover a wide range of applications. \section{First Taylor-expansion based FMM in multi-layered media} \subsection{Free space} First, we briefly review the TE-FMM for Helmholtz equations in the free space. Consider $N$ source particles with source strength $q_{j}$ placed at $\boldsymbol{r}_{j}=(x_{j},y_{j},z_{j})$. 
The field at $\boldsymbol{r}_{i} =(x_{i},y_{i},z_{i})$ due to all other sources is given by \begin{equation} u(\boldsymbol{r}_{i})=\sum\limits_{j=1}^{N}q_{j}h_{0}^{(1)} (k|\boldsymbol{r}_{i}-\boldsymbol{r}_{j}|),\quad i=1,2,\cdots,N, \end{equation} where $h_{0}^{(1)}(z)$ is the first kind spherical Hankel function of order zero. Hereafter, we omit the factor $\frac{\ri k}{4\pi}$ and $\frac {\ri k_{\ell}}{4\pi}$ in the free space and layered media Green's functions, respectively. A TE-FMM will use the following Taylor expansions : \begin{itemize} \item \textbf{TE in a source box centered at $\boldsymbol{r}_{c}$: }we have\textbf{ } \begin{equation} \sum\limits_{j\in J_{m}}q_{j}h_{0}^{(1)}(k|\boldsymbol{r}-\boldsymbol{r}_{j} |)\approx\sum\limits_{|\boldsymbol{k}|=0}^{p}\alpha_{\boldsymbol{k}} \frac{D_{\boldsymbol{r}^{\prime}}^{\boldsymbol{k}}h_{0}^{(1)}(k\Vert {\boldsymbol{r}}-{\boldsymbol{r}}_{c}\Vert)}{\boldsymbol{k}!} ,\label{taylorexpfarxcfree} \end{equation} where \begin{equation} {\alpha}_{\boldsymbol{k}}=\sum\limits_{j\in J_{m}}q_{j}(\boldsymbol{r}_{j} -\boldsymbol{r}_{c})^{\boldsymbol{k}},\quad D_{\boldsymbol{r}^{\prime} }^{\boldsymbol{k}}:=\frac{\partial^{|\boldsymbol{k}|}}{\partial(x^{\prime })^{k_{1}}\partial(y^{\prime})^{k_{2}}\partial(z^{\prime})^{k_{3}}}, \end{equation} $J_{m}$ is the set of indices of particles in a source box centered at $\boldsymbol{r}_{c}$ and the $\boldsymbol{r}$ is far from this box. \item \textbf{TE in a target box centered at $\boldsymbol{r}_{c}^{l}$: }we have\textbf{ } \begin{equation} \sum\limits_{j\in J_{m}}q_{j}h_{0}^{(1)}(k|\boldsymbol{r}-\boldsymbol{r}_{j} |)\approx\sum\limits_{|\boldsymbol{k}|=0}^{p}\beta_{\boldsymbol{k}} (\boldsymbol{r}-\boldsymbol{r}_{c}^{l})^{\boldsymbol{k}} ,\label{taylorexplocxclfree} \end{equation} where \begin{equation} {\beta}_{\boldsymbol{k}}=\sum\limits_{j\in J_{m}}q_{j}\frac{D_{\boldsymbol{r}} ^{\boldsymbol{k}}h_{0}^{(1)}(k(\boldsymbol{r}_{c}^{l}-\boldsymbol{r}_{j} ))}{\boldsymbol{k}!},\quad D_{\boldsymbol{r}}^{\boldsymbol{k}}:=\frac {\partial^{|\boldsymbol{k}|}}{\partial x^{k_{1}}\partial y^{k_{2}}\partial z^{k_{3}}},\label{localexpcoeff} \end{equation} $\{(q_{j},\boldsymbol{r}_{j})\}_{j\in J_{m}}$ are particles in a source box far from the target box. \end{itemize} Next, we present the translation operators. \begin{itemize} \item \textbf{Translation from a TE in a source box centered at $\boldsymbol{r}_{c}$ to a TE in a target box centered at $\boldsymbol{r}_{c} ^{l}$:} \begin{equation} {\beta}_{\boldsymbol{k}}\approx\sum\limits_{|\boldsymbol{k}^{\prime}|=0} ^{p}{\alpha}_{\boldsymbol{k}^{\prime}}L_{\boldsymbol{k}} ^{\boldsymbol{k}^{\prime}}, \end{equation} where \begin{equation} L_{\boldsymbol{k}}^{\boldsymbol{k}^{\prime}}=\frac{D_{\boldsymbol{r}} ^{\boldsymbol{k}}D_{\boldsymbol{r}^{\prime}}^{\boldsymbol{k}^{\prime}} h_{0}^{(1)}(k(\boldsymbol{r}_{c}^{l}-\boldsymbol{r}_{c}))} {\boldsymbol{k}!\boldsymbol{k}^{\prime}!}=(-1)^{|\boldsymbol{k}|} \frac{D_{\boldsymbol{r}^{\prime}}^{\boldsymbol{k}+\boldsymbol{k}^{\prime} }h_{0}^{(1)}(k(\boldsymbol{r}_{c}^{l}-\boldsymbol{r}_{c}))} {\boldsymbol{k}!\boldsymbol{k}^{\prime}!}. 
\end{equation} \item \textbf{Translation from a TE in a source box centered at $\boldsymbol{r}_{c}$ to a TE in another source box centered at $\boldsymbol{r}_{c}^{\prime}$: } Let \[ {\gamma}_{\boldsymbol{k}}=\sum\limits_{j\in J_{m}}q_{j}(\boldsymbol{r}_{j} -\boldsymbol{r}_{c}^{\prime})^{\boldsymbol{k}}, \] be the coefficients of the TE in the source box centered at $\boldsymbol{r}_{c} ^{\prime}$, then the binomial formula \begin{equation} (\boldsymbol{r}_{j}-\boldsymbol{r}_{c}^{\prime})^{\boldsymbol{k}} =\sum\limits_{k_{1}^{\prime}=0}^{k_{1}}\sum\limits_{k_{2}^{\prime}=0}^{k_{2} }\sum\limits_{k_{3}^{\prime}=0}^{k_{3}}B_{\boldsymbol{k}} ^{\boldsymbol{k}^{\prime}}(\boldsymbol{r}_{j}-\boldsymbol{r}_{c} )^{\boldsymbol{k}^{\prime}},\label{LETOLEmat} \end{equation} gives \begin{equation} \gamma_{\boldsymbol{k}}=\sum\limits_{k_{1}^{\prime}=0}^{k_{1}}\sum \limits_{k_{2}^{\prime}=0}^{k_{2}}\sum\limits_{k_{3}^{\prime}=0}^{k_{3} }B_{\boldsymbol{k}}^{\boldsymbol{k}^{\prime}}{\alpha}_{\boldsymbol{k}^{\prime }},\quad\boldsymbol{k}^{\prime}=(k_{1}^{\prime},k_{2}^{\prime},k_{3}^{\prime }),\label{TETartoTETar} \end{equation} where \begin{equation} B_{\boldsymbol{k}}^{\boldsymbol{k}^{\prime}}=\frac {\boldsymbol{k}!(\boldsymbol{r}_{c}-\boldsymbol{r}_{c}^{\prime} )^{\boldsymbol{k}-\boldsymbol{k}^{\prime}}}{k_{1}^{\prime}!(k_{1} -k_{1}^{\prime})!k_{2}^{\prime}!(k_{2}-k_2^{\prime})!k_{3}^{\prime}!(k_{3} -k_{3}^{\prime})!}. \end{equation} \item \textbf{Translation from a TE in a target box centered at $\boldsymbol{r}_{c}^{l}$ to a TE in another target box centered at $\tilde{\boldsymbol{r}}_{c}^{l}$:} Let \begin{equation} {\lambda}_{\boldsymbol{k}}=\sum\limits_{j\in J_{m}}\frac{q_{j} D_{\boldsymbol{r}}^{\boldsymbol{k}}h_{0}^{(1)}(k(\tilde{\boldsymbol{r}} _{c}^{l}-\boldsymbol{r}_{j}))}{\boldsymbol{k}!}, \end{equation} be the coefficients of a TE in the target box centered at $\tilde {\boldsymbol{r}}_{c}^{l}$. Then, the Taylor expansion at $\boldsymbol{r}_{c} ^{l}$ gives \begin{equation} {\lambda}_{\boldsymbol{k}}\approx\frac{1}{\boldsymbol{k}!}\sum \limits_{|\boldsymbol{k}^{\prime}|=0}^{p}\beta_{\boldsymbol{k}^{\prime} }D_{\tilde{\boldsymbol{r}}_{c}^{l}}^{\boldsymbol{k}}(\tilde{\boldsymbol{r}} _{c}^{l}-\boldsymbol{r}_{c}^{l})^{\boldsymbol{k}^{\prime}}. \end{equation} Note that \begin{equation} D_{\tilde{\boldsymbol{r}}_{c}^{l}}^{\boldsymbol{k}}(\tilde{\boldsymbol{r}} _{c}^{l}-\boldsymbol{r}_{c}^{l})^{\boldsymbol{k}^{\prime}}= \begin{cases} \displaystyle0,\quad k_{1}>k_{1}^{\prime}\;\;\mathrm{or}\;\;k_{2} >k_{2}^{\prime}\;\;\mathrm{or}\;\;k_{3}>k_{3}^{\prime},\\[6pt] \displaystyle\frac{\boldsymbol{k}^{\prime}!}{(\boldsymbol{k}^{\prime }-\boldsymbol{k})!}(\tilde{\boldsymbol{r}}_{c}^{l}-\boldsymbol{r}_{c} ^{l})^{\boldsymbol{k}^{\prime}-\boldsymbol{k}},\quad\mathrm{otherwise}, \end{cases} \label{LE2LErans} \end{equation} then \begin{equation} {\lambda}_{\boldsymbol{k}}=\sum\limits_{n^{\prime}=|\boldsymbol{k}|}^{p} \sum\limits_{\boldsymbol{k}^{\prime}\geq\boldsymbol{k}} ^{|\boldsymbol{k}^{\prime}|\leq n^{\prime}}\beta_{\boldsymbol{k}^{\prime} }\frac{\boldsymbol{k}^{\prime}!}{\boldsymbol{k}!(\boldsymbol{k}^{\prime }-\boldsymbol{k})!}(\tilde{\boldsymbol{r}}_{c}^{l}-\boldsymbol{r}_{c} ^{l})^{\boldsymbol{k}^{\prime}-\boldsymbol{k}}.\label{freeletole} \end{equation} \end{itemize} \subsection{Multi-layered media} Let $\mathscr{P}_{\ell}=\{(Q_{\ell j},\boldsymbol{r}_{\ell j}),j=1,2,\cdots ,N_{\ell}\}$ be a group of source particles distributed in the $\ell$-th layer of a multi-layered medium with $L+1$ layers (see Fig. 
\ref{layerstructure}). The interactions between all $N:=N_{0}+N_{1}+\cdots+N_{L}$ particles are given by the sum \begin{equation} \Phi_{\ell}(\boldsymbol{r}_{\ell i})=\Phi_{\ell}^{free}(\boldsymbol{r}_{\ell i})+\sum\limits_{\ell^{\prime}=0}^{L}[\Phi_{\ell\ell^{\prime}}^{\uparrow }(\boldsymbol{r}_{\ell i})+\Phi_{\ell\ell^{\prime}}^{\downarrow} (\boldsymbol{r}_{\ell i})],\label{totalinteraction} \end{equation} for $\ell=0,1,\cdots,L;\;\;i=1,2,\cdots,N_{\ell}$, where \begin{equation} \begin{split} & \Phi_{\ell}^{free}(\boldsymbol{r}_{\ell i}):=\sum\limits_{j=1,j\neq i}^{N_{\ell}}Q_{\ell j}h_{0}^{(1)}(k_{\ell}|\boldsymbol{r}_{\ell i}-\boldsymbol{r}_{\ell j}|),\\ & \Phi_{\ell\ell^{\prime}}^{\uparrow}(\boldsymbol{r}_{\ell i}):=\sum \limits_{j=1}^{N_{\ell^{\prime}}}Q_{\ell^{\prime}j}u_{\ell\ell^{\prime} }^{\uparrow}(\boldsymbol{r}_{\ell i},\boldsymbol{r}_{\ell^{\prime}j} ),\quad\Phi_{\ell\ell^{\prime}}^{\downarrow}(\boldsymbol{r}_{\ell i} ):=\sum\limits_{j=1}^{N_{\ell^{\prime}}}Q_{\ell^{\prime}j}u_{\ell\ell^{\prime }}^{\downarrow}(\boldsymbol{r}_{\ell i},\boldsymbol{r}_{\ell^{\prime}j}). \end{split} \end{equation} Here $u_{\ell\ell^{\prime}}^{\uparrow},u_{\ell\ell^{\prime}}^{\downarrow}$ are the general scattering components of the domain Green's function in the $\ell $-th layer due to a source $\boldsymbol{r}_{\ell^{\prime}j}$ in the $\ell^{\prime}$-th layer. We also omit the factor $\frac{\ri k_{\ell}}{4\pi}$ in $u_{\ell\ell^{\prime}}^{\uparrow}$ and $u_{\ell\ell^{\prime}}^{\downarrow}$ for consistency with the free space case. In the top-most and bottom-most layers, we have \[ u_{0\ell^{\prime}}^{\downarrow}(\boldsymbol{r},\boldsymbol{r}^{\prime })=0,\quad u_{L\ell^{\prime}}^{\uparrow}(\boldsymbol{r},\boldsymbol{r}^{\prime })=0,\quad0\leq\ell^{\prime}\leq L. \] General formulas for $u_{\ell\ell^{\prime}}^{\uparrow},u_{\ell\ell^{\prime} }^{\downarrow}$ are given in \eqref{greenfuncomponent}-\eqref{totaldensity} while densities for the two- and three-layer cases are presented in Appendix A (see expressions in \eqref{densitytwolayer1}, \eqref{densitytwolayer2} and \eqref{densitythreelayer1},\eqref{densitythreelayer2}, \eqref{densitythreelayer3}). Since the domain Green's function in multi-layer media has different representations \eqref{greenfuncomponent} for source and target particles in different layers, it is necessary to perform the calculation individually for interactions between any two groups of particles among the $L+1$ groups $\{\mathscr{P}_{\ell}\}_{\ell=0}^{L}$. Without loss of generality, let us focus on the computation of the upgoing component of the interaction between the $\ell$-th and $\ell^{\prime}$-th groups, i.e., \begin{equation} \Phi_{\ell\ell^{\prime}}^{\uparrow}(\boldsymbol{r}_{\ell i})=\sum \limits_{j=1}^{N_{\ell^{\prime}}}Q_{\ell^{\prime}j}u_{\ell\ell^{\prime} }^{\uparrow}(\boldsymbol{r}_{\ell i},\boldsymbol{r}_{\ell^{\prime}j}),\quad i=1,2,\cdots,N_{\ell}.\label{generalsum} \end{equation} Let \begin{equation} \Phi_{\ell\ell^{\prime}}^{b\uparrow}(\boldsymbol{r}_{\ell i}):=\sum \limits_{j\in J_{m}}Q_{\ell^{\prime}j}u_{\ell\ell^{\prime}}^{\uparrow }(\boldsymbol{r}_{\ell i},\boldsymbol{r}_{\ell^{\prime}j} ),\label{generalsuminbox} \end{equation} be the field at $\boldsymbol{r}_{\ell i}$ generated by the particles in a source box centered at $\boldsymbol{r}_{c}=(x_{c},y_{c},z_{c})$ in the tree structure. Here, $J_{m}$ is the set of indices of particles in the source box. 
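To make the role of the TE coefficients concrete, the following minimal Python sketch (our own illustration; the function name and the use of NumPy are assumptions, not part of the algorithm) accumulates the source-box moments ${\alpha}_{\boldsymbol{k}}=\sum_{j\in J_{m}}Q_{\ell^{\prime}j}(\boldsymbol{r}_{\ell^{\prime}j}-\boldsymbol{r}_{c})^{\boldsymbol{k}}$ used in the layered-media TE approximation given next; they coincide with the free-space coefficients in \eqref{taylorexpfarxcfree}.
\begin{verbatim}
import numpy as np
from itertools import product

def source_box_moments(q, pts, rc, p):
    # alpha_k = sum_j q_j (r_j - rc)^k for all multi-indices |k| <= p.
    # q: (N,) source strengths; pts: (N,3) coordinates; rc: (3,) box center.
    d = np.asarray(pts) - np.asarray(rc)
    alpha = {}
    for k in product(range(p + 1), repeat=3):
        if sum(k) <= p:
            alpha[k] = np.sum(q * d[:, 0]**k[0] * d[:, 1]**k[1] * d[:, 2]**k[2])
    return alpha
\end{verbatim}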
The Taylor expansion based FMM for \eqref{generalsum} will use the TE approximations \begin{equation} \Phi_{\ell\ell^{\prime}}^{b\uparrow}(\boldsymbol{r}_{\ell i})\approx \sum_{|\boldsymbol{k}|=0}^{p}\alpha_{\boldsymbol{k}}\frac {D_{\boldsymbol{r}^{\prime}}^{\boldsymbol{k}}u_{\ell\ell^{\prime}}^{\uparrow }(\boldsymbol{r}_{\ell i},\boldsymbol{r}_{c})}{\boldsymbol{k}!},\quad{\alpha }_{\boldsymbol{k}}=\sum\limits_{j\in J_{m}}Q_{\ell^{\prime}j} (\boldsymbol{r}_{\ell^{\prime}j}-\boldsymbol{r}_{c})^{\boldsymbol{k}} ,\label{taylorexpfar3layers} \end{equation} in the source box and \begin{equation} \Phi_{\ell\ell^{\prime}}^{b\uparrow}(\boldsymbol{r}_{\ell i})\approx \sum\limits_{|\boldsymbol{k}|=0}^{p}\beta_{\boldsymbol{k}} (\boldsymbol{r}_{\ell i}-\boldsymbol{r}_{c}^{l})^{\boldsymbol{k}},\quad{\beta }_{\boldsymbol{k}}=\sum\limits_{j\in J_{m}}\frac{Q_{\ell^{\prime} j}D_{\boldsymbol{r}}^{\boldsymbol{k}}u_{\ell\ell^{\prime}}^{\uparrow }(\boldsymbol{r}_{c}^{l},\boldsymbol{r}_{\ell^{\prime}j})}{\boldsymbol{k}!} ,\label{taylorexploc2ld} \end{equation} in the target box centered at $\boldsymbol{r}_{c}^{l}=(x_{c}^{l},y_{c} ^{l},z_{c}^{l})$, respectively. Note that $u_{\ell\ell^{\prime}}^{\uparrow }(\boldsymbol{r},\boldsymbol{r}^{\prime})$ has a Sommerfeld integral representation with an integrand involving the exponential function $e^{\ri k_{\ell z}(z-d_{\ell})}\tilde{\sigma}_{\ell\ell^{\prime}}^{\uparrow}(k_{\rho },z^{\prime})$. It is worth pointing out that the integrand decays exponentially when $d_{\ell}<z<d_{\ell-1}$ and $d_{\ell^{\prime}}<z^{\prime }<d_{\ell^{\prime}-1}$, which ensures the convergence of the Sommerfeld integral. According to the Taylor expansions \eqref{taylorexpfar3layers} and \eqref{taylorexploc2ld}, we conclude that the translation operators for center shifting from source boxes to their parents and from target boxes to their children are exactly the same as in the free space case, given by \eqref{TETartoTETar} and \eqref{freeletole}. The translation from a TE in a source box to a TE in a target box is given by \begin{equation} {\beta}_{\boldsymbol{k}}\approx\sum\limits_{|\boldsymbol{k}^{\prime}|=0} ^{p}{\alpha}_{\boldsymbol{k}^{\prime}}\frac{D_{\boldsymbol{r}} ^{\boldsymbol{k}}D_{\boldsymbol{r}^{\prime}}^{\boldsymbol{k}^{\prime}} u_{\ell\ell^{\prime}}^{\uparrow}(\boldsymbol{r}_{c}^{l},\boldsymbol{r}_{c} )}{\boldsymbol{k}!\boldsymbol{k}^{\prime}!}.\label{tesourceboxtotargetbox} \end{equation} In the next subsection, an efficient algorithm for the computation of $D_{\boldsymbol{r}}^{\boldsymbol{k}}D_{\boldsymbol{r}^{\prime}} ^{\boldsymbol{k}^{\prime}}u_{\ell\ell^{\prime}}^{\uparrow}(\boldsymbol{r}_{c} ^{l},\boldsymbol{r}_{c})$ will be presented. \subsection{Discrete complex-image approximation of derivatives of Green's functions for layered media} The TE-FMM demands an efficient algorithm for the computation of the derivatives of the Green's function. For the free space case, recurrence formulas are available (cf. \cite{li2009cartesian, tausch2003fast}). 
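Before discussing how these derivatives are obtained in layered media, we record, as a small illustrative sketch (ours, assuming the scaled derivatives $D_{\boldsymbol{r}}^{\boldsymbol{k}}D_{\boldsymbol{r}^{\prime}}^{\boldsymbol{k}^{\prime}}u_{\ell\ell^{\prime}}^{\uparrow}/(\boldsymbol{k}!\boldsymbol{k}^{\prime}!)$ have already been tabulated for the pair of box centers), how the source-box-to-target-box translation \eqref{tesourceboxtotargetbox} is applied to the moments of the previous sketch.
\begin{verbatim}
def translate_source_to_target(alpha, Lmat):
    # beta_k = sum_{k'} alpha_{k'} * L[(k, k')], cf. eq. (tesourceboxtotargetbox).
    # alpha: dict mapping multi-indices k' -> source-box moments;
    # Lmat:  dict mapping (k, k') -> D_r^k D_r'^{k'} u / (k! k'!),
    #        assumed precomputed for the given pair of box centers.
    beta = {}
    for k in alpha:              # same truncated index set |k| <= p
        beta[k] = sum(alpha[kp] * Lmat[(k, kp)] for kp in alpha)
    return beta
\end{verbatim}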
In layered media, the following derivatives are needed \begin{align} \frac{D_{\boldsymbol{r}^{\prime}}^{\boldsymbol{k}}u_{\ell\ell^{\prime} }^{\uparrow}(\boldsymbol{r},\boldsymbol{r}^{\prime})}{\boldsymbol{k}!} & =\frac{1}{\boldsymbol{k}!k_{\ell}}D_{\boldsymbol{r}^{\prime}}^{\boldsymbol{k}} \int_{0}^{\infty}k_{\rho}J_{0}(k_{\rho}\rho)\frac{e^{\ri k_{\ell z}(z-d_{\ell })}}{k_{\ell z}}\tilde{\sigma}_{\ell\ell^{\prime}}^{\uparrow}(k_{\rho },z^{\prime})dk_{\rho},\label{SIs1}\\ \frac{D_{\boldsymbol{r}}^{\boldsymbol{k}}D_{\boldsymbol{r}^{\prime} }^{\boldsymbol{k}^{\prime}}u_{\ell\ell^{\prime}}^{\uparrow} (\boldsymbol{r},\boldsymbol{r}^{\prime})} {\boldsymbol{k}!\boldsymbol{k}^{\prime}!} & =\frac{1} {\boldsymbol{k}!\boldsymbol{k}^{\prime}!k_{\ell}}D_{\boldsymbol{r}} ^{\boldsymbol{k}}D_{\boldsymbol{r}^{\prime}}^{\boldsymbol{k}^{\prime}}\int _{0}^{\infty}k_{\rho}J_{0}(k_{\rho}\rho)\frac{e^{\ri k_{\ell z}(z-d_{\ell})} }{k_{\ell z}}\tilde{\sigma}_{\ell\ell^{\prime}}^{\uparrow}(k_{\rho},z^{\prime })dk_{\rho},\label{SIs2} \end{align} where $\boldsymbol{k}=(k_{1},k_{2},k_{3}),\boldsymbol{k}^{\prime} =(k_{1}^{\prime},k_{2}^{\prime},k_{3}^{\prime})$ are multi-indices. They are derivatives of a function represented in terms of a Sommerfeld integral (SI). It is well known that the SI has an oscillatory integrand with pole singularities due to the existence of surface waves. Over the past decades, much effort has been devoted to the computation of this integral, using ideas from high-frequency asymptotics, rational approximation, contour deformation (cf. \cite{cai2013computational,cai2000fast,cho2012parallel,okhmatovski2004evaluation,paulus2000accurate} ), complex images (cf. \cite{fang1988discrete,paulus2000accurate,ochmann2004complex,alparslan2010closed} ), and methods based on special functions (cf. \cite{koh2006exact}) or physical images (cf. \cite{li1996near,ling2000discrete,o2014efficient,lai2016new}). Since \eqref{SIs1} is just a special case of \eqref{SIs2}, our discussion will only focus on the latter. This integral is convergent when the target and source particles are not exactly on the interfaces of a layered medium. Contour deformation with high order quadrature rules could be used for direct numerical computation. However, this becomes prohibitively expensive due to the large number of derivatives needed in the FMM. In fact, $O(p^{6})$ derivatives will be needed for each source box to target box translation. Moreover, the integrand decays more and more slowly as the derivative order increases, so the contour needs to be very long to obtain the required accuracy for high order derivatives. Therefore, putting all derivatives inside the integral and then applying quadratures with contour deformation is too expensive in terms of CPU time. Moreover, although $u_{\ell\ell^{\prime}}^{\uparrow}$ is a function of $(\rho,z,z^{\prime})$ only, the nonsymmetric derivative $D_{\boldsymbol{r}} ^{\boldsymbol{k}}D_{\boldsymbol{r}^{\prime}}^{\boldsymbol{k}^{\prime}} u_{\ell\ell^{\prime}}^{\uparrow}(\boldsymbol{r},\boldsymbol{r}^{\prime})$ depends on all coordinates in ${\boldsymbol{r}}$ and $\boldsymbol{r}^{\prime}$; hence it is not feasible to build a precomputed table on a fine grid and then use interpolation to approximate the derivative $D_{\boldsymbol{r}}^{\boldsymbol{k}}D_{\boldsymbol{r}^{\prime} }^{\boldsymbol{k}^{\prime}}u_{\ell\ell^{\prime}}^{\uparrow} (\boldsymbol{r},\boldsymbol{r}^{\prime})$. 
Instead, for this TE-FMM, we will use a complex image approximation of the integrand to simplify the calculation of the derivatives. Exchanging the order of the derivative and the integral leads to \begin{equation} \frac{D_{\boldsymbol{r}}^{\boldsymbol{k}}D_{\boldsymbol{r}^{\prime} }^{\boldsymbol{k}^{\prime}}u_{\ell\ell^{\prime}}^{\uparrow} (\boldsymbol{r},\boldsymbol{r}^{\prime})} {\boldsymbol{k}!\boldsymbol{k}^{\prime}!}=\frac{D_{\boldsymbol{r}} ^{\boldsymbol{k}}D_{\boldsymbol{r}^{\prime}}^{\boldsymbol{k}_{0}^{\prime}} }{\boldsymbol{k}!\boldsymbol{k}_{0}^{\prime}!}\Big(\frac{1}{k_{\ell}}\int _{0}^{\infty}k_{\rho}J_{0}(k_{\rho}\rho)\frac{e^{\ri k_{\ell z}(z-d_{\ell})} }{k_{\ell z}}\frac{\partial_{z^{\prime}}^{k_{3}^{\prime}}\tilde{\sigma} _{\ell\ell^{\prime}}^{\uparrow}(k_{\rho},z^{\prime})}{k_{3}^{\prime}!} dk_{\rho}\Big),\label{integralformderi} \end{equation} where $\boldsymbol{k}_{0}^{\prime}=(k_{1}^{\prime},k_{2}^{\prime},0)$ are multi-indices reduced from $\boldsymbol{k}^{\prime}$, \begin{equation} \frac{\partial_{z^{\prime}}^{k_{3}^{\prime}}\tilde{\sigma}_{\ell\ell^{\prime} }^{\uparrow}(k_{\rho},z^{\prime})}{k_{3}^{\prime}!}=\frac{(\ri k_{\ell ^{\prime}z})^{k_{3}^{\prime}}\big[e^{\ri k_{\ell^{\prime}z}(z^{\prime} -d_{\ell^{\prime}})}\sigma_{\ell\ell^{\prime}}^{\uparrow\uparrow} +(-1)^{k_{3}^{\prime}}e^{\ri k_{\ell^{\prime}z}(d_{\ell^{\prime}-1}-z^{\prime })}\sigma_{\ell\ell^{\prime}}^{\uparrow\downarrow}\big]}{k_{3}^{\prime} !},\label{layermediumdensity} \end{equation} corresponds to the derivatives with respect to $z^{\prime}$. Note that the variables $x,y,z,x^{\prime},y^{\prime}$ and $z^{\prime}$ are in separate functions in the Sommerfeld integral \eqref{greenfuncomponent}. Let us first consider the derivatives with respect to $z^{\prime}$. Recalling \eqref{integralformderi}, we have \begin{equation} \frac{1}{k_{3}^{\prime}!}\partial_{z^{\prime}}^{k_{3}^{\prime}}u_{\ell \ell^{\prime}}(\boldsymbol{r},\boldsymbol{r}^{\prime})=\frac{1}{k_{\ell}} \int_{0}^{\infty}k_{\rho}J_{0}(k_{\rho}\rho)\frac{e^{\ri k_{\ell z}(z-d_{\ell })}}{k_{\ell z}}\frac{\partial_{z^{\prime}}^{k_{3}^{\prime}}\tilde{\sigma }_{\ell\ell^{\prime}}^{\uparrow}(k_{\rho},z^{\prime})}{k_{3}^{\prime} !}dk_{\rho}.\label{m2lderivative} \end{equation} The derivatives of $u_{\ell\ell^{\prime}}^{\uparrow} (\boldsymbol{r},\boldsymbol{r}^{\prime})$ with respect to $z^{\prime}$ are represented by Sommerfeld integrals with densities $\frac{\partial_{z^{\prime }}^{k_{3}^{\prime}}\tilde{\sigma}_{\ell\ell^{\prime}}^{\uparrow}(k_{\rho },z^{\prime})}{k_{3}^{\prime}!}$. Now, we will use the discrete complex image method (DCIM) (cf. \cite{aksun1996robust, alparslan2010closed}) to generate an approximation using a sum of free space Green's function with complex coordinates. To use the decay from $e^{\ri k_{\ell}(z-d_{\ell})}$, we define \begin{equation} \Theta_{\ell\ell^{\prime}}^{k_{3}^{\prime}}(k_{\rho},z^{\prime})=\frac{e^{\ri k_{\ell z}(z_{min}-d_{\ell})}\partial_{z^{\prime}}^{k_{3}^{\prime}} \tilde{\sigma}_{\ell\ell^{\prime}}^{\uparrow}(k_{\rho},z^{\prime})} {k_{3}^{\prime}!}. \end{equation} Here, we choose $z_{min}$ to be the minimum $z$ coordinates of all target particles in $\ell$-th layer, so the remaining term $e^{\ri k_{\ell z}(z-z_{min})}$ still decays as $k_{\rho}\rightarrow\infty$. 
A two level DCIM method is used to approximate $\Theta_{\ell\ell^{\prime}}^{k_{3}^{\prime} }(k_{\rho},z^{\prime})$ as follows: \begin{itemize} \item[Step 1:] Sample density function $\Theta_{\ell\ell^{\prime}} ^{k_{3}^{\prime}}(k_{\rho}, z^{\prime})$ over a path defined by the following mappings (see Fig. \ref{twolevelcontour}) \begin{equation} \begin{split} k_{\ell z}=\ri k_{\ell}(T_{0}+t),\quad0\leq t\leq T_{1},\quad C_{ap1}: 1^{\mathrm{st}}\;\; \mathrm{level},\\ k_{\ell z}=k_{\ell} \Big(1-\frac{t}{T_{0}}+\ri t\Big),\quad0\leq t\leq T_{0},\quad C_{ap2}: 2^{\mathrm{nd}}\;\; \mathrm{level}. \end{split} \end{equation} \item[Step 2:] Approximate the sampled $\Theta_{\ell\ell^{\prime}} ^{k_{3}^{\prime}}(k_{\rho},z^{\prime})$ by summation of complex exponentials as \begin{equation} \Theta_{\ell\ell^{\prime}}^{k_{3}^{\prime}}(k_{\rho},z^{\prime})\approx \sum\limits_{j=1}^{M}A_{j}^{k_{3}^{\prime}}e^{-\ri k_{\ell z}Z_{j} ^{k_{3}^{\prime}}},\label{compleximageapp} \end{equation} using a generalized pencil-of-function method (GPOF) \cite{hua1989generalized}. \item[Step 3:] Then, we have \begin{equation} \begin{split} \frac{\partial_{z^{\prime}}^{k_{3}^{\prime}}u_{\ell\ell^{\prime} }(\boldsymbol{r},\boldsymbol{r}^{\prime})}{k_{3}^{\prime}!}= & \frac {1}{k_{\ell}}\int_{0}^{\infty}k_{\rho}J_{0}(k_{\rho}\rho)\frac{e^{\ri k_{\ell z}(z-z_{min})}}{k_{\ell z}}\Theta_{\ell\ell^{\prime}}^{k_{3}^{\prime}} (k_{\rho},z^{\prime})dk_{\rho}\\ \approx & \sum\limits_{j=1}^{M}A_{j}^{k_{3}^{\prime}}\Big(\frac{1}{k_{\ell} }\int_{0}^{\infty}k_{\rho}J_{0}(k_{\rho}\rho)\frac{e^{\ri k_{\ell z} (z-z_{min}-Z_{j}^{k_{3}^{\prime}})}}{k_{\ell z}}dk_{\rho}\Big), \end{split} \end{equation} and by applying the Sommerfeld identity \begin{equation} h_{0}^{(1)}(k|\boldsymbol{r}|)=\frac{1}{k_{\ell}}\int_{0}^{\infty}k_{\rho }J_{0}(k_{\rho}\rho)\frac{e^{\ri k_{\ell z}|z|}}{k_{\ell z}}dk_{\rho}, \end{equation} to the SI with complex $z$-coordinates, we arrive at the following approximations to the derivatives, \begin{equation} \frac{\partial_{z^{\prime}}^{k_{3}^{\prime}}u_{\ell\ell^{\prime} }(\boldsymbol{r},\boldsymbol{r}^{\prime})}{k_{3}^{\prime}!}\approx \sum\limits_{j=1}^{M}A_{j}^{k_{3}^{\prime}}h_{0}^{(1)}(k_{\ell}R_{j} ^{k_{3}^{\prime}}),\label{closedformgreen} \end{equation} where $R_{j}^{k_{3}^{\prime}}=\sqrt{(x-x^{\prime})^{2}+(y-y^{\prime} )^{2}+(z-z_{min}-Z_{j}^{k_{3}^{\prime}})^{2}}$ is the complex distance. \end{itemize} \begin{figure} \caption{Plots of contour used in two level DCIM method ($k_{\ell}=1.5$, $T_{0}=5$).} \label{twolevelcontour} \end{figure}By taking derivative $D_{\boldsymbol{r}}^{\boldsymbol{k}} D_{\boldsymbol{r}^{\prime}}^{\boldsymbol{k}_{0}^{\prime}}$ directly on \eqref{closedformgreen}, we obtain approximation \begin{equation} \frac{D_{\boldsymbol{r}}^{\boldsymbol{k}}D_{\boldsymbol{r}^{\prime} }^{\boldsymbol{k}^{\prime}}u_{\ell\ell^{\prime}} (\boldsymbol{r},\boldsymbol{r}^{\prime})} {\boldsymbol{k}!\boldsymbol{k}^{\prime}!}\approx\sum\limits_{j=1}^{M} A_{j}^{k_{3}^{\prime}}\frac{D_{\boldsymbol{r}}^{\boldsymbol{k}} D_{\boldsymbol{r}^{\prime}}^{\boldsymbol{k}_{0}^{\prime}}h_{0}^{(1)}(k_{\ell }R_{j}^{k_{3}^{\prime}})}{\boldsymbol{k}!\boldsymbol{k}_{0}^{\prime} !}\label{derivativeapp} \end{equation} Note that the approximation \eqref{compleximageapp} is independent of $x,y,z,x^{\prime},y^{\prime}$. Approximation \eqref{derivativeapp} is expected to maintain the accuracy of the approximation \eqref{compleximageapp}. Numerical results verify this fact at the end of this section. 
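As an illustration of the last step (our own sketch, assuming the GPOF output $\{A_{j}^{k_{3}^{\prime}},Z_{j}^{k_{3}^{\prime}}\}$ is already available; the helper names and the use of NumPy are not part of the method), the closed-form approximation \eqref{closedformgreen} can be evaluated as follows, using the principal branch of the complex square root for the complex distance.
\begin{verbatim}
import numpy as np

def h0_first_kind(z):
    # Spherical Hankel function of the first kind, order zero:
    # h0^{(1)}(z) = exp(i z) / (i z), valid for complex z != 0.
    return np.exp(1j * z) / (1j * z)

def dcim_derivative(r, rp, k_ell, z_min, A, Z):
    # Evaluate eq. (closedformgreen): sum_j A_j * h0^{(1)}(k_ell * R_j), with
    # R_j = sqrt((x-x')^2 + (y-y')^2 + (z - z_min - Z_j)^2) a complex distance.
    dx, dy = r[0] - rp[0], r[1] - rp[1]
    R = np.sqrt(dx**2 + dy**2 + (r[2] - z_min - np.asarray(Z))**2 + 0j)
    return np.sum(np.asarray(A) * h0_first_kind(k_ell * R))
\end{verbatim}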
More importantly, the derivatives of the Hankel function with complex coordinates can be calculated by using a recurrence formula. Define \begin{equation} a^{\boldsymbol{k}}(\boldsymbol{r},\boldsymbol{r}^{\prime},k):=\frac {1}{\boldsymbol{k}!}D_{\boldsymbol{r}}^{\boldsymbol{k}}h_{0}^{(1)} (k|\boldsymbol{r}-\boldsymbol{r}^{\prime}|),\quad b^{\boldsymbol{k}} (\boldsymbol{r},\boldsymbol{r}^{\prime},k)=\frac{1}{\boldsymbol{k}!} D_{\boldsymbol{r}}^{\boldsymbol{k}}\psi, \end{equation} where $\boldsymbol{r}^{\prime}=(x^{\prime},y^{\prime},Z^{\prime})$ is a coordinate with complex $z$-coordinate $Z^{\prime}$. Then, we have the following recurrence formula \begin{equation} \begin{split} |\boldsymbol{k}||\boldsymbol{r}-\boldsymbol{r}^{\prime}|^{2}a^{\boldsymbol{k}} & +2(|\boldsymbol{k}|-1)\sum\limits_{i=1}^{3} (\boldsymbol{r}-\boldsymbol{r}^{\prime})_{i} a^{\boldsymbol{k}-\boldsymbol{e}_{i}}+(|\boldsymbol{k}|+1)\sum\limits_{i=1} ^{3}a^{\boldsymbol{k}-2\boldsymbol{e}_{i}}\\ = & \ri k\Big(\sum\limits_{i=1}^{3}(\boldsymbol{r}-\boldsymbol{r}^{\prime })_{i}b^{\boldsymbol{k}-\boldsymbol{e}_{i}}+\sum\limits_{i=1}^{2} b^{\boldsymbol{k}-2\boldsymbol{e}_{i}}\Big),\\ |\boldsymbol{k}|b^{\boldsymbol{k}}= & \ri k\Big(\sum\limits_{i=1} ^{3}(\boldsymbol{r}-\boldsymbol{r}^{\prime})_{i} a^{\boldsymbol{k}-\boldsymbol{e}_{i}}+\sum\limits_{i=1}^{3} a^{\boldsymbol{k}-2\boldsymbol{e}_{i}}\Big). \end{split} \label{recurrsion} \end{equation} The derivation can be done by simply following the procedure in \cite{li2009cartesian}, since the involved derivatives are independent of the complex coordinate $z^{\prime}$. With this recurrence formula, the derivatives \begin{equation} D_{\boldsymbol{r}}^{\boldsymbol{k}}D_{\boldsymbol{r}^{\prime}} ^{\boldsymbol{k}_{0}^{\prime}}h_{0}^{(1)}(k_{\ell}R_{j}^{k_{3}^{\prime} })=(-1)^{|\boldsymbol{k}_{0}^{\prime}|}D_{\boldsymbol{r}} ^{\boldsymbol{k}+\boldsymbol{k}_{0}^{\prime}}h_{0}^{(1)}(k_{\ell}R_{j} ^{k_{3}^{\prime}}), \end{equation} can be efficiently calculated. In the free space TE-FMM, the most time consuming part is the translation from a source box to a target box, where a recurrence formula is used to calculate all $O(p^{6})$ derivatives. Note that $D_{\boldsymbol{r}}^{\boldsymbol{k}} D_{\boldsymbol{r}^{\prime}}^{\boldsymbol{k}^{\prime}}u_{\ell\ell^{\prime} }^{\uparrow}(\boldsymbol{r}_{c}^{l},\boldsymbol{r}_{c})$ only depends on the centers of the corresponding boxes. More precisely, the density \eqref{layermediumdensity} approximated by DCIM only depends on the $z$-coordinates of the centers of the boxes in the source tree. Once the tree structure is fixed, we can pre-compute a table for all complex exponential approximations used for the computation of $D_{\boldsymbol{r}} ^{\boldsymbol{k}}D_{\boldsymbol{r}^{\prime}}^{\boldsymbol{k}^{\prime}} u_{\ell\ell^{\prime}}^{\uparrow}(\boldsymbol{r}_{c}^{l},\boldsymbol{r}_{c})$. Assuming the depth of the source tree is $H$, only $2^{H}(p+1)$ DCIM approximations need to be precomputed. Next, we will give some numerical results to show the accuracy of the two level DCIM and show that taking derivatives with respect to $x,y,z,x^{\prime },y^{\prime}$ will not result in an accuracy loss. 
For this purpose, let us consider the approximation of \begin{equation} \Big(\frac{\partial}{\partial x}+\ri\frac{\partial}{\partial y}\Big)^{s} \Big(\frac{\partial}{\partial z}\Big)^{k_{3}}\Big(\frac{\partial}{\partial z^{\prime}}\Big)^{k_{3}^{\prime}}\frac{u_{11}^{\uparrow} (\boldsymbol{r},\boldsymbol{r}^{\prime})}{s!k_{3}!k_{3}^{\prime} !},\label{testderi} \end{equation} for three layers case with $k_{0}=0.8$, $k_{1}=1.5$, $k_{2}=2.0$, $d=2.0$. In the two level DCIM approximation, we set $T_{0}=\sqrt{\Big(\frac{k_{2} +0.8}{k_{1}}\Big)^{2}-1},T_{1}=10$ and use $101$ sample points in each level. \begin{table}[ptbh] \centering {\small \begin{tabular} [c]{|c|c|c|c|c|}\hline & $(k_{3}, k_{3}^{\prime}, s)$ & direct quadrature & DCIM & error\\\hline \multirow{5}{*}{\begin{tabular}{c} $\bs r=(0.5, 1.0, -0.5)$\\ $\bs r'=(0.3, 1.3, -0.5)$ \end{tabular}} & (0, 0, 0) & 0.0636386627264339 & 0.063638662478093 & 2.4834e-10\\\cline{2-5} & (3, 4, 0) & 0.00474777580070183 & 0.004747777003526 & -1.2028e-09\\\cline{2-5} & (8, 8, 0) & -7.40635683599036e-10 & -7.406632552555640e-10 & 2.7572e-14\\\cline{2-5} & (0, 0, 4) & -1.77276908208051e-06 & -1.772752691103600e-06 & -1.6391e-11\\\cline{2-5} & (0, 0, 8) & 1.61980348471514e-11 & 1.619802703109298e-11 & 7.8161e-18\\\hline \multirow{5}{*}{\begin{tabular}{c} $\bs r=(0.6, 0.3, -1.2)$\\ $\bs r'=(0.5, 1.0, -0.5)$ \end{tabular}} & (0, 0, 0) & 0.0470021533117637 & 0.047002199376864 & -4.6065e-08\\\cline{2-5} & (3, 4, 0) & 0.00185695910047338 & 0.001856957782404 & 1.3181e-09\\\cline{2-5} & (8, 8, 0) & 5.85835080649916e-09 & 5.858364813170763e-09 & -1.4007e-14\\\cline{2-5} & (0, 0, 4) & -1.71372127668556e-05 & -1.713524114661759e-05 & -1.9716e-9\\\cline{2-5} & (0, 0, 8) & -1.26729956194435e-07 & -1.267386149809553e-07 & 8.6588e-12\\\hline \end{tabular} }\caption{Numerical results of $(k_{3},k_{3}^{\prime},s)-$derivatives in \eqref{testderi} (real parts).} \label{Table:gpofapproximationrp} \end{table}\begin{table}[ht!] \centering {\small \begin{tabular} [c]{|c|c|c|c|c|}\hline & $(k_{3}, k_{3}^{\prime}, s)$ & direct quadrature & DCIM & error\\\hline \multirow{5}{*}{\begin{tabular}{c} $\bs r=(0.5, 1.0, -0.5)$\\ $\bs r'=(0.3, 1.3, -0.5)$ \end{tabular}} & (0, 0, 0) & 0.00236214962912961 & 0.002362151697708 & -2.0686e-09\\\cline{2-5} & (3, 4, 0) & -0.00126663970537548 & - 0.001266638701878 & -1.0035e-09\\\cline{2-5} & (8, 8, 0) & 1.3083718652325e-06 & 1.308371795385077e-06 & 6.9847e-14\\\cline{2-5} & (0, 0, 4) & -1.50190394931086e-06 & - 1.501883557929564e-06 & -2.0391e-11\\\cline{2-5} & (0, 0, 8) & -2.87922306729206e-13 & - 2.878701763048925e-13 & -5.2130e-17\\\hline \multirow{5}{*}{\begin{tabular}{c} $\bs r=(0.6, 0.3, -1.2)$\\ $\bs r'=(0.5, 1.0, -0.5)$ \end{tabular}} & (0, 0, 0) & -0.0655662374392812 & - 0.065566216753017 & -2.069e-08\\\cline{2-5} & (3, 4, 0) & -0.00407200441147604 & - 0.004072001032057 & -3.3794e-09\\\cline{2-5} & (8, 8, 0) & -5.8052078071366e-05 & - 5.805409125522990e-05 & 2.0132e-09\\\cline{2-5} & (0, 0, 4) & 0.000103591338132027 & 1.035831449609178e-04 & 8.1932e-09\\\cline{2-5} & (0, 0, 8) & 5.90666673167792e-08 & 5.907018897856176e-08 & -3.5217e-12\\\hline \end{tabular} }\caption{Numerical results of $(k_{3},k_{3}^{\prime},s)-$derivatives in \eqref{testderi} (imaginary parts).} \label{Table:gpofapproximationip} \end{table}\begin{figure}\label{gpofapproximation} \end{figure}Approximations of $\Theta_{11}^{k_{3}^{\prime}}(k_{\rho},-0.5)$ with $z_{min}=-1.5$ and corresponding errors for different order of derivatives are depicted in Fig. 
\ref{gpofapproximation}. Numerical results obtained by direct quadrature with contour deformation and by the DCIM approximation are compared in Tables \ref{Table:gpofapproximationrp}-\ref{Table:gpofapproximationip}. A large number of Gauss points is used for the quadrature calculation so that machine accuracy is attained, and these values are used as references. The numerical results presented in Tables \ref{Table:gpofapproximationrp}-\ref{Table:gpofapproximationip} show that DCIM can produce highly accurate approximations even for high order derivatives. Taking derivatives with respect to $x,y,z$ does not degrade the accuracy. Since derivatives with respect to $x^{\prime},y^{\prime}$ differ only by a sign from those with respect to $x,y$, taking derivatives with respect to $x^{\prime},y^{\prime}$ does not degrade the accuracy either. Now we can present two algorithms for the computation of the general component \eqref{generalsum} and the total interaction \eqref{totalinteraction}, respectively. \begin{algorithm}\label{algorithm1} \caption{TEFMM-I for general component \eqref{generalsum}} \begin{algorithmic} \State Generate an adaptive hierarchical tree structure and precompute tables. \State{\bf Upward pass:} \For{$l=H \to 0$} \For{all boxes $j$ on source tree level $l$ } \If{$j$ is a leaf node} \State{form the free-space TE using Eq. \eqref{taylorexpfar3layers}.} \Else \State form the free-space TE by merging children's expansions using the free-space center shift translation operator \eqref{TETartoTETar}. \EndIf \EndFor \EndFor \State{\bf Downward pass:} \For{$l=1 \to H$} \For{all boxes $j$ on target tree level $l$ } \State shift the TE of $j$'s parent to $j$ itself using the free-space translation operator \eqref{freeletole}. \State collect the interaction list contributions using the source box to target box translation operator in Eq. \eqref{tesourceboxtotargetbox} with the precomputed table for \eqref{compleximageapp} and the recurrence formula \eqref{recurrsion}. \EndFor \EndFor \State {\bf Evaluate Local Expansions:} \For{each leaf node (childless box)} \State evaluate the local expansion at each particle location. \EndFor \State {\bf Local Direct Interactions:} \For{$i=1 \to N$ } \State compute the contribution to Eq. \eqref{generalsum} at target particle $i$ from sources in the neighboring boxes using the precomputed table of $u_{\ell\ell'}^{\uparrow}(\bs r, \bs r')$. \EndFor \end{algorithmic} \end{algorithm}\begin{algorithm}\label{algorithm2} \caption{Taylor expansion based heterogeneous 3-D FMM for \eqref{totalinteraction}} \begin{algorithmic} \For{$\ell=0 \to L$} \For{$\ell'=0 \to L$ } \If{$\ell=\ell'$} \State{use free space FMM to compute $\Phi_{\ell}^{free}$.} \EndIf \If{$\ell=0$} \State use {\bf Algorithm 1} to compute $\Phi_{0\ell'}^{\uparrow}$. \Else \If{$\ell=L$} \State use {\bf Algorithm 1} to compute $\Phi_{L\ell'}^{\downarrow}$. \Else \State use {\bf Algorithm 1} to compute $\Phi_{\ell\ell'}^{\uparrow}$. \State use {\bf Algorithm 1} to compute $\Phi_{\ell\ell'}^{\downarrow}$. \EndIf \EndIf \EndFor \EndFor \end{algorithmic} \end{algorithm} \section{Second Taylor-Expansion based FMM in multi-layered media} As discussed in the last section, the Taylor expansion based FMM in multi-layered media depends on an efficient algorithm for the calculation of the corresponding Green's function and its derivatives. The algorithm using discrete complex images and recurrence formulas is efficient. However, the discrete complex image approximation may suffer from stability problems in the calculation of high order derivatives. 
Since the Green's function in multi-layered media has a symmetry in the $x-y$ plane, it is worthwhile to maintain this symmetry. For this purpose, we use the notation $\mathcal H_k(\bs r, \bs r')=h_0^{(1)}(k|\bs r-\bs r'|)$ and introduce the differential operators \begin{equation} \begin{split} \mathscr{D}_{nm}^{s}= & \Big(\frac{\partial}{\partial x}-\ri\frac{\partial }{\partial y}\Big)^{s}\Big(\frac{\partial}{\partial x}+\ri\frac{\partial }{\partial y}\Big)^{m-{s}}\Big(\frac{\partial}{\partial z}\Big)^{n-m},\\ \widehat{\mathscr{D}}_{nm}^{s}= & \Big(\frac{\partial}{\partial x^{\prime} }-\ri\frac{\partial}{\partial y^{\prime}}\Big)^{s}\Big(\frac{\partial }{\partial x^{\prime}}+\ri\frac{\partial}{\partial y^{\prime}}\Big)^{m-s} \Big(\frac{\partial}{\partial z^{\prime}}\Big)^{n-m}. \end{split} \label{sysdifferential} \end{equation} \subsection{Free space} We start by rearranging the TE of $h_{0}^{(1)} (k|\boldsymbol{r}-\boldsymbol{r}^{\prime}|)$ using the operators defined in \eqref{sysdifferential}. \begin{theorem} \label{taylorexptheorem} Suppose $|\boldsymbol{r}^{\prime}|\leq a$ for a given small radius $a$, then the Taylor expansion of $h_{0}^{(1)} (k|\boldsymbol{r}-\boldsymbol{r}^{\prime}|)$ at the origin with respect to $\boldsymbol{r}^{\prime}$ is \begin{equation} \label{teatrc}h_{0}^{(1)}(k|\boldsymbol{r}-\boldsymbol{r}^{\prime} |)=\sum_{n=0}^{\infty}\sum_{m=0}^{n}\sum_{s =0}^{m}\tilde{\alpha}_{nm} ^{s}(\boldsymbol{r}^{\prime})\frac{\widehat{\mathscr D}_{nm}^s\mathcal H_k(\bs r, \bs 0)}{2^{m}(n-m)!s!(m-s)!}, \end{equation} where \begin{equation} \label{eqn:coefMExyz}\quad\tilde{\alpha}_{nm}^{s}(\boldsymbol{r}^{\prime })=(x^{\prime}+\ri y^{\prime})^{s}(x^{\prime}-\ri y^{\prime})^{m-s}(z^{\prime })^{n-m}. \end{equation} \end{theorem} \begin{proof} Denote the spherical coordinates of ${\boldsymbol{r}}^{\prime}$ and ${\boldsymbol{r}}$ as $(\rho,\alpha,\beta)$ and $(r,\theta,\varphi)$, respectively. Applying a Taylor expansion to $h_{0}^{(1)}(k|{\boldsymbol{r}} -{\boldsymbol{r}}^{\prime}|)$ with respect to $\boldsymbol{r}^{\prime }=(x^{\prime},y^{\prime},z^{\prime})$ at the origin and converting the derivatives to derivatives with respect to $(x,y,z)$, we have \begin{equation} \begin{split} h_{0}^{(1)}(k|{\boldsymbol{r}}-{\boldsymbol{r}}^{\prime}|)= & \mathcal H_k({\boldsymbol{r}}, \bs 0)+\sum_{n=1}^{\infty}\frac{(-1)^{n} }{n!}\left( x^{\prime}\frac{\partial}{\partial x}+y^{\prime}\frac{\partial }{\partial y}+z^{\prime}\frac{\partial}{\partial z}\right) ^{n}\mathcal H_k({\boldsymbol{r}}, \bs 0)\\ = &\mathcal H_k({\boldsymbol{r}}, \bs 0)+\sum_{n=1}^{\infty} \frac{(-1)^{n}\rho^{n}}{n!}\sum_{m=0}^{n}\binom{n}{m}\sin^{m}\alpha\left( \cos{\beta}\frac{\partial}{\partial x}+\sin{\beta}\frac{\partial}{\partial y}\right) ^{m}\\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\times\cos^{n-m}\alpha\left( \frac{\partial}{\partial z}\right) ^{n-m}\mathcal H_k({\boldsymbol{r}}, \bs 0). \end{split} \label{eqn:taylor} \end{equation} Notice that for any function $f(z),z\in\mathbb{C}$, we have \[ \left( \cos{\beta}\frac{\partial}{\partial x}+\sin{\beta}\frac{\partial }{\partial y}\right) f=\frac{1}{2}\Big[e^{\ri\beta}\left( \frac{\partial }{\partial x}-\ri\frac{\partial}{\partial y}\right) +e^{-\ri\beta}\left( \frac{\partial}{\partial x}+\ri\frac{\partial}{\partial y}\right) \Big]f. \] Therefore, we can rewrite Eq. 
(\ref{eqn:taylor}) as
\begin{equation}
\begin{split}
h_{0}^{(1)}(k|{\boldsymbol{r}}-{\boldsymbol{r}}^{\prime}|)= & \mathcal H_k({\boldsymbol{r}}, \bs 0)+\sum_{n=1}^{\infty}\sum_{m=0}^{n}\frac {(-1)^{n}\rho^{n}\sin^{m}\alpha\cos^{n-m}\alpha}{2^{m}(n-m)!}\\
& \qquad\qquad\qquad\qquad\times\sum_{s=0}^{m}\frac{e^{-\ri(m-s)\beta}e^{\ri s\beta}}{s!(m-s)!}\mathscr{D}_{nm}^{s}\mathcal H_k({\boldsymbol{r}}, \bs 0)\\
= & \mathcal H_k({\boldsymbol{r}}, \bs 0)+\sum_{n=1}^{\infty}\sum_{m=0}^{n} \sum_{s=0}^{m}\tilde{\alpha}_{nm}^{s}(\boldsymbol{r}^{\prime})\frac {(-1)^{n}\mathscr{D}_{nm}^{s}\mathcal H_k({\boldsymbol{r}}, \bs 0)}{2^{m}(n-m)!s!(m-s)!},
\end{split}
\label{eqn:taylor1}
\end{equation}
where
\begin{equation}
\tilde{\alpha}_{nm}^{s}(\boldsymbol{r}^{\prime})=\rho^{n}e^{\ri s\beta }e^{-\ri(m-s)\beta}\sin^{m}\alpha\cos^{n-m}\alpha=(x^{\prime}+\ri y^{\prime })^{s}(x^{\prime}-\ri y^{\prime})^{m-s}(z^{\prime})^{n-m}.
\label{eqn:coef1}
\end{equation}
We finish the proof by using the fact $(-1)^n\mathscr{D}_{nm}^{s}\mathcal H_k({\boldsymbol{r}}, \bs 0)=\widehat{\mathscr{D}}_{nm}^{s}\mathcal H_k({\boldsymbol{r}}, \bs 0)$ in \eqref{eqn:taylor1}.
\end{proof}
\begin{rem}
The notation $\mathcal H_k({\boldsymbol{r}}, \bs r')$ is used to emphasize that the derivatives $\mathscr{D}_{nm}^{s}h_0^{(1)}(k|{\boldsymbol{r}}-\bs r'|)$ and $\widehat{\mathscr{D}}_{nm}^{s}h_0^{(1)}(k|{\boldsymbol{r}}-\bs r'|)$ do not depend only on $|\bs r-\bs r'|$ but are functions of $(\bs r, \bs r')$.
\end{rem}
\begin{corollary} \label{taylorexpcorollary} Suppose $|\boldsymbol{r}|\leq a$ for a given small radius $a$. Then the Taylor expansion of $h_{0}^{(1)} (k|\boldsymbol{r}-\boldsymbol{r}^{\prime}|)$ at the origin with respect to $\boldsymbol{r}$ is
\begin{equation}
h_{0}^{(1)}(k|\boldsymbol{r}-\boldsymbol{r}^{\prime}|)=\sum_{n=0}^{\infty} \sum_{m=0}^{n}\sum_{s=0}^{m}{\beta}_{nm}^{s}\tilde{\alpha}_{nm}^{s} (\boldsymbol{r}),\label{taylorexploc}
\end{equation}
where
\begin{equation}
{\beta}_{nm}^{s}=\frac{\mathscr{D}_{nm}^{s}\mathcal H_k(\bs 0, \boldsymbol{r}^{\prime})}{2^{m}(n-m)!s!(m-s)!}.\label{eqn:coefM2L}
\end{equation}
\end{corollary}
With the Taylor expansions given in \eqref{teatrc} and \eqref{taylorexploc}, we can formulate the second Taylor expansion based FMM, which uses the following expansions:
\begin{itemize}
\item \textbf{Taylor expansion (TE) in a source box centered at $\boldsymbol{r}_{c}$: }
\begin{equation}
\sum\limits_{j\in J_{m}}q_{j}h_{0}^{(1)}(k|\boldsymbol{r}-\boldsymbol{r}_{j} |)\approx\sum_{n=0}^{p}\sum_{m=0}^{n}\sum_{s=0}^{m}{\alpha}_{nm}^{s} \frac{\widehat{\mathscr{D}}_{nm}^{s}\mathcal H_{k}(\boldsymbol{r}, \boldsymbol{r}_{c})}{2^{m}(n-m)!s!(m-s)!} ,\label{taylorexpfarxcfree2}
\end{equation}
where
\begin{equation}
{\alpha}_{nm}^{s}=\sum\limits_{j\in J_{m}}q_{j}\tilde{\alpha}_{nm} ^{s}(\boldsymbol{r}_{j}-\boldsymbol{r}_{c}).\label{mecoefficients}
\end{equation}
\item \textbf{Taylor expansion (TE) in a target box centered at $\boldsymbol{r}_{c}^l$: }
\begin{equation}
\sum\limits_{j\in J_{m}}q_{j}h_{0}^{(1)}(k|\boldsymbol{r}-\boldsymbol{r}_{j} |)\approx\sum_{n=0}^{p}\sum_{m=0}^{n}\sum_{s=0}^{m}{\beta}_{nm}^{s} \tilde{\alpha}_{nm}^{s}(\boldsymbol{r}-\boldsymbol{r}_{c}^{l} ),\label{taylorexplocxclfree2}
\end{equation}
where
\begin{equation}
{\beta}_{nm}^{s}=\sum\limits_{j\in J_{m}}\frac{q_{j}\mathscr{D}_{nm}^{s}\mathcal{H}_{k}(\boldsymbol{r}_{c}^{l}, \boldsymbol{r}_{j})}{2^{m} (n-m)!s!(m-s)!}.\label{localexpcoeff2}
\end{equation}
\end{itemize}
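For concreteness, a minimal Python sketch of forming the source-box coefficients \eqref{mecoefficients} from the particle data is given below; the function name and data layout are illustrative only, and the coefficients are stored in a dictionary keyed by $(n,m,s)$.
\begin{verbatim}
import numpy as np

def te_source_coefficients(q, pts, center, p):
    # alpha_{nm}^s = sum_j q_j (x'+iy')^s (x'-iy')^(m-s) (z')^(n-m),
    # with (x', y', z') = r_j - r_c  (Eq. (mecoefficients))
    d = pts - center                    # shape (N, 3)
    xp = d[:, 0] + 1j * d[:, 1]         # x' + i y'
    xm = d[:, 0] - 1j * d[:, 1]         # x' - i y'
    z = d[:, 2]
    alpha = {}
    for n in range(p + 1):
        for m in range(n + 1):
            for s in range(m + 1):
                alpha[(n, m, s)] = np.sum(q * xp**s * xm**(m - s) * z**(n - m))
    return alpha
\end{verbatim}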
The translation operators used in the FMM algorithm can be derived in the same way as in the conventional FMM. First, by applying Taylor expansion \eqref{taylorexpfarxcfree2} in \eqref{localexpcoeff2} and using the fact $\mathcal{H}_{k}(\boldsymbol{r}_{c}^{l}, \boldsymbol{r}_{j})=h_0^{(1)}(k|\bs r^l_c-\bs r_j|)$, we have
\[
\begin{split}
{\beta}_{nm}^{s}= & \frac{1}{2^{m}(n-m)!s!(m-s)!}\mathscr{D}_{nm}^{s} \sum_{n^{\prime}=0}^{p}\sum_{m^{\prime}=0}^{n^{\prime}}\sum_{s^{\prime} =0}^{m^{\prime}}{\alpha}_{n^{\prime}m^{\prime}}^{s^{\prime}}\frac {\widehat{\mathscr{D}}_{n^{\prime}m^{\prime}}^{s^{\prime}}\mathcal{H}_k(\boldsymbol{r}_{c}^{l}, \boldsymbol{r}_{c})}{2^{m^{\prime}}(n^{\prime }-m^{\prime})!s^{\prime}!(m^{\prime}-s^{\prime})!}\\
= & \sum_{n^{\prime}=0}^{p}\sum_{m^{\prime}=0}^{n^{\prime}}\sum_{s^{\prime }=0}^{m^{\prime}}{\alpha}_{n^{\prime}m^{\prime}}^{s^{\prime}}\frac {\mathscr{D}_{nm}^{s}\widehat{\mathscr{D}}_{n^{\prime}m^{\prime}}^{s^{\prime}}\mathcal{H}_k(\boldsymbol{r}_{c}^{l}, \boldsymbol{r}_{c})}{2^{m+m^{\prime} }(n-m)!s!(m-s)!(n^{\prime}-m^{\prime})!s^{\prime}!(m^{\prime}-s^{\prime})!}.
\end{split}
\]
Therefore, the translation operator from the TE in a source box centered at $\boldsymbol{r}_{c}$ to the TE in a target box centered at $\boldsymbol{r}_{c}^{l}$ is given by
\begin{equation}
{\beta}_{nm}^{s}=\sum_{n^{\prime}=0}^{p}\sum_{m^{\prime}=0}^{n^{\prime}} \sum_{s^{\prime}=0}^{m^{\prime}}{\alpha}_{n^{\prime}m^{\prime}}^{s^{\prime} }L_{nms}^{n^{\prime}m^{\prime}s^{\prime}},
\end{equation}
where
\begin{equation}
L_{nms}^{n^{\prime}m^{\prime}s^{\prime}}= \frac{\mathscr{D}_{nm}^{s}\widehat{\mathscr{D}}_{n^{\prime}m^{\prime}}^{s^{\prime}}\mathcal{H}_k(\boldsymbol{r}_{c}^{l}, \boldsymbol{r}_{c})}{2^{m+m^{\prime}}(n-m)!s!(m-s)!(n^{\prime}-m^{\prime})!s^{\prime }!(m^{\prime}-s^{\prime})!}.
\label{M2Ltransop}
\end{equation}
Denote the coefficients of the TE in the source box centered at $\boldsymbol{r}_{c}^{\prime}$ by
\[
{\gamma}_{nm}^{s}=\sum\limits_{j=1}^{N}q_{j}\tilde{\alpha}_{nm}^{s} (\boldsymbol{r}_{j}-\boldsymbol{r}_{c}^{\prime}).
\]
Direct calculation gives
\begin{equation}
\begin{split}
& [(x_{j}-x_{c}^{\prime})+\ri(y_{j}-y_{c}^{\prime})]^{s}[(x_{j}-x_{c} ^{\prime})-\ri(y_{j}-y_{c}^{\prime})]^{m-s}(z_{j}-z_{c}^{\prime})^{n-m}\\
= & \sum\limits_{s^{\prime}=0}^{s}\sum\limits_{m^{\prime}=0}^{m-s} \sum\limits_{n^{\prime}=0}^{n-m}B_{nms}^{n^{\prime}m^{\prime}s^{\prime} }[(x_{j}-x_{c})+\ri(y_{j}-y_{c})]^{s^{\prime}}[(x_{j}-x_{c})-\ri(y_{j} -y_{c})]^{m^{\prime}}(z_{j}-z_{c})^{n^{\prime}}
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}
B_{nms}^{n^{\prime}m^{\prime}s^{\prime}}= & \frac{s!(m-s)!(n-m)![(x_{c} -x_{c}^{\prime})+\ri(y_{c}-y_{c}^{\prime})]^{s-s^{\prime}}}{s^{\prime }!(s-s^{\prime})!m^{\prime}!(m-s-m^{\prime})!n^{\prime}!(n-m-n^{\prime})!}\\
& \times\lbrack(x_{c}-x_{c}^{\prime})-\ri(y_{c}-y_{c}^{\prime} )]^{m-s-m^{\prime}}(z_{c}-z_{c}^{\prime})^{n-m-n^{\prime}}.
\end{split} \end{equation} Therefore, \begin{equation} \tilde{\alpha}_{nm}^{s}(\boldsymbol{r}_{j}-\boldsymbol{r}_{c}^{\prime} )=\sum\limits_{s^{\prime}=0}^{s}\sum\limits_{m^{\prime}=0}^{m-s} \sum\limits_{n^{\prime}=0}^{n-m}B_{nms}^{n^{\prime}m^{\prime}s^{\prime}} \tilde{\alpha}_{n^{\prime}+m^{\prime}+s^{\prime},m^{\prime}+s^{\prime} }^{s^{\prime}}(\boldsymbol{r}_{j}-\boldsymbol{r}_{c}), \end{equation} which implies that the translation operator from TE in a source box centered at $\boldsymbol{r}_{c}$ to TE in another source box centered at $\boldsymbol{r}_{c}^{\prime}$ has the form \begin{equation} \gamma_{nm}^{s}=\sum\limits_{s^{\prime}=0}^{s}\sum\limits_{m^{\prime}=0} ^{m-s}\sum\limits_{n^{\prime}=0}^{n-m}B_{nms}^{n^{\prime}m^{\prime}s^{\prime} }{\alpha}_{n^{\prime}+m^{\prime}+s^{\prime},m^{\prime}+s^{\prime}}^{s^{\prime }}.\label{me2mesymmetricte} \end{equation} Let \begin{equation} {\lambda}_{nm}^{s}=\sum\limits_{j\in J_{m}}\frac{q_{j}\mathscr{D}_{nm}^{s}\mathcal{H}_{k}(\tilde{\boldsymbol{r}}_{c}^{l},\boldsymbol{r}_{j})} {2^{m}(n-m)!s!(m-s)!}, \end{equation} be the coefficients of TE in a target box centered at $\tilde{\boldsymbol{r}} _{c}^{l}$. By applying Taylor expansion at $\bs r_c^l$ we obtain \begin{equation} \begin{split} {\lambda}_{nm}^{s}= & \frac{1}{2^{m}(n-m)!s!(m-s)!}{\mathscr D}_{nm}^{s} \sum\limits_{j\in J_{m}}q_{j}h_{0}^{(1)}(k|\tilde{\boldsymbol{r}}_{c} ^{l}-\boldsymbol{r}_{j}|)\\ \approx & \frac{1}{2^{m}(n-m)!s!(m-s)!}\sum_{n^{\prime}=0}^{p}\sum _{m^{\prime}=0}^{n^{\prime}}\sum_{s^{\prime}=0}^{m^{\prime}}{\beta} _{n^{\prime}m^{\prime}}^{s^{\prime}}{\mathscr D}_{nm}^{s}\tilde{\alpha }_{n^{\prime}m^{\prime}}^{s^{\prime}}(\tilde{\boldsymbol{r}}_{c} ^{l}-\boldsymbol{r}_{c}^{l}). \end{split} \label{taylorexplocxcltrans} \end{equation} Note that \[ \begin{split} & \Big(\frac{\partial}{\partial\tilde{x}_{c}^{l}}-\ri\frac{\partial} {\partial\tilde{y}_{c}^{l}}\Big)^{s}\Big(\frac{\partial}{\partial\tilde{x} _{c}^{l}}+\ri\frac{\partial}{\partial\tilde{y}_{c}^{l}}\Big)^{m-s}[(\tilde {x}_{c}^{l}-x_{c}^{l})+\ri(\tilde{y}_{c}^{l}-y_{c}^{l})]^{s^{\prime}} [(\tilde{x}_{c}^{l}-x_{c}^{l})-\ri(\tilde{y}_{c}^{l}-y_{c}^{l})]^{m^{\prime }-s^{\prime}}\\ = & \begin{cases} \displaystyle0,\quad m-s>m^{\prime}-s^{\prime}\;\;\mathrm{or}\;\;s>s^{\prime },\\[6pt] \displaystyle\frac{2^{m}s^{\prime}!(m^{\prime}-s^{\prime})![(\tilde{x}_{c} ^{l}-x_{c}^{l})+\ri(\tilde{y}_{c}^{l}-y_{c}^{l})]^{s^{\prime}-s}}{(s^{\prime }-s)!(m^{\prime}-s^{\prime}-m+s)!}[(\tilde{x}_{c}^{l}-x_{c}^{l})-\ri(\tilde {y}_{c}^{l}-y_{c}^{l})]^{m^{\prime}-s^{\prime}-m+s},\quad\mathrm{otherwise}, \end{cases} \\ & \Big(\frac{\partial}{\partial\tilde{z}_{c}^{l}}\Big)^{n-m}(\tilde{z} _{c}^{l}-z_{c}^{l})^{n^{\prime}-m^{\prime}}= \begin{cases} \displaystyle0,\quad\mathrm{if}\;n-m>n^{\prime}-m^{\prime},\\ \displaystyle\frac{(n^{\prime}-m^{\prime})!}{(n^{\prime}-m^{\prime} -(n-m))!}(\tilde{z}_{c}^{l}-z_{c}^{l})^{n^{\prime}-m^{\prime}-n+m} ,\;\mathrm{otherwise}.\\ \end{cases} \end{split} \] Therefore, \begin{equation} {\mathscr D}_{nm}^{s}\tilde{\alpha}_{n^{\prime}m^{\prime}}^{s^{\prime}} (\tilde{\boldsymbol{r}}_{c}^{l}-\boldsymbol{r}_{c}^{l})= \begin{cases} \displaystyle0,\quad\mathrm{if}\;\;m-s>m^{\prime}-s^{\prime}\;\;\mathrm{or} \;\;s>s^{\prime}\;\;\mathrm{or}\;\;\;n-m>n^{\prime}-m^{\prime};\\[10pt] \displaystyle2^{m}C_{nms}^{n^{\prime}m^{\prime}s^{\prime}}\tilde{\alpha }_{n^{\prime}-n,m^{\prime}-m}^{s^{\prime}-s}(\tilde{\boldsymbol{r}}_{c} ^{l}-\boldsymbol{r}_{c}^{l}),\quad\mathrm{otherwise}, \end{cases} \label{LE2LErans2} 
\end{equation}
where
\[
C_{nms}^{n^{\prime}m^{\prime}s^{\prime}}=\frac{(m^{\prime}-s^{\prime })!s^{\prime}!(n^{\prime}-m^{\prime})!}{(m^{\prime}-s^{\prime}-m+s)!(s^{\prime }-s)!(n^{\prime}-m^{\prime}-(n-m))!}.
\]
Substituting \eqref{LE2LErans2} into \eqref{taylorexplocxcltrans} gives the translation operator from the TE in a target box centered at $\boldsymbol{r}_{c}^{l}$ to the TE in another target box centered at $\tilde{\boldsymbol{r}}_{c}^{l}$:
\begin{equation}
{\lambda}_{nm}^{s}=\sum\limits_{n^{\prime}=n}^{p}\sum\limits_{m^{\prime} =m}^{m+n^{\prime}-n}\sum_{s^{\prime}=s}^{s+m^{\prime}-m}\beta_{n^{\prime }m^{\prime}}^{s^{\prime}}\frac{C_{nms}^{n^{\prime}m^{\prime}s^{\prime}} \tilde{\alpha}_{n^{\prime}-n,m^{\prime}-m}^{s^{\prime}-s}(\tilde {\boldsymbol{r}}_{c}^{l}-\boldsymbol{r}_{c}^{l})}{(n-m)!s!(m-s)!}.
\label{le2lesymmetricte}
\end{equation}
To ensure the efficiency of this algorithm, a fast method is needed for the calculation of the derivatives $\mathscr{D}_{nm}^{s}\mathcal H_k(\bs r, \bs r')$ and $\mathscr{D}_{nm}^{s}\widehat{\mathscr{D}}_{n^{\prime}m^{\prime}}^{s^{\prime}}\mathcal H_k(\bs r, \bs r')$ which are used in the computation of the coefficients \eqref{localexpcoeff2} and the translation operators \eqref{M2Ltransop}. A recurrence formula can be derived from the following result (cf.~\cite{martin2006multiple}). Define
\begin{equation}
\Omega_{n}^{m}(\boldsymbol{r})=h_{n}^{(1)}(k|\boldsymbol{r}|)Y_{n}^{m} (\theta,\phi),\quad\mathscr D^{\pm}=\frac{1}{k}\Big(\frac{\partial}{\partial x}\pm\ri\frac{\partial}{\partial y}\Big),\quad\mathscr D^{0}=-\frac{1}{k} \frac{\partial}{\partial z},
\end{equation}
where
\begin{equation}
Y_{n}^{m}(\theta,\phi)=(-1)^{m}\sqrt{\frac{2n+1}{4\pi}\frac{(n-m)!}{(n+m)!} }P_{n}^{m}(\cos\theta)e^{\ri m\phi},
\end{equation}
is the spherical harmonic.
\begin{theorem} \label{Thm:hankelderiative} For $0\leq|m|\leq n$,
\begin{equation}
\begin{split}
& \mathscr D^{+}\Omega_{n}^{m}=A_{nm}^{+}\Omega_{n+1}^{m+1}+B_{nm}^{+} \Omega_{n-1}^{m+1},\\
& \mathscr D^{-}\Omega_{n}^{m}=A_{nm}^{-}\Omega_{n+1}^{m-1}+B_{nm}^{-} \Omega_{n-1}^{m-1},\\
& \mathscr D^{0}\Omega_{n}^{m}=A_{nm}^{0}\Omega_{n+1}^{m}+B_{nm}^{0} \Omega_{n-1}^{m},
\end{split}
\end{equation}
where
\[
\begin{split}
& A_{nm}^{+}=\sqrt{\frac{(n+m+2)(n+m+1)}{(2n+1)(2n+3)}},\quad B_{nm} ^{+}=\sqrt{\frac{(n-m)(n-m-1)}{4n^{2}-1}},\\
& A_{nm}^{-}=-\sqrt{\frac{(n-m+2)(n-m+1)}{(2n+1)(2n+3)}},\quad B_{nm} ^{-}=-\sqrt{\frac{(n+m)(n+m-1)}{4n^{2}-1}},\\
& A_{nm}^{0}=-\sqrt{\frac{(n+1)^{2}-m^{2}}{(2n+1)(2n+3)}},\quad B_{nm} ^{0}=\sqrt{\frac{n^{2}-m^{2}}{4n^{2}-1}}.
\end{split}
\]
In particular, for $n\ge0$,
\begin{equation}
\mathscr D^{0}\Omega_{n}^{\pm n}=\frac{1}{\sqrt{2n+3}}\Omega_{n+1}^{\pm n}.
\end{equation}
\end{theorem}
From Theorem \ref{Thm:hankelderiative}, the high order derivatives can be expressed as
\[
\{(\mathscr D^{+})^{s},(\mathscr D^{-})^{s},(\mathscr D^{0})^{s}\}\Omega _{n}^{m}=\sum\limits_{r=0}^{s}\big\{{C}_{rs}^{+}\Omega_{n-s+2r}^{m+s},{C} _{rs}^{-}\Omega_{n-s+2r}^{m-s},C_{rs}^{0}\Omega_{n-s+2r}^{m}\big\},
\]
where the coefficients $\{{C}_{rs}^{+}\},\{C_{rs}^{-}\},\{{C}_{rs}^{0}\}$ satisfy the recurrence formulas
\[
\begin{split}
& C_{rs}^{+}=
\begin{cases}
\displaystyle0, & n-s+2r<m+s,\\[5pt]
\displaystyle B_{n-s+1,m+s-1}^{+}C_{0,s-1}^{+}, & r=0,\\[5pt]
\displaystyle A_{n+s-1,m+s-1}^{+}C_{s-1,s-1}^{+}, & r=s,\\[5pt]
\displaystyle A_{n-s+2r-1,m+s-1}^{+}C_{r-1,s-1}^{+}+B_{n-s+2r+1,m+s-1} ^{+}C_{r,s-1}^{+}, & 0<r<s,
\end{cases}
\\
& C_{rs}^{-}=
\begin{cases}
\displaystyle0, & n-s+2r<|m-s|,\\[5pt]
\displaystyle B_{n-s+1,m-s+1}^{-}C_{0,s-1}^{-}, & r=0,\\[5pt]
\displaystyle A_{n+s-1,m-s+1}^{-}C_{s-1,s-1}^{-}, & r=s,\\[5pt]
\displaystyle A_{n-s+2r-1,m-s+1}^{-}C_{r-1,s-1}^{-}+B_{n-s+2r+1,m-s+1} ^{-}C_{r,s-1}^{-}, & 0<r<s,
\end{cases}
\\
& {C}_{rs}^{0}=
\begin{cases}
\displaystyle0, & n-s+2r<m,\\[5pt]
\displaystyle B_{n-s+1,m}^{0}C_{0,s-1}^{0}, & r=0,\\[5pt]
\displaystyle A_{n+s-1,m}^{0}C_{s-1,s-1}^{0}, & r=s,\\[5pt]
\displaystyle A_{n-(s-2r)-1,m}^{0}C_{r-1,s-1}^{0}+B_{n-(s-2r)+1,m} ^{0}C_{r,s-1}^{0}, & 0<r<s,
\end{cases}
\end{split}
\]
with initial values
\[
C_{00}^{+}=C_{00}^{-}={C}_{00}^{0}=1.
\]
Therefore,
\begin{equation}
\begin{split}
\mathscr D_{nm}^{s}\mathcal H_k(\boldsymbol{r},\bs r^{\prime})= & \Big(\frac{-1} {k}\Big)^{n}\sqrt{\frac{1}{4\pi}}(\mathscr D^{-})^{s}(\mathscr D^{+} )^{m-s}(\mathscr D^{0})^{n-m}\Omega_{0}^{0}(\boldsymbol{r}-\boldsymbol{r}^{\prime})\\
= & \Big(\frac{-1}{k}\Big)^{n}\sqrt{\frac{1}{4\pi}}\sum\limits_{n^{\prime} =0}^{n-m}\sum\limits_{m^{\prime}=0}^{m-s}\sum\limits_{s^{\prime}=0} ^{s}C_{n^{\prime}m^{\prime}}^{s^{\prime}}\Omega_{2(n^{\prime}+m^{\prime }+s^{\prime})-n}^{m-2s}(\boldsymbol{r}-\boldsymbol{r}^{\prime}),
\end{split}
\end{equation}
where the coefficients $\{C_{n^{\prime}m^{\prime}}^{s^{\prime}}\}$ can be computed using the coefficients $\{{C}_{rs}^{+}\},\{C_{rs}^{-}\},\{{C}_{rs}^{0}\}$. This formula is also used for the calculation of $\mathscr{D}_{nm}^{s}\widehat{\mathscr{D}}_{n^{\prime}m^{\prime}}^{s^{\prime}}\mathcal H_k(\bs r, \bs r')=(-1)^{n^{\prime}}\mathscr{D}_{n+n^{\prime},m+m^{\prime}}^{s+s^{\prime}}\mathcal H_k(\bs r, \bs r')$.
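As an illustration, the one-step relations of Theorem \ref{Thm:hankelderiative} can be applied repeatedly to express a mixed derivative of $\Omega_0^0$ as a finite combination of the $\Omega_n^m$, which is the essence of the recurrence above. The following Python sketch shows this bookkeeping (up to the scalar prefactor in the formula above); the dictionary-based representation and the function names are illustrative only.
\begin{verbatim}
import numpy as np
from collections import defaultdict

# One-step coefficients of Theorem (Thm:hankelderiative).
def Ap(n, m): return  np.sqrt((n+m+2)*(n+m+1)/((2*n+1)*(2*n+3)))
def Bp(n, m): return  np.sqrt((n-m)*(n-m-1)/(4*n*n-1))
def Am(n, m): return -np.sqrt((n-m+2)*(n-m+1)/((2*n+1)*(2*n+3)))
def Bm(n, m): return -np.sqrt((n+m)*(n+m-1)/(4*n*n-1))
def A0(n, m): return -np.sqrt(((n+1)**2-m*m)/((2*n+1)*(2*n+3)))
def B0(n, m): return  np.sqrt((n*n-m*m)/(4*n*n-1))

def apply_D(coeffs, which):
    # Apply D^+, D^- or D^0 to a combination sum c_{nm} Omega_n^m,
    # stored as {(n, m): c}; returns the differentiated combination.
    dm = {'+': 1, '-': -1, '0': 0}[which]
    A = {'+': Ap, '-': Am, '0': A0}[which]
    B = {'+': Bp, '-': Bm, '0': B0}[which]
    out = defaultdict(complex)
    for (n, m), c in coeffs.items():
        out[(n + 1, m + dm)] += c * A(n, m)
        if n >= 1 and abs(m + dm) <= n - 1:   # keep only valid Omega_{n-1}^{m+dm}
            out[(n - 1, m + dm)] += c * B(n, m)
    return dict(out)

# Example: the operator (D^-)^1 (D^+)^1 (D^0)^2 applied to Omega_0^0,
# as needed for D_{nm}^s H_k with n = 4, m = 2, s = 1.
coeffs = {(0, 0): 1.0}
for _ in range(2):
    coeffs = apply_D(coeffs, '0')
coeffs = apply_D(coeffs, '+')
coeffs = apply_D(coeffs, '-')
\end{verbatim}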
\subsection{Multi-layer media}
Consider the calculation of the interactions given by \eqref{totalinteraction} with the setting presented in the last section. We only need to focus on a general component given by \eqref{generalsum}. According to \eqref{taylorexpfarxcfree2}-\eqref{localexpcoeff2}, the TE-FMM for \eqref{generalsum} will use the Taylor expansions
\begin{equation}
\Phi_{\ell\ell^{\prime}}^{b\uparrow}(\boldsymbol{r}_{\ell i})=\sum _{n=0}^{\infty}\sum_{m=0}^{n}\sum_{s=0}^{m}{\alpha}_{nm}^{s}\frac {\widehat{\mathscr{D}}_{nm}^{s}u_{\ell\ell^{\prime}}^{\uparrow} (\boldsymbol{r}_{\ell i},\boldsymbol{r}_{c})}{2^{m}(n-m)!s!(m-s)!} ,\quad{\alpha}_{nm}^{s}=\sum\limits_{j\in J_{m}}Q_{\ell^{\prime}j} \tilde{\alpha}_{nm}^{s}(\boldsymbol{r}_{\ell^{\prime}j}-\boldsymbol{r}_{c} ),\label{taylorexpfarlayers}
\end{equation}
in a source box centered at $\boldsymbol{r}_{c}=(x_{c},y_{c},z_{c})$ and
\begin{equation}
\Phi_{\ell\ell^{\prime}}^{b\uparrow}(\boldsymbol{r}_{\ell i})=\sum_{n=0 }^{p}\sum_{m=0}^{n}\sum_{s=0}^{m}{\beta}_{nm}^{s}\tilde{\alpha}_{nm} ^{s}(\boldsymbol{r}_{\ell i}-\boldsymbol{r}_{c}^{l}),\quad{\beta}_{nm} ^{s}=\sum\limits_{j\in J_{m}}\frac{Q_{\ell^{\prime}j}\mathscr{D}_{nm} ^{s}u_{\ell\ell^{\prime}}^{\uparrow}(\boldsymbol{r}_{c}^{l} ,\boldsymbol{r}_{\ell^{\prime}j})}{2^{m}(n-m)!s!(m-s)!} ,\label{taylorexploclayers}
\end{equation}
in a target box centered at $\boldsymbol{r}_{c}^{l}=(x_{c}^{l},y_{c}^{l} ,z_{c}^{l})$, respectively. Applying the TE \eqref{taylorexpfarlayers} in the expression of the coefficients $\beta_{nm}^{s}$ in \eqref{taylorexploclayers}, we obtain
\begin{equation}
{\beta}_{nm}^{s}\approx\sum_{n^{\prime}=0}^{p}\sum_{m^{\prime}=0}^{n^{\prime} }\sum_{s^{\prime}=0}^{m^{\prime}}{\alpha}_{n^{\prime}m^{\prime}}^{s^{\prime} }\frac{\mathscr{D}_{nm}^{s}\widehat{\mathscr{D}}_{n^{\prime},m^{\prime} }^{s^{\prime}}u_{\ell\ell^{\prime}}^{\uparrow}(\boldsymbol{r}_{c} ^{l},\boldsymbol{r}_{c})}{M_{nms}^{n^{\prime}m^{\prime}s^{\prime}} },\label{layerm2ltrans}
\end{equation}
where
\[
M_{nms}^{n^{\prime}m^{\prime}s^{\prime}}=2^{m+m^{\prime}} (n-m)!s!(m-s)!(n^{\prime}-m^{\prime})!s^{\prime}!(m^{\prime}-s^{\prime})!.
\]
Due to the symmetry of the differential operators $\mathscr{D}_{nm}^{s}$ and $\widehat{\mathscr{D}}_{n^{\prime}m^{\prime}}^{s^{\prime}}$, the entries of the translation matrix in \eqref{layerm2ltrans} also have a symmetry in the $x-y$ plane. This can be shown by using the Sommerfeld integral representation \eqref{greenfuncomponent}.
In fact, by using the identities
\begin{equation}
\begin{split}
& \Big(\frac{\partial}{\partial x}+\ri\frac{\partial}{\partial y} \Big)^{m-{s}}J_{0}(kr)=(-k)^{m-s}J_{m-s}(kr)e^{\ri(m-s)\theta},\\
& \Big(\frac{\partial}{\partial x}-\ri\frac{\partial}{\partial y} \Big)^{s}\Big(J_{m-s}(kr)e^{\ri(m-s)\theta}\Big)=k^{s}J_{m-2s} (kr)e^{\ri(m-2s)\theta},
\end{split}
\end{equation}
we have
\begin{equation}
\begin{split}
& \frac{\mathscr{D}_{nm}^{s}\widehat{\mathscr{D}}_{n^{\prime},m^{\prime} }^{s^{\prime}}u_{\ell\ell^{\prime}}^{\uparrow}(\boldsymbol{r}_{c} ^{l},\boldsymbol{r}_{c})}{M_{nms}^{n^{\prime}m^{\prime}s^{\prime}}} =\frac{\mathscr{D}_{nm}^{s}\widehat{\mathscr{D}}_{n^{\prime}m^{\prime} }^{s^{\prime}}}{M_{nms}^{n^{\prime}m^{\prime}s^{\prime}}}\frac{1}{k_{\ell}} \int_{0}^{\infty}k_{\rho}J_{0}(k_{\rho}\rho)\frac{e^{\ri k_{\ell z}(z_{c} ^{l}-d_{\ell})}}{k_{\ell z}}\tilde{\sigma}_{\ell\ell^{\prime}}^{\uparrow }(k_{\rho},z_{c})dk_{\rho}\\
= & \frac{(-1)^{m+s+s^{\prime}}e^{\ri(m+m^{\prime}-2(s+s^{\prime}))\phi} }{M_{nms}^{n^{\prime}m^{\prime}s^{\prime}}}\frac{1}{k_{\ell}}\int_{0}^{\infty}k_{\rho }^{m+m^{\prime}+1}J_{m+m^{\prime}-2(s+s^{\prime})}(k_{\rho}\rho)\\
& \times(\ri k_{\ell z})^{n-m}\frac{e^{\ri k_{\ell z}(z_{c}^{l}-d_{\ell})} }{k_{\ell z}}\frac{\partial^{n^{\prime}-m^{\prime}}\tilde{\sigma}_{\ell \ell^{\prime}}^{\uparrow}(k_{\rho},z_{c})}{\partial z^{\prime}}dk_{\rho},
\end{split}
\end{equation}
where $(\rho,\phi)$ are the polar coordinates of $(x_{c}^{l}-x_{c},y_{c}^{l}-y_{c})$ and
\[
\frac{\partial^{n^{\prime}-m^{\prime}}\tilde{\sigma}_{\ell\ell^{\prime} }^{\uparrow}(k_{\rho},z_{c})}{\partial z^{\prime}}=(\ri k_{\ell^{\prime} z})^{n^{\prime}-m^{\prime}}\Big(e^{\ri k_{\ell^{\prime}z}(z_{c}-d_{\ell ^{\prime}})}\sigma_{\ell\ell^{\prime}}^{\uparrow\uparrow}(k_{\rho })+(-1)^{n^{\prime}-m^{\prime}}e^{\ri k_{\ell^{\prime}z}(d_{\ell^{\prime} -1}-z_{c})}\sigma_{\ell\ell^{\prime}}^{\uparrow\downarrow}(k_{\rho})\Big).
\]
For general integer indices $n,n^{\prime},m,m^{\prime}$, define the integrals
\begin{equation}
\mathcal{S}_{nn^{\prime}}^{mm^{\prime}}(\rho,z,z^{\prime})=\frac{1}{k_{\ell}}\int_{0}^{\infty}\frac{k_{\rho}^{m+1}J_{m-2m^{\prime}}(k_{\rho}\rho)(\ri k_{\ell z})^{n}}{2^{m}m!n!n^{\prime}!}\frac{e^{\ri k_{\ell z}(z-d_{\ell})} }{k_{\ell z}}\frac{\partial^{n^{\prime}}\tilde{\sigma}_{\ell\ell^{\prime} }^{\uparrow}(k_{\rho},z^{\prime})}{\partial z^{\prime}}dk_{\rho} .\label{symmetricdetable}
\end{equation}
Then
\[
\frac{\mathscr{D}_{nm}^{s}\widehat{\mathscr{D}}_{n^{\prime},m^{\prime} }^{s^{\prime}}u_{\ell\ell^{\prime}}^{\uparrow}(\boldsymbol{r}_{c} ^{l},\boldsymbol{r}_{c})}{M_{nms}^{n^{\prime}m^{\prime}s^{\prime}}} =\frac{(-1)^{m+s+s^{\prime}}(m+m^{\prime})!}{s!(m-s)!s^{\prime}!(m^{\prime }-s^{\prime})!}e^{\ri(m+m^{\prime}-2(s+s^{\prime}))\phi}\mathcal{S} _{n-m,n^{\prime}-m^{\prime}}^{m+m^{\prime},s+s^{\prime}}(\rho,z_{c}^{l} ,z_{c}).
\]
We pre-compute the integrals $\mathcal{S}_{nn^{\prime}}^{mm^{\prime}} (\rho,z,z^{\prime})$ on a 3D grid $\{\rho_{i},z_{j},z_{k}^{\prime}\}$ in the domain of interest for all $n,n^{\prime}=0,1,\cdots,p$; $m=0,1,\cdots,2p$; $m^{\prime}=0,1,\cdots,m$. Then, a polynomial interpolation is performed for the computation of the derivatives $\mathscr{D}_{nm}^{s}\widehat{\mathscr{D}} _{n^{\prime},m^{\prime}}^{s^{\prime}}u_{\ell\ell^{\prime}}^{\uparrow }(\boldsymbol{r}_{c}^{l},\boldsymbol{r}_{c})$ in the translation operators.
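As a minimal illustration of this table-based approach, the sketch below tabulates one such integral on a tensor-product grid and evaluates it by trilinear interpolation. The quadrature routine \texttt{S\_integral} (standing for the deformed-contour integration discussed next) and the grid spacing are left unspecified; trilinear interpolation is shown only as the simplest instance of the polynomial interpolation mentioned above.
\begin{verbatim}
import numpy as np

def build_table(S_integral, rho_grid, z_grid, zp_grid):
    # Tabulate S_{nn'}^{mm'}(rho, z, z') on a tensor grid; S_integral is a
    # placeholder callable performing the quadrature of Eq. (symmetricdetable)
    # for fixed indices.
    R, Z, ZP = np.meshgrid(rho_grid, z_grid, zp_grid, indexing='ij')
    return np.vectorize(S_integral)(R, Z, ZP)

def trilinear(table, grids, rho, z, zp):
    # Evaluate the tabulated integral at (rho, z, z') by trilinear
    # interpolation on the grid used in build_table.
    idx, w = [], []
    for g, x in zip(grids, (rho, z, zp)):
        i = int(np.clip(np.searchsorted(g, x) - 1, 0, len(g) - 2))
        idx.append(i)
        w.append((x - g[i]) / (g[i + 1] - g[i]))
    val = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                weight = (w[0] if a else 1 - w[0]) * \
                         (w[1] if b else 1 - w[1]) * \
                         (w[2] if c else 1 - w[2])
                val += weight * table[idx[0] + a, idx[1] + b, idx[2] + c]
    return val
\end{verbatim}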
The computation of Sommerfeld integrals similar to $\mathcal{S}_{nn^{\prime} }^{mm^{\prime}}(\rho,z,z^{\prime})$ is a standard problem in acoustic and electromagnetic scattering and is often handled by contour deformation. It is typical to deform the integration contour by pushing it away from the real line into the fourth quadrant of the complex $k_{\rho}$-plane to avoid branch points and poles of the integrand. Here, we use a piecewise smooth contour which consists of two segments:
\begin{equation}
\Gamma_{1}:\;\;\{k_{\rho}=\ri t,-b\leq t\leq0\},\quad\Gamma_{2} :\;\;\{k_{\rho}=t-\ri b,\quad0\leq t<\infty\}.
\end{equation}
We truncate $\Gamma_{2}$ at a point $t_{max}>0$, where the integrand has decayed to a user-specified tolerance.
\begin{figure}
\caption{Plots of the integrand along the integration contour $\Gamma_{1}$ with $\ell=\ell^{\prime}=1$ and different $(n,n^{\prime},m,m^{\prime})$.}
\label{integrand1}
\end{figure}
\begin{figure}
\caption{Plots of the integrand along the integration contour $\Gamma_{2}$ with $\ell=\ell^{\prime}=1$ and different $(n,n^{\prime},m,m^{\prime})$.}
\label{integrand2}
\end{figure}
As an example, we plot the integrand in \eqref{symmetricdetable} along $\Gamma_{1}$ and $\Gamma_{2}$ (see Figs. \ref{integrand1} and \ref{integrand2}). The three-layer case with $k_{0}=0.8,k_{1}=1.5,k_{2}=2.0,d=2.0,z=-0.3,z^{\prime}=-0.5$ and the density given in \eqref{densitythreelayer2} is used. We can see that the integrand decays exponentially along $\Gamma_{2}$ as $t$ goes to infinity.
\begin{rem}
Similar to the first TE-FMM, the translation operators for the center shift from source boxes to their parents and from target boxes to their children are exactly the same as in the free-space case and are given by \eqref{me2mesymmetricte} and \eqref{le2lesymmetricte}.
\end{rem}
The algorithm using symmetric derivatives for the general component \eqref{generalsum} is as follows:
\begin{algorithm}\label{algorithm3}
\caption{TEFMM-II for the general component \eqref{generalsum}}
\begin{algorithmic}
\State Generate an adaptive hierarchical tree structure and precompute tables.
\State{\bf Upward pass:}
\For{$l=H \to 0$}
\For{all boxes $j$ on source tree level $l$ }
\If{$j$ is a leaf node}
\State{form the free-space TE using Eq. \eqref{mecoefficients}.}
\Else
\State form the free-space TE by merging children's expansions using the free-space center shift translation operator \eqref{me2mesymmetricte}.
\EndIf
\EndFor
\EndFor
\State{\bf Downward pass:}
\For{$l=1 \to H$}
\For{all boxes $j$ on target tree level $l$ }
\State shift the TE of $j$'s parent to $j$ itself using the free-space center shift translation operator \eqref{le2lesymmetricte}.
\State collect the interaction list contribution using the source box to target box translation operator in Eq. \eqref{layerm2ltrans} with precomputed tables of the integrals \eqref{symmetricdetable}.
\EndFor
\EndFor
\State {\bf Evaluate Local Expansions:}
\For{each leaf node (childless box)}
\State evaluate the local expansion at each particle location using \eqref{taylorexplocxclfree2}.
\EndFor
\State {\bf Local Direct Interactions:}
\For{$i=1 \to N$ }
\State compute Eq. \eqref{generalsum} for target particle $i$ in the neighboring boxes using the precomputed table of $u_{\ell\ell'}^{\uparrow}(\bs r, \bs r')$.
\EndFor
\end{algorithmic}
\end{algorithm}
\section{Numerical results}
In this section, we present numerical results to demonstrate the performance of the two versions of the TE-FMM for acoustic wave scattering in layered media.
These algorithms are implemented based on the open-source FMM package DASHMM \cite{debuhr2016dashmm}. The numerical simulations are performed on a workstation with two Xeon E5-2699 v4 2.2 GHz processors (each with 22 cores) and 500 GB RAM, using the gcc compiler version 6.3. Two- and three-layer media are considered for the numerical tests. More specifically, interfaces are placed at $z_{0}=0$ and at $z_{0}=0,z_{1}=-2$ for the two- and three-layer cases, respectively. We first use an example with particles uniformly distributed inside a cubic domain for accuracy and efficiency tests. Then, more general distributions of particles in irregular domains are tested.

\textbf{Example 1 (Cubic domains): } Particles are uniformly distributed in cubes of size $1$ centered at $(0.5,0.5,1.0)$, $(0.5,0.5,-1.0)$ and $(0.5,0.5,-3.0)$. Let $\widetilde{\Phi}_{\ell} (\boldsymbol{r}_{\ell i})$ be the approximate values of $\Phi_{\ell }(\boldsymbol{r}_{\ell i})$ calculated by the TE-FMM. For the accuracy test, we put $N=8000$ particles in each box and define the $L^{2}$ error and maximum error as
\begin{equation}
Err_{2}^{\ell}:=\sqrt{\frac{\sum\limits_{i=1}^{N_{\ell}}|\Phi_{\ell }(\boldsymbol{r}_{\ell i})-\widetilde{\Phi}_{\ell}(\boldsymbol{r}_{\ell i})|^{2}}{\sum\limits_{i=1}^{N_{\ell}}|\Phi_{\ell}(\boldsymbol{r}_{\ell i})|^{2}}},\qquad Err_{max}^{\ell}:=\max\limits_{1\leq i\leq{N_{\ell}}} \frac{|\Phi_{\ell}(\boldsymbol{r}_{\ell i})-\widetilde{\Phi}_{\ell }(\boldsymbol{r}_{\ell i})|}{|\Phi_{\ell}(\boldsymbol{r}_{\ell i})|}.
\end{equation}
Convergence rates against $p$ are depicted in Figs. \ref{errorplot} and \ref{errorplot2}. Comparisons of the CPU time for the computation of the free-space components $\Phi_{\ell}^{free}$ and the scattering components in two and three layers are presented in Tables \ref{Table:ex1two}-\ref{Table:ex1three} and Tables \ref{Table:ex1two2}-\ref{Table:ex1three2}. We can see that the cost for the computation of the scattering components is about forty times that for the free-space components.
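For reference, these error measures amount to the following short computation (NumPy sketch; the array names are illustrative, with \texttt{phi\_ref} and \texttt{phi\_fmm} denoting the directly summed and TE-FMM values at the targets of one layer):
\begin{verbatim}
import numpy as np

def errors(phi_ref, phi_fmm):
    # relative L2 error and maximum relative error over the targets of a layer
    err2 = np.sqrt(np.sum(np.abs(phi_ref - phi_fmm)**2)
                   / np.sum(np.abs(phi_ref)**2))
    errmax = np.max(np.abs(phi_ref - phi_fmm) / np.abs(phi_ref))
    return err2, errmax
\end{verbatim}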
\begin{figure} \caption{Convergence of TEFMM-I against truncation number $p$.} \label{errorplot} \end{figure}\begin{figure} \caption{Convergence of TEFMM-II against truncation number $p$.} \label{errorplot2} \end{figure}\begin{table}[ptbhptbhptbh] \centering {\small \begin{tabular} [c]{|c|c|c|c|c|c|}\hline cores & $N$ & time for $\Phi_{0}^{free}$ & time for $\Phi_{00}^{\uparrow} +\Phi_{01}^{\uparrow}$ & time for $\Phi_{1}^{free}$ & time for $\Phi _{10}^{\downarrow}+\Phi_{11}^{\downarrow}$\\\hline \multirow{4}{*}{1} & 64000 & 4.11 & 120.11 & 4.00 & 135.57\\\cline{2-6} & 216000 & 25.65 & 902.09 & 25.73 & 1005.43\\\cline{2-6} & 512000 & 36.80 & 1120.73 & 36.58 & 1385.88\\\cline{2-6} & 1000000 & 61.84 & 1422.70 & 63.07 & 1539.35\\\hline \multirow{4}{*}{22} & 64000 & 0.25 & 7.08 & 0.23 & 7.85\\\cline{2-6} & 216000 & 1.54 & 52.79 & 1.55 & 59.04\\\cline{2-6} & 512000 & 2.21 & 65.88 & 2.18 & 73.65\\\cline{2-6} & 1000000 & 3.75 & 81.39 & 3.65 & 90.25\\\hline \multirow{4}{*}{44} & 64000 & 0.17 & 3.63 & 0.17 & 3.95\\\cline{2-6} & 216000 & 1.10 & 26.60 & 1.09 & 29.86\\\cline{2-6} & 512000 & 1.44 & 33.42 & 1.44 & 37.31\\\cline{2-6} & 1000000 & 1.92 & 41.34 & 1.88 & 45.84\\\hline \end{tabular} }\caption{CPU time for two layers using TEFMM-I with $p=3$.} \label{Table:ex1two} \end{table}\begin{table}[ptbhptbhptbhptbh] \centering {\small \begin{tabular} [c]{|c|c|c|c|c|c|}\hline cores & $N$ & time for $\Phi_{0}^{free}$ & time for $\sum\limits_{\ell ^{\prime}=0}^{2}\Phi_{0\ell^{\prime}}^{\uparrow}$ & time for $\Phi_{1}^{free}$ & time for $\sum\limits_{\ell^{\prime}=0}^{2}\Phi_{1\ell^{\prime}}^{\uparrow} $\\\hline \multirow{4}{*}{1} & 64000 & 3.94 & 109.61 & 3.97 & 134.37\\\cline{2-6} & 216000 & 25.60 & 824.29 & 25.76 & 1016.34\\\cline{2-6} & 512000 & 36.74 & 1034.80 & 36.51 & 1266.90\\\cline{2-6} & 1000000 & 62.67 & 1286.97 & 60.39 & 1551.25\\\hline \multirow{4}{*}{22} & 64000 & 0.25 & 6.47 & 0.24 & 7.90\\\cline{2-6} & 216000 & 1.55 & 48.26 & 1.55 & 59.32\\\cline{2-6} & 512000 & 2.21 & 60.36 & 2.19 & 73.81\\\cline{2-6} & 1000000 & 3.75 & 75.05 & 3.65 & 90.67\\\hline \multirow{4}{*}{44} & 64000 & 0.16 & 3.34 & 0.17 & 3.99\\\cline{2-6} & 216000 & 1.09 & 24.38 & 1.09 & 30.06\\\cline{2-6} & 512000 & 1.60 & 30.73 & 1.62 & 37.75\\\cline{2-6} & 1000000 & 1.93 & 38.19 & 1.87 & 46.08\\\hline \end{tabular} }\caption{CPU time for three layers using TEFMM-I with $p=3$.} \label{Table:ex1three} \end{table}\begin{table}[ptbhptbhptbhptbhptbh] \centering {\small \begin{tabular} [c]{|c|c|c|c|c|c|}\hline cores & $N$ & time for $\Phi_{0}^{free}$ & time for $\Phi_{00}^{\uparrow} +\Phi_{01}^{\uparrow}$ & time for $\Phi_{1}^{free}$ & time for $\Phi _{10}^{\downarrow}+\Phi_{11}^{\downarrow}$\\\hline \multirow{4}{*}{1} & 64000 & 7.65 & 120.27 & 7.37 & 117.20\\\cline{2-6} & 216000 & 60.76 & 610.60 & 59.30 & 663.51\\\cline{2-6} & 512000 & 60.80 & 1071.95 & 60.06 & 1049.38\\\cline{2-6} & 1000000 & 120.18 & 1153.79 & 118.06 & 1146.19\\\hline \multirow{4}{*}{22} & 64000 & 0.38 & 9.48 & 0.38 & 9.64\\\cline{2-6} & 216000 & 3.41 & 54.53 & 3.43 & 55.94\\\cline{2-6} & 512000 & 3.47 & 74.21 & 3.43 & 74.84\\\cline{2-6} & 1000000 & 6.49 & 79.62 & 6.37 & 82.53\\\hline \multirow{4}{*}{44} & 64000 & 0.23 & 8.35 & 0.21 & 8.72\\\cline{2-6} & 216000 & 1.74 & 47.62 & 1.76 & 47.09\\\cline{2-6} & 512000 & 1.75 & 66.88 & 1.73 & 65.65\\\cline{2-6} & 1000000 & 3.36 & 67.56 & 3.29 & 65.50\\\hline \end{tabular} }\caption{CPU time for two layers using TEFMM-II with $p=3$.} \label{Table:ex1two2} \end{table}\begin{table}[ptbhptbhptbhptbhptbhth] \centering 
{\small
\begin{tabular}
[c]{|c|c|c|c|c|c|}\hline
cores & $N$ & time for $\Phi_{0}^{free}$ & time for $\sum\limits_{\ell^{\prime}=0}^{2}\Phi_{0\ell^{\prime}}^{\uparrow}$ & time for $\Phi_{1}^{free}$ & time for $\sum\limits_{\ell^{\prime}=0}^{2}\Phi_{1\ell^{\prime}}^{\uparrow}$\\\hline
\multirow{4}{*}{1} & 64000 & 6.66 & 97.98 & 6.58 & 99.07\\\cline{2-6}
& 216000 & 60.42 & 651.24 & 61.14 & 657.92\\\cline{2-6}
& 512000 & 64.91 & 912.75 & 63.15 & 903.19\\\cline{2-6}
& 1000000 & 117.90 & 1101.38 & 116.92 & 1304.45\\\hline
\multirow{4}{*}{22} & 64000 & 0.38 & 9.64 & 0.38 & 9.69\\\cline{2-6}
& 216000 & 3.41 & 54.79 & 3.42 & 56.29\\\cline{2-6}
& 512000 & 3.47 & 75.98 & 3.41 & 75.91\\\cline{2-6}
& 1000000 & 6.49 & 81.98 & 6.37 & 84.57\\\hline
\multirow{4}{*}{44} & 64000 & 0.22 & 8.06 & 0.22 & 8.52\\\cline{2-6}
& 216000 & 1.77 & 47.69 & 1.80 & 46.63\\\cline{2-6}
& 512000 & 1.74 & 66.45 & 1.74 & 65.43\\\cline{2-6}
& 1000000 & 3.35 & 66.85 & 3.26 & 70.04\\\hline
\end{tabular}
}\caption{CPU time for three layers using TEFMM-II with $p=3$.}
\label{Table:ex1three2}
\end{table}

\textbf{Example 2 (Irregular domains): }In practical applications, objects of irregular shape are often encountered. Here, we give examples with particles located in irregular domains which are obtained by shifting the domain given by
\begin{equation}
r=0.5-a+\frac{a}{8}(35\cos^{4}\theta-30\cos^{2}\theta+3),
\end{equation}
with $a=0.1,0.15$ to new centers $(0,0,1)$ and $(0,0,-1)$, respectively (see Fig. \ref{expconfigure} (left) for an illustration). For the three-layer case, we use particles in similar domains centered at $(0,0,1)$, $(0,0,-1)$ and $(0,0,-3)$ with $a=0.1,0.15,0.05$, respectively (see Fig. \ref{expconfigure} (right)). All particles are generated by taking uniformly distributed particles in a larger cube and keeping those that fall within the corresponding irregular domains. The CPU times for the computation of $\{\Phi_{\ell} (\boldsymbol{r}_{\ell i})\}_{i=0}^{N_{\ell}}$ and $\{\Phi_{\ell} ^{free}(\boldsymbol{r}_{\ell i})\}_{i=0}^{N_{\ell}}$ are compared in Figs. \ref{TEFMM1perform} and \ref{TEFMM2perform}. They show that the new algorithms have $O(N)$ complexity.
\begin{figure}
\caption{Configuration of the two numerical examples.}
\label{expconfigure}
\end{figure}
\begin{figure}
\caption{CPU time for TEFMM-I.}
\label{TEFMM1perform}
\end{figure}
\begin{figure}
\caption{CPU time for TEFMM-II.}
\label{TEFMM2perform}
\end{figure}
\section{Conclusion}
In this paper, we have presented two Taylor-expansion based fast multipole methods for the efficient calculation of the discretized integral operator for the Helmholtz equation in layered media. These methods use the Taylor expansion of the layered-media Green's function for the low-rank representation of the far field for acoustic wave scattering governed by the Helmholtz equation. Compared with the spherical harmonic multipole expansion in the traditional FMM, the Taylor expansion requires $O(p^{3})$ terms for the low-rank representation of the far field of the layered Green's function, in contrast to $O(p^{2})$ for the spherical harmonic based multipole expansion FMM for the free-space Green's function. We addressed the main difficulty in developing the TE-FMM for layered media, namely the computation of up to $p$-th order derivatives of the layered Green's function, which are given in terms of oscillatory Sommerfeld integrals. We proposed two solutions to overcome this difficulty.
For the first TE-FMM\ based on non-symmetric derivatives, an efficient algorithm was developed using discrete complex images, which has been shown to be very accurate and efficient for the low-frequency Helmholtz equation. Meanwhile, for the second TE-FMM\ based on symmetric derivatives, pre-calculated tables are used for the translation operators. Both versions of the TE-FMM have comparable accuracy and, as our numerical examples show, both have an $O(N)$ time complexity similar to the free-space FMM; they can provide fast solutions for integral equations of the Helmholtz equation in layered media with low to middle frequencies. In comparison, the advantage of the first TE-FMM is the efficiency of computing the translation operator with the complex image approximations. However, the complex image approximation is sensitive to the parameters, and a rigorous mathematical theory for the approach of finding the discrete images is still needed. On the other hand, the second TE-FMM can be used for higher-order TE expansions; however, a large number of tables for the translation operators has to be pre-computed and stored. In future work, we will carry out error estimates of the TE-FMMs for layered media, which require an analysis of the Sommerfeld integral representations of the derivatives and of the complex image approximations.
\section{Appendix A}
\subsection{Two layers with sources in the top layer}
Let $L=1$ with the source in the top layer at $\boldsymbol{r}^{\prime }=(x^{\prime}, y^{\prime}, z^{\prime})$, i.e., $z^{\prime}>0$. Then, the domain Green's function has representation
\begin{equation} \label{greenspectraltwolayer3}
\begin{cases}
\widehat u_{0}(k_{x},k_{y}, z)=A_{0}\cosh(\ri k_{0z}z)+B_{0}\sinh (\ri k_{0z}z)+\frac{\ri e^{\ri(k_{0z}|z-z^{\prime}|-k_{x}x^{\prime} -k_{y}y^{\prime})}}{2k_{0z}}, & z>0,\\[7pt]
\widehat u_{1}(k_{x},k_{y}, z)=A_{1}\cosh(\ri k_{1z}z)+B_{1}\sinh (\ri k_{1z}z), & z<0,
\end{cases}
\end{equation}
or equivalently
\begin{equation} \label{greenspectraltwolayer4}
\begin{cases}
\widehat u_{0}(k_{x},k_{y}, z)=b_{0}e^{\ri k_{0z}z}+\frac{\ri e^{\ri(k_{0z} |z-z^{\prime}|-k_{x}x^{\prime}-k_{y}y^{\prime})}}{2k_{0z}}, & z>0,\\[7pt]
\widehat u_{1}(k_{x},k_{y}, z)=a_{1}e^{-\ri k_{1z}z}, & z<0,
\end{cases}
\end{equation}
where
\begin{equation}
\begin{split}
& b_{0}=\frac{A_{0}+B_{0}}{2},\quad a_{1}=\frac{A_{1}-B_{1}}{2}.
\end{split}
\end{equation}
Proceeding the recursion \eqref{recursion} gives coefficients
\begin{equation} \label{coefftwolayer3}
\begin{cases}
\displaystyle A_{0}=B_{0}=\frac{(k_{0}k_{0z}-k_{1}k_{1z})e^{\ri k_{0z} z^{\prime}}}{2(k_{0}k_{0z}+k_{1}k_{1z})}\frac{\ri e^{-\ri(k_{x}x^{\prime }+k_{y}y^{\prime})}}{k_{0z}},\\[10pt]
\displaystyle A_{1}=-B_{1}=\frac{k_{0}k_{0z}e^{\ri k_{0z}z^{\prime}}} {k_{0}k_{0z}+k_{1}k_{1z}}\frac{\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})} }{k_{0z}},
\end{cases}
\end{equation}
or alternatively
\begin{equation} \label{coefftwolayer4}
\begin{cases}
\displaystyle b_{0}=\frac{k_{0}k_{0z}-k_{1}k_{1z}}{2(k_{0}k_{0z}+k_{1}k_{1z} )}\frac{\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})}e^{\ri k_{0z}z^{\prime}} }{ k_{0z}},\\[10pt]
\displaystyle a_{1}=\frac{k_{0}k_{1z}}{k_{0}k_{0z}+k_{1}k_{1z}}\frac {\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})}e^{\ri k_{0z}z^{\prime}}}{ k_{1z}}.
\end{cases} \end{equation} Taking inverse Fourier transform on \eqref{greenspectraltwolayer4} with coefficients given by \eqref{coefftwolayer4}, we have \begin{equation} \label{densitytwolayer1}\sigma_{00}^{\uparrow\uparrow}(k_{\rho})=\frac {k_{0}k_{0z}-k_{1}k_{1z}}{k_{0}k_{0z}+k_{1}k_{1z}},\quad\sigma_{10} ^{\downarrow\uparrow}(k_{\rho})=\frac{2k_{0}k_{1z}}{k_{0}k_{0z}+k_{1}k_{1z}}. \end{equation} \subsection{Two layers with sources in the bottom layer} Let $L=1$ with source in the bottom layer at $\boldsymbol{r}^{\prime }=(x^{\prime}, y^{\prime}, z^{\prime})$, i.e., $z^{\prime}<0$. The domain Green's function has representation \begin{equation} \label{greenspectraltwolayer1} \begin{cases} \widehat u_{0}(k_{x},k_{y}, z)=A_{0}\cosh(\ri k_{0z}z)+B_{0}\sinh (\ri k_{0z}z), & z>0,\\[7pt] \widehat u_{1}(k_{x},k_{y}, z)=A_{1}\cosh(\ri k_{1z}z)+B_{1}\sinh (\ri k_{1z}z)+\frac{\ri e^{\ri(k_{1z}|z-z^{\prime}|-k_{x}x^{\prime} -k_{y}y^{\prime})}}{2k_{1z}}, & z<0, \end{cases} \end{equation} or equivalently \begin{equation} \label{greenspectraltwolayer2} \begin{cases} \widehat u_{0}(k_{x},k_{y}, z)=b_{0}e^{\ri k_{0z}z}, & z>0,\\[7pt] \widehat u_{1}(k_{x},k_{y}, z)=a_{1}e^{-\ri k_{1z}z}+\frac{\ri e^{\ri(k_{1z} |z-z^{\prime}|-k_{x}x^{\prime}-k_{y}y^{\prime})}}{2k_{1z}}, & z<0, \end{cases} \end{equation} where \begin{equation} \begin{split} & b_{0}=\frac{A_{0}+B_{0}}{2},\quad a_{1}=\frac{A_{1}-B_{1}}{2}. \end{split} \end{equation} Proceeding the recursion \eqref{recursion} gives coefficients \begin{equation} \label{coefftwolayer1} \begin{cases} \displaystyle A_{0}=B_{0}=\frac{e^{-\ri k_{1z}z^{\prime}}k_{1}k_{1z}} {k_{0}k_{0z}+k_{1}k_{1z}}\frac{\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})}}{ k_{1z}},\\[10pt] \displaystyle A_{1}=-B_{1}=\frac{k_{1}k_{1z}-k_{0}k_{0z}}{2(k_{0}k_{0z} +k_{1}k_{1z})}\frac{\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})} e^{-\ri k_{1z}z^{\prime}}}{k_{1z}}, \end{cases} \end{equation} or alternatively \begin{equation} \label{coefftwolayer2} \begin{cases} \displaystyle b_{0}=\frac{k_{1}k_{0z}}{k_{0}k_{0z}+k_{1}k_{1z}}\frac {\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})}e^{-\ri k_{1z}z^{\prime}}}{ k_{0z}},\\[10pt] \displaystyle a_{1}=\frac{k_{1}k_{1z}-k_{0}k_{0z}}{2(k_{0}k_{0z}+k_{1}k_{1z} )}\frac{\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})}e^{-\ri k_{1z}z^{\prime} }}{ k_{1z}}. \end{cases} \end{equation} Taking inverse Fourier transform on \eqref{greenspectraltwolayer2} with coefficients given by \eqref{coefftwolayer2}, we have \begin{equation} \label{densitytwolayer2}\sigma_{01}^{\uparrow\downarrow}(k_{\rho} )=\frac{2k_{1}k_{0z}}{k_{0}k_{0z}+k_{1}k_{1z}},\quad\sigma_{11}^{\downarrow \downarrow}(k_{\rho})=\frac{k_{1}k_{1z}-k_{0}k_{0z}}{k_{0}k_{0z}+k_{1}k_{1z}}. \end{equation} \subsection{Three layers with sources in the top layer} Let $L=2$ with interfaces at $z=0$ and $z=-d<0$. Assume that the source is in the first layer at $(x^{\prime}, y^{\prime}, z^{\prime})$, i.e., $z^{\prime }>0$. 
The domain Green's function has representation \begin{equation} \label{greenspectral1} \begin{cases} \displaystyle\widehat u_{0}(k_{x},k_{y}, z)=A_{0}\cosh(\ri k_{0z}z)+B_{0} \sinh(\ri k_{0z}z)+\frac{\ri e^{\ri(k_{0z}|z-z^{\prime}|-k_{x}x^{\prime} -k_{y}y^{\prime})}}{2k_{0z}}, & 0<z<z^{\prime},\\[7pt] \displaystyle\widehat u_{1}(k_{x},k_{y}, z)=A_{1}\cosh(\ri k_{1z} (z+d))+B_{1}\sinh(\ri k_{1z}(z+d)), & -d<z<0,\\[7pt] \displaystyle\widehat u_{2}(k_{x},k_{y}, z)=A_{2}\cosh(\ri k_{2z}z)+B_{2} \sinh(\ri k_{2z}z), & z<-d, \end{cases} \end{equation} or equivalently \begin{equation} \label{greenspectral2} \begin{cases} \displaystyle\widehat u_{0}(k_{x},k_{y}, z)=b_{0}e^{\ri k_{0z}z} +\frac{\ri e^{\ri(k_{0z}|z-z^{\prime}|-k_{x}x^{\prime}-k_{y}y^{\prime})} }{2k_{0z}}, & z>0,\\[7pt] \displaystyle\widehat u_{1}(k_{x},k_{y}, z)=a_{1}e^{-\ri k_{1z}(z+d)} +b_{1}e^{\ri k_{1z}(z+d)}, & -d<z<0,\\[7pt] \displaystyle\widehat u_{2}(k_{x},k_{y}, z)=a_{2}e^{-\ri k_{2z}z}, & z<-d, \end{cases} \end{equation} where \[ b_{0}=\frac{A_{0}+B_{0}}{2},\quad a_{1}=\frac{A_{1}-B_{1}}{2},\quad b_{1}=\frac{A_{1}+B_{1}}{2},\quad a_{2}=\frac{A_{2}-B_{2}}{2}. \] Again proceeding the recursion \eqref{recursion} gives coefficients \begin{equation} \begin{cases} \displaystyle A_{0}=B_{0}=\frac{k_{0}k_{0z}\kappa_{11}+\ri k_{1}k_{1z} \kappa_{12}}{2(k_{0}k_{0z}\kappa_{11}-\ri k_{1}k_{1z}\kappa_{12})} \frac{\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})} e^{\ri k_{0z} z^{\prime}} }{ k_{0z}},\\[7pt] \displaystyle A_{1}=\frac{k_{0}k_{1}k_{0z} k_{1z} }{k_{0}k_{0z}\kappa _{11}-\ri k_{1}k_{1z}\kappa_{12}}\frac{\ri e^{-\ri(k_{x}x^{\prime} +k_{y}y^{\prime})}e^{ik_{0z} z^{\prime}}}{ k_{0z}},\\[7pt] \displaystyle B_{1}=\frac{-k_{0}k_{2} k_{0z} k_{2z}}{k_{0}k_{0z}\kappa _{11}-\ri k_{1}k_{1z}\kappa_{12}}\frac{\ri e^{-\ri(k_{x}x^{\prime} +k_{y}y^{\prime})}e^{ik_{0z} z^{\prime}}}{ k_{0z}},\\[7pt] \displaystyle A_{2}=-B_{2}=\frac{ k_{0} k_{1}k_{0z} k_{1z}e^{-\ri d k_{2z} } }{k_{0}k_{0z}\kappa_{11}-\ri k_{1}k_{1z}\kappa_{12}}\frac{\ri e^{-\ri(k_{x} x^{\prime}+k_{y}y^{\prime})}e^{\ri k_{0z} z^{\prime}}}{k_{0z}}, \end{cases} \end{equation} and \[ \begin{cases} \displaystyle b_{0}=\frac{k_{0}k_{0z}\kappa_{11}+\ri k_{1}k_{1z}\kappa_{12} }{2(k_{0}k_{0z}\kappa_{11}-\ri k_{1}k_{1z}\kappa_{12})}\frac{\ri e^{-\ri(k_{x} x^{\prime}+k_{y}y^{\prime})} e^{\ri k_{0z} z^{\prime}}}{ k_{0z}},\\[7pt] \displaystyle a_{1}=\frac{k_{0}k_{1z}(k_{1} k_{1z}+ k_{2} k_{2z}) } {2(k_{0}k_{0z}\kappa_{11}-\ri k_{1}k_{1z}\kappa_{12})}\frac{\ri e^{-\ri(k_{x} x^{\prime}+k_{y}y^{\prime})}e^{\ri k_{0z} z^{\prime}}}{ k_{1z}},\\[7pt] \displaystyle b_{1}=\frac{k_{0}k_{1z}(k_{1} k_{1z}- k_{2} k_{2z}) } {2(k_{0}k_{0z}\kappa_{11}-\ri k_{1}k_{1z}\kappa_{12})}\frac{\ri e^{-\ri(k_{x} x^{\prime}+k_{y}y^{\prime})}e^{\ri k_{0z} z^{\prime}}}{ k_{1z}},\\[7pt] \displaystyle a_{2}=\frac{ k_{0} k_{1} k_{1z}k_{2z}e^{-\ri d k_{2z} }} {k_{0}k_{0z}\kappa_{11}-\ri k_{1}k_{1z}\kappa_{12}}\frac{\ri e^{-\ri(k_{x} x^{\prime}+k_{y}y^{\prime})}e^{\ri k_{0z} z^{\prime}}}{k_{2z}}, \end{cases} \] where \begin{equation} \label{kappa1112} \begin{split} & \kappa_{11}=\frac{k_{1}k_{1z}-k_{2}k_{2z}}{2}e^{\ri 2dk_{1z}}+\frac {k_{1}k_{1z}+k_{2}k_{2z}}{2},\\ & \kappa_{12}=\ri\Big(\frac{k_{2}k_{2z}-k_{1}k_{1z}}{2}e^{\ri 2dk_{1z}} +\frac{k_{1}k_{1z}+k_{2}k_{2z}}{2}\Big). 
\end{split}
\end{equation}
Substituting into \eqref{greenspectral2} and applying the inverse Fourier transform, we have
\begin{equation} \label{densitythreelayer1}
\begin{cases}
\displaystyle\sigma_{00}^{\uparrow\uparrow}(k_{\rho})=\frac{k_{0}k_{0z} \kappa_{11}+\ri k_{1}k_{1z}\kappa_{12}}{k_{0}k_{0z}\kappa_{11}-\ri k_{1} k_{1z}\kappa_{12}},\\[8pt]
\displaystyle\sigma_{10}^{\uparrow\uparrow}(k_{\rho})=\frac{k_{0}k_{1z}(k_{1} k_{1z}- k_{2} k_{2z})e^{\ri dk_{1z}}}{k_{0}k_{0z}\kappa_{11}-\ri k_{1} k_{1z}\kappa_{12}},\\[8pt]
\displaystyle\sigma_{10}^{\downarrow\uparrow}(k_{\rho})=\frac{k_{0} k_{1z}(k_{1} k_{1z}+ k_{2} k_{2z})e^{\ri dk_{1z}}}{k_{0}k_{0z}\kappa _{11}-\ri k_{1}k_{1z}\kappa_{12}},\\[8pt]
\displaystyle\sigma_{20}^{\downarrow\uparrow}(k_{\rho})=\frac{2k_{0} k_{1} k_{1z}k_{2z}e^{\ri dk_{1z}} }{k_{0}k_{0z}\kappa_{11}-\ri k_{1} k_{1z}\kappa_{12}}.
\end{cases}
\end{equation}
\subsection{Three layers with sources in the middle layer}
Let $L=2$ with interfaces at $z=0$ and $z=-d<0$. Assume that the source is in the middle layer at $(x^{\prime}, y^{\prime}, z^{\prime})$, i.e., $-d<z^{\prime}<0$. The domain Green's function has representation
\begin{equation} \label{greenspectralmsour1}
\begin{cases}
\displaystyle \widehat u_{0}=A_{0}\cosh(\ri k_{0z}z)+B_{0}\sinh(\ri k_{0z} z), & z>0,\\[7pt]
\displaystyle \widehat u_{1}=A_{1}\cosh(\ri k_{1z}(z+d))+B_{1}\sinh (\ri k_{1z}(z+d))+\frac{\ri e^{\ri(k_{1z}|z-z^{\prime}|-k_{x}x^{\prime} -k_{y}y^{\prime})}}{2k_{1z}}, & -d<z<z^{\prime},\\[7pt]
\displaystyle \widehat u_{2}=A_{2}\cosh(\ri k_{2z}z)+B_{2}\sinh(\ri k_{2z} z), & z<-d,
\end{cases}
\end{equation}
or equivalently
\begin{equation} \label{greenspectralmsour2}
\begin{cases}
\displaystyle \widehat u_{0}=b_{0}e^{\ri k_{0z}z}, & z>0,\\[7pt]
\displaystyle \widehat u_{1}=a_{1}e^{-\ri k_{1z}(z+d)}+b_{1}e^{\ri k_{1z} (z+d)}+\frac{\ri e^{\ri(k_{1z}|z-z^{\prime}|-k_{x}x^{\prime}-k_{y}y^{\prime} )}}{2k_{1z}}, & -d<z<0,\\[7pt]
\displaystyle \widehat u_{2}=a_{2}e^{-\ri k_{2z}z}, & z<-d,
\end{cases}
\end{equation}
where
\[
b_{0}=\frac{A_{0}+B_{0}}{2},\quad a_{1}=\frac{A_{1}-B_{1}}{2},\quad b_{1}=\frac{A_{1}+B_{1}}{2},\quad a_{2}=\frac{A_{2}-B_{2}}{2}.
\] Again proceeding the recursion \eqref{recursion} gives coefficients \begin{equation} \begin{cases} \displaystyle A_{0}=B_{0}=\frac{k_{1}k_{1z}\kappa_{23}}{k_{0}k_{0z}\kappa _{11}-\ri k_{1}k_{1z}\kappa_{12}}\frac{\ri e^{-\ri(k_{x}x^{\prime} +k_{y}y^{\prime})}}{ k_{1z}},\\[7pt] \displaystyle A_{1}=\frac{(e^{-\ri k_{1z}z^{\prime}}k_{1}k_{1z}(k_{1} k_{1z}-k_{0}k_{0z})+e^{\ri k_{1z}(d+z^{\prime})}\kappa_{21})e^{\ri dk_{1z}} }{k_{0}k_{0z}\kappa_{11}-\ri k_{1}k_{1z}\kappa_{12}}\frac{\ri e^{-\ri(k_{x} x^{\prime}+k_{y}y^{\prime})}}{k_{1z}},\\[7pt] \displaystyle B_{1}=\frac{(e^{-\ri k_{1z}z^{\prime}}k_{2}k_{2z}(k_{0} k_{0z}-k_{1}k_{1z})+e^{\ri k_{1z}(d+z^{\prime})}\kappa_{21}^{\prime })e^{\ri dk_{1z}}}{k_{0}k_{0z}\kappa_{11}-\ri k_{1}k_{1z}\kappa_{12}} \frac{\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})}}{k_{1z}},\\[7pt] \displaystyle A_{2}=-B_{2}=\frac{ k_{1} k_{1z}\kappa_{22}e^{\ri d(k_{1z} -k_{2z})}}{k_{0}k_{0z}\kappa_{11}-\ri k_{1}k_{1z}\kappa_{12}}\frac {\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})}}{k_{1z}}, \end{cases} \end{equation} where $\kappa_{11}, \kappa_{12}$ are defined in \eqref{kappa1112} and \[ \begin{split} \kappa_{21} & =(k_{1}k_{1z}-k_{2}k_{2z})\Big(\frac{k_{1} k_{1z}-k_{0} k_{0z}}{2}e^{\ri dk_{1z}}+\frac{k_{0} k_{0z}+k_{1} k_{1z}}{2}e^{-\ri d k_{1z} }\Big),\\ \kappa_{21}^{\prime} & =(k_{1}k_{1z}-k_{2}k_{2z})\Big(\frac{k_{0} k_{0z}-k_{1} k_{1z}}{2}e^{\ri dk_{1z}}+\frac{k_{0} k_{0z}+k_{1} k_{1z}} {2}e^{-\ri d k_{1z}}\Big),\\ \kappa_{22} & =\frac{k_{1} k_{1z}+k_{0} k_{0z}}{2}e^{\ri k_{1z}z^{\prime} }+\frac{k_{1} k_{1z}-k_{0} k_{0z}}{2}e^{-\ri k_{1z}z^{\prime}},\\ \kappa_{23} & =\frac{k_{1}k_{1z}- k_{2}k_{2z}}{2}e^{\ri k_{1z}(2d+z^{\prime })}+\frac{k_{1}k_{1z}+ k_{2}k_{2z}}{2}e^{-\ri k_{1z}z^{\prime}}. \end{split} \] Noting that \begin{equation} \kappa_{21}+\kappa_{22}=(k_{0}k_{0z}+k_{1}k_{1z})e^{\ri k_{1z}z^{\prime} },\quad\kappa_{21}-\kappa_{22}=(k_{0}k_{0z}-k_{1}k_{1z})e^{-\ri k_{1z} z^{\prime}}, \end{equation} we further have \begin{equation} \begin{cases} \displaystyle b_{0}=\frac{\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})}}{ k_{0z}}\frac{k_{1}k_{0z}\kappa_{23}}{k_{0}k_{0z}\kappa_{11}-\ri k_{1} k_{1z}\kappa_{12}},\\[7pt] \displaystyle a_{1}=\frac{\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})}}{ 2k_{1z}}\frac{(k_{1}k_{1z}-k_{0}k_{0z})\kappa_{23}e^{\ri dk_{1z}}}{k_{0} k_{0z}\kappa_{11}-\ri k_{1}k_{1z}\kappa_{12}},\\[7pt] \displaystyle b_{1}=\frac{\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})} }{2k_{1z}}\frac{(k_{1}k_{1z}-k_{2}k_{2z})\kappa_{22}e^{\ri dk_{1z}}} {k_{0}k_{0z}\kappa_{11}-\ri k_{1}k_{1z}\kappa_{12}},\\[7pt] \displaystyle a_{2}=\frac{\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})} }{k_{2z}}\frac{ k_{1} k_{2z}\kappa_{22}e^{\ri dk_{1z}}e^{-\ri d k_{2z} } }{k_{0}k_{0z}\kappa_{11}-\ri k_{1}k_{1z}\kappa_{12}}. 
\end{cases}
\end{equation}
Substituting into \eqref{greenspectralmsour2} and applying the inverse Fourier transform, we have
\begin{equation} \label{densitythreelayer2}\hspace{-7pt}
\begin{cases}
\displaystyle\{\sigma_{01}^{\uparrow\uparrow}(k_{\rho}), \sigma_{01} ^{\uparrow\downarrow}(k_{\rho})\}=\frac{k_{1}k_{0z}}{k_{0}k_{0z}\kappa _{11}-\ri k_{1}k_{1z}\kappa_{12}}\big\{(k_{1}k_{1z}-k_{2}k_{2z})e^{\ri dk_{1z} }, k_{1}k_{1z}+k_{2}k_{2z}\},\\[8pt]
\displaystyle\{\sigma_{11}^{\uparrow\uparrow}(k_{\rho}), \sigma_{11} ^{\uparrow\downarrow}(k_{\rho})\}=\frac{k_{1}k_{1z}-k_{2}k_{2z}}{k_{0} k_{0z}\kappa_{11}-\ri k_{1}k_{1z}\kappa_{12}}\Big\{\frac{k_{1}k_{1z} +k_{0}k_{0z}}{2}, \frac{k_{1}k_{1z}-k_{0}k_{0z}}{2}e^{\ri dk_{1z} }\Big\},\\[8pt]
\displaystyle\{\sigma_{11}^{\downarrow\uparrow}(k_{\rho}), \sigma _{11}^{\downarrow\downarrow}(k_{\rho})\}=\frac{(k_{1}k_{1z}-k_{0} k_{0z})e^{\ri dk_{1z}}}{k_{0}k_{0z}\kappa_{11}-\ri k_{1}k_{1z}\kappa_{12} }\Big\{\frac{k_{1}k_{1z}-k_{2}k_{2z}}{2}e^{\ri dk_{1z}}, \frac{k_{1} k_{1z}+k_{2}k_{2z}}{2}\Big\},\\[8pt]
\displaystyle\{\sigma_{21}^{\downarrow\uparrow}(k_{\rho}), \sigma _{21}^{\downarrow\downarrow}(k_{\rho})\}=\frac{k_{1} k_{2z}e^{-\ri dk_{2z}} }{k_{0}k_{0z}\kappa_{11}-\ri k_{1}k_{1z}\kappa_{12}}\Big\{k_{1}k_{1z} +k_{0}k_{0z}, (k_{1}k_{1z}-k_{0}k_{0z})e^{\ri dk_{1z}}\Big\}.
\end{cases}
\end{equation}
\subsection{Three layers with sources in the bottom layer}
Let $L=2$ with interfaces at $z=0$ and $z=-d<0$. Assume that the source is in the bottom layer at $(x^{\prime}, y^{\prime}, z^{\prime})$, i.e., $z^{\prime }<-d$. The domain Green's function has representation
\begin{equation} \label{greenspectralbsour1}
\begin{cases}
\displaystyle \widehat u_{0}(k_{x},k_{y}, z)=A_{0}\cosh(\ri k_{0z} z)+B_{0}\sinh(\ri k_{0z}z), & z>0,\\[7pt]
\displaystyle \widehat u_{1}(k_{x},k_{y}, z)=A_{1}\cosh(\ri k_{1z} (z+d))+B_{1}\sinh(\ri k_{1z}(z+d)), & -d<z<0,\\[7pt]
\displaystyle \widehat u_{2}(k_{x},k_{y}, z)=A_{2}\cosh(\ri k_{2z} z)+B_{2}\sinh(\ri k_{2z}z), & z<-d,
\end{cases}
\end{equation}
or equivalently
\begin{equation} \label{greenspectralbsour2}
\begin{cases}
\displaystyle \widehat u_{0}(k_{x},k_{y}, z)=b_{0}e^{\ri k_{0z}z}, & z>0,\\[7pt]
\displaystyle \widehat u_{1}(k_{x},k_{y}, z)=a_{1}e^{-\ri k_{1z}(z+d)} +b_{1}e^{\ri k_{1z}(z+d)}, & -d<z<0,\\[7pt]
\displaystyle \widehat u_{2}(k_{x},k_{y}, z)=a_{2}e^{-\ri k_{2z}z} +\frac{\ri e^{\ri(k_{2z}|z-z^{\prime}|-k_{x}x^{\prime}-k_{y}y^{\prime})} }{2k_{2z}}, & z<-d,
\end{cases}
\end{equation}
where
\[
b_{0}=\frac{A_{0}+B_{0}}{2},\quad a_{1}=\frac{A_{1}-B_{1}}{2},\quad b_{1}=\frac{A_{1}+B_{1}}{2},\quad a_{2}=\frac{A_{2}-B_{2}}{2}.
\]
Then the solution can be calculated via \eqref{coefficientsolver}, i.e.,
\begin{equation}
\begin{cases}
\displaystyle A_{0}=B_{0}=\frac{k_{1}k_{1z} k_{2}k_{2z}e^{\ri dk_{1z}}} {k_{2}k_{2z}\kappa_{31}-\ri k_{1}k_{1z}\kappa_{32}}\frac{\ri e^{-\ri(k_{x} x^{\prime}+k_{y}y^{\prime})}e^{-\ri k_{2z}(d+z^{\prime})}}{ k_{2z}},\\[7pt]
\displaystyle A_{1}=\frac{k_{2}k_{2z}\kappa_{31}e^{-\ri k_{2z}(d+z^{\prime})} }{k_{2}k_{2z}\kappa_{31}-\ri k_{1}k_{1z}\kappa_{32}}\frac{\ri e^{-\ri(k_{x} x^{\prime}+k_{y}y^{\prime})}}{k_{2z}},\\[7pt]
\displaystyle B_{1}=\frac{-\ri k_{2}k_{2z}\kappa_{32}e^{-\ri k_{2z} (d+z^{\prime})}}{k_{2}k_{2z}\kappa_{31}-\ri k_{1}k_{1z}\kappa_{32}} \frac{\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})}}{k_{2z}},\\[7pt]
\displaystyle A_{2}=-B_{2}=\frac{k_{2}k_{2z}\kappa_{31}+\ri k_{1}k_{1z} \kappa_{32}}{2 (k_{2}k_{2z}\kappa_{31}-\ri k_{1}k_{1z}\kappa_{32})} \frac{\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})}e^{-\ri k_{2z}(d+z)} }{k_{2z}},
\end{cases}
\end{equation}
where
\begin{equation}
\begin{split}
& \kappa_{31}=\frac{k_{1} k_{1z} -k_{0} k_{0z}}{2}e^{\ri 2dk_{1z}} +\frac{k_{1} k_{1z} + k_{0} k_{0z}}{2},\\
& \kappa_{32}=\ri\Big(\frac{k_{0} k_{0z} -k_{1} k_{1z}}{2}e^{\ri 2dk_{1z}} +\frac{k_{0} k_{0z} +k_{1} k_{1z}}{2}\Big).
\end{split}
\end{equation}
Then
\begin{equation}
\begin{cases}
\displaystyle b_{0}=\frac{k_{1}k_{1z} k_{2}k_{2z}e^{\ri dk_{1z}}}{k_{2} k_{2z}\kappa_{31}+\ri k_{1}k_{1z}\kappa_{32}}\frac{\ri e^{-\ri(k_{x}x^{\prime }+k_{y}y^{\prime})}e^{-\ri k_{2z}(d+z^{\prime})}}{ k_{2z}},\\[7pt]
\displaystyle a_{1}=\frac{\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})} }{k_{2z}}\frac{k_{2}k_{2z}(k_{1}k_{1z}-k_{0}k_{0z})e^{\ri 2dk_{1z} }e^{-\ri k_{2z}(d+z^{\prime})}}{2(k_{2}k_{2z}\kappa_{31}+\ri k_{1}k_{1z} \kappa_{32})},\\[7pt]
\displaystyle b_{1}=\frac{\ri e^{-\ri(k_{x}x^{\prime}+k_{y}y^{\prime})}}{ k_{2z}}\frac{k_{2}k_{2z}(k_{1}k_{1z}+k_{0}k_{0z})e^{-\ri k_{2z}(d+z^{\prime} )}}{2 (k_{2}k_{2z}\kappa_{31}+\ri k_{1}k_{1z}\kappa_{32})},\\[7pt]
\displaystyle a_{2}=\frac{k_{2}k_{2z}\kappa_{31}+\ri k_{1}k_{1z}\kappa_{32}}{2 (k_{2}k_{2z}\kappa_{31}-\ri k_{1}k_{1z}\kappa_{32})}\frac{\ri e^{-\ri(k_{x} x^{\prime}+k_{y}y^{\prime})}e^{-\ri dk_{2z}}}{k_{2z}}.
\end{cases}
\end{equation}
Substituting into \eqref{greenspectralbsour2} and applying the inverse Fourier transform, we have
\begin{equation} \label{densitythreelayer3}
\begin{cases}
\displaystyle\sigma_{02}^{\uparrow\downarrow}(k_{\rho})=\frac{2k_{1} k_{1z}k_{2}k_{0z}e^{\ri dk_{1z}}}{k_{2}k_{2z}\kappa_{31}-\ri k_{1}k_{1z} \kappa_{32}},\\[8pt]
\displaystyle\sigma_{12}^{\uparrow\downarrow}(k_{\rho})=\frac{k_{2} k_{1z}(k_{1}k_{1z}+k_{0}k_{0z})}{k_{2}k_{2z}\kappa_{31}-\ri k_{1}k_{1z} \kappa_{32}},\\[8pt]
\displaystyle\sigma_{12}^{\downarrow\downarrow}(k_{\rho})=\frac{k_{2} k_{1z}(k_{1}k_{1z}-k_{0}k_{0z})e^{\ri 2dk_{1z}}}{k_{2}k_{2z}\kappa _{31}-\ri k_{1}k_{1z}\kappa_{32}},\\[8pt]
\displaystyle\sigma_{22}^{\downarrow\downarrow}(k_{\rho})=\frac{k_{2} k_{2z}\kappa_{31}+\ri k_{1}k_{1z}\kappa_{32}}{k_{2} k_{2z}\kappa_{31}-\ri k_{1}k_{1z}\kappa_{32}}.
\end{cases}
\end{equation}
It is worth pointing out that
\[
k_{2}k_{2z}\kappa_{31}-\ri k_{1}k_{1z}\kappa_{32}=k_{0}k_{0z}\kappa _{11}-\ri k_{1}k_{1z}\kappa_{12}.
\]
\end{document}
arXiv
Clay and fibers: Energy efficiency in buildings between tradition and innovation
T. Cardinale* | C. Sposato | A. Feo | P. De Fazio
ENEA - C.R. Trisaia, Div. DTE-BBC – ss 106 jonica km. 419, 500 Rotondella, Italy
[email protected]
https://doi.org/10.18280/mmep.050308
Building construction technology using clay soil in various forms, already known since ancient times, presents a great potential to regulate indoor humidity and to reduce the indirect impact of the construction sector on the environment and on energy consumption. Extensive studies have also been done on the effects of natural fibers on the mechanical and physical behaviour of composite materials in terms of strength, energy efficiency and impact resistance. This work is focused on some natural fiber composites made from different mixtures containing clay soil with different percentages of jute, straw and basalt fibers, in order to determine the ideal mixture of clay and fibers providing the optimum values of thermal inertia, mechanical performance and shrinkage, able to improve the energy efficiency of buildings. The mechanical, physical and thermal properties of some specimens have been investigated. The obtained results show an improved mechanical strength and a better thermal conductivity of the clay composite material.
Keywords: adobe bricks, biobased materials, mechanical strength, natural fibers, thermal conductivity
1. Introduction
Building with low-input materials from renewable resources is one of the key issues for the EU; for this reason several ambitious actions have been implemented to reduce the impact of the construction sector on energy consumption and, more generally, on the environment. The promotion of sustainable development has led to a growing interest in "alternative" building materials such as crude earth and biomass aggregates. Clay soil is one of the most common materials, and the so-called adobe bricks made from loam, clay and sand were widely used in the past, mostly for housing construction. There is evidence of such constructions from as early as the Neolithic period in areas like Mesopotamia, Anatolia and the Levant [1]. Nowadays crude earth is still the most widespread building material in the world because it has advantages in the energy and climate fields: it is available in large quantities, cheap and easy to work, it is a totally recyclable resource and it requires very little energy for manufacturing as well as for transport [2]. Other important benefits, as observed in raw earth constructions left by previous generations, are the high thermal inertia and humidity regulation properties, which improve thermal performance and provide a healthy environment within the buildings [3]. Studying the hygro-thermal properties of these bio-based materials is an essential step in evaluating their impact, not only on energy consumption for heating, but also on indoor comfort, which strongly depends on heat and moisture transfer [4]. In recent years the use of these types of local resources combined with natural fibers has contributed to creating new low-cost, environmentally friendly materials, characterized by better electrical resistance, good mechanical properties, good thermal and acoustic insulating properties, as well as higher resistance to fracture [5].
Natural fiber composites could genuinely substitute for synthetic fiber reinforced composites as structural or semi-structural components, especially in lightweight applications, with many economic and environmental benefits. The interest of researchers [6] in natural fiber composites is principally due to the wish to avoid harmful effects on the environment, and is also connected to their specific properties. They are biodegradable, inexpensive, affordable, green, accessible and easily available in nature, from sources such as the coconut tree, banana tree, cotton, flax, hemp, straw, jute, etc. [7]. Among natural fibers, the lignocellulosic ones, containing cellulose, hemicellulose, lignin and pectins with a small amount of extractives (the relative amounts vary greatly between different species of plants, depending on their origin), are well characterized in terms of composition and mechanical properties [8]. Basalt fibers, a natural inorganic material, also represent a very promising reinforcing agent for these building materials [9-10]. In the last decades, extensive studies have been done especially on the effects of natural fibers on the mechanical and physical behaviour of cementitious materials in terms of strength, energy efficiency, and impact resistance [11-12]. The inclusion of short discrete fibres in concrete, mortar and/or cement paste can largely enhance some of their engineering properties, such as fracture toughness, tensile strength, flexural strength, and resistance to fatigue, impact, and thermal shock [13]. The use of natural fibers to produce high-quality, low-cost and sustainable fiber-reinforced materials is increasing. The mechanical properties of composite materials are strongly influenced by the kind of fiber and its properties [14]. Compared with conventional inorganic fillers such as glass fibers and carbon fibers, natural fibers provide additional advantages: low-cost production, less energy consumption, flexibility during processing, less resulting machine wear, low density and relatively high tensile and flexural modulus are the main reasons behind the use of cellulose fibers as a constituent of natural fiber composites [15-16]. The renewable and biodegradable characteristics of natural fibers facilitate their ultimate disposal by composting or incineration, options not possible with most industrial fibers [17]. Interventions on the building envelope represent an energy- and cost-effective solution, and also play an important role in achieving better conditions for the users, as far as thermal, visual and acoustic comfort as well as indoor air quality are concerned [18-19]. A low-carbon construction process would therefore be an economic and ecological challenge with multiple benefits. In this work we have produced a clay-sand composite with natural fillers, such as basalt, jute or straw, in different percentages, in order to provide building materials with good physical, mechanical and thermal properties.
2. Materials and Methods
The materials used for the specimens, made as adobe bricks, are clay, sand, water and fillers such as basalt, jute and straw.
Clay is a finely-grained soil material, used as a binder. Its major components are (wt%): SiO2 (57.29%), Al2O3 (12.79%), CaO (18.48%). The chemical composition of the clay used is reported in Table 1, and its particle size distribution is shown in Figure 1.
Table 1. Chemical composition of clay (% w/w; columns: SiO2, Al2O3, CaO, Fe2O3, K2O, Na2O — values not reproduced here)
Figure 1. Size distribution of clay particles
The XRD (X-ray powder diffraction) analysis (Figure 2) shows the presence of crystalline phases such as quartz, dolomite, calcite, plagioclase, illite, chlorite and kaolinite.
Figure 2. XRD patterns of clay
In this work, tap water is added to fine silica sand in order to prepare specimens with different clay contents (up to 50% w/w); the particle size distribution of the sand was measured according to UNI EN 933-1:2012 and is shown in Figure 3.
Figure 3. Size distribution of sand particles
The straw fillers used for this research have been obtained from plants of durum wheat (Triticum durum Desf.), characterized by medium height and late production. Straw is a very good and renewable thermal insulation material [20], cheap and easy to obtain. The straw used in this research was produced at the end of June 2015, in the form of compacted bales of approximate dimensions, stored in a dry environment, cut using a knife mill with a 2 mm square grid and finally characterized by ASTM sieves (Figure 4, Figure 5).
Figure 4. Size distribution of straw fiber particles
Figure 5. Fillers used for the mixture; a) basalt, b) jute, c) straw
The jute fibers were obtained by manually cutting a canvas of 1 x 1 m. The length of the resulting fibers was 3 (± 0.5) cm with a thickness of about 1 mm. Basalt fibers were provided by the Chinese company HG GBF. The chopped fibers used in this work were 24 mm in length and 12 µm in thickness (fiber code J01YF6) (Figure 5). The bulk densities of the straw, jute and basalt fibers were determined by weighing a known volume of material; the true density was determined after soaking the fibers overnight in pure water. The filler densities are reported in Table 2.
Table 2. Densities of the used fibers (columns: bulk density [kg/m3], apparent density [kg/m3], true density [kg/m3] — values not reproduced here)
2.1 Specimen preparation
In this work two different specimen geometries were prepared: the first are 160 x 40 x 40 mm clay bricks, used for the mechanical tests, and the second are 300 x 300 x 20 mm specimens used to investigate the thermal properties of the resulting materials. In the first part of this work the influence of the clay content was studied; for this purpose, 160 x 40 x 40 mm specimens with the mixture component amounts reported in Table 3 were prepared.
Table 3. Composition of the clay/sand mixtures (columns: sand [g], clay [g], water [g], density [kg/m3] — values not reproduced here)
The amount of clay, calculated as a percentage of the dry raw material, varied between 10 and 50% (C10 - C50). A water/solid ratio of 0.25 is used in order to obtain an appropriate workability of the mixture. In a typical preparation step, the dry solids, such as sand and clay, are mixed with tap water using a concrete mixer impeller (800 rpm) for 10 min. The mixture was placed into appropriate moulds, and the resulting samples were aged in a climate chamber (Angelantoni mod. ACS 1200) for 3 days at 30 °C with a relative humidity (U) of 40%, then increasing the temperature up to 70 °C. After the aging treatment no weight variation was observed.
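As an aside, the proportioning rule just described (a given clay fraction of the dry raw material plus a fixed water/solid ratio) can be written out explicitly. The short sketch below is purely illustrative: the batch mass is hypothetical and the actual masses of Table 3 are not reproduced here.

def clay_sand_batch(dry_mass_g, clay_fraction, water_solid_ratio=0.25):
    # Split a dry batch into clay and sand and add the mixing water.
    # clay_fraction is the clay share of the dry raw material (e.g. 0.2 for C20).
    clay = clay_fraction * dry_mass_g
    sand = (1.0 - clay_fraction) * dry_mass_g
    water = water_solid_ratio * dry_mass_g
    return {"clay_g": clay, "sand_g": sand, "water_g": water}

print(clay_sand_batch(dry_mass_g=1000.0, clay_fraction=0.2))  # hypothetical 1 kg dry batch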
Specimens containing fibers were prepared using the components reported in Table 4.
Table 4. Composition of the fiber specimens (columns: straw fillers [g], jute fibers [g], basalt fibers [g] — values not reproduced here)
The prepared samples contain a fiber amount of 2 or 3% w/w, calculated on the dry solid raw materials. The clay content was 20% w/w for all specimens reported in Table 4. In order to guarantee the workability of the clay/fiber mixture, an increase of the water/solid ratio was required, from 25.8% in the case of the lowest basalt content (B2) to 33.3% for the highest amounts of straw (S3) and jute (J3) fillers. Table 5 shows the composition of the mixtures used for the 300 x 300 x 20 mm panel specimens; fiber contents of 3 - 5% were studied, using a clay/solid ratio of 0.2.
Table 5. Characteristics of the 300 x 300 x 20 mm panel samples (REF-p, S3-p, J3-p, B3-p; fiber content [g] — values not reproduced here)
3. Experimental Tests
Experimental tests were performed to investigate the thermal and mechanical properties of the manufactured specimens containing different percentages of the used fibers. The apparatus used for the thermal conductivity measurements was a heat flow meter in a "single sample in a double configuration" (NETZSCH heat flow meter HFM 436/0/1), placed in a conditioned laboratory at a temperature of 23 ± 2 °C and relative humidity of 50 ± 5%, to ensure the test conditions required by the standard EN 12664:2002 "Thermal performance of building and products – Determination of thermal resistance by means of guarded hot plate and heat flow meter methods – Dry and moist products of medium and low thermal resistance". The specimen is placed between two plates at different temperatures (ΔT). The heat flux (Φ) which passes through the specimen is measured by heat flux transducers; the test ends when thermal equilibrium is reached. For the purpose of the analysis only a central portion (100 x 100 mm) is considered. If λ is the thermal conductivity of the specimen, d is the specimen thickness, ΔT is the temperature difference between the two faces of the specimen and A is the area through which the heat passes, the relation between these parameters which expresses the heat balance is given by the Fourier equation:
Φ = λ · A · ΔT / d (1)
The two transducers measure the heat flow through the specimen. The signal from a transducer (expressed in Volt) is proportional to the heat flow through the transducer. In the heat flow meter, the transducer area is the area through which the heat passes and is constant for all the specimens, then:
Φ = N · V (2)
where N is the calibration factor that relates the potential difference of the transducers to the heat flow through the specimen. Solving for λ, the thermal conductivity is given by the following relation:
λ = N · V · d / (ΔT · A) (3)
The average equilibrium temperature of the tests is set to 10 °C and the ΔT between the plates is 20 °C.
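To make relation (3) concrete, the following short sketch evaluates the thermal conductivity from the instrument readings; it is illustrative only, and the input values are hypothetical placeholders, not measurements from this study.

def thermal_conductivity(N, V, d, delta_T, A):
    # Eq. (3): lambda = N * V * d / (delta_T * A)
    # N: calibration factor, V: transducer signal, d: thickness [m],
    # delta_T: temperature difference [K], A: metering area [m2]
    return N * V * d / (delta_T * A)

# e.g. a 20 mm thick panel, 100 mm x 100 mm metering area, delta_T = 20 K
print(thermal_conductivity(N=0.3, V=20.0, d=0.020, delta_T=20.0, A=0.010))  # -> 0.6 W/(m K)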
In order to investigate the mechanical properties of the clay specimens containing fillers, three samples (160 x 40 x 40 mm) of each mixture were prepared for flexural and compressive strength tests, according to standard EN 1015-11 "Methods of test for mortar for masonry – Determination of flexural and compressive strength of hardened mortar". The test apparatus used was a Dual Column Instron 3369 frame for both the flexural and the compressive tests. Regarding the flexural tests, the instrumentation applies the load at a rate specified in standard UNI 1015-11; in this step the load rate was set to 50 N/s, since a low resistance is expected for these plaster mortars. For each specimen the maximum applied load (F) was recorded and expressed in Newton; the flexural strength (f) was then calculated in N/mm2, to the nearest 0.05 N/mm2, using the following equation:
f = 1.5 · F · l / (b · d²) (4)
where
b = width of the specimen (mm)
d = depth of the specimen (mm)
l = distance between the axes of the supports (mm).
The average flexural strength was calculated to the nearest 0.1 N/mm2. In the compression tests the load rate was set to around 300 N/s, according to standard EN 1015-11. For each specimen the maximum load applied (N) was recorded and expressed in Newton; the compressive strength (σ) was calculated to the nearest 0.05 N/mm2 as the maximum load carried by the specimen divided by its cross-sectional area. For all the prepared mixtures, the average compressive strength was calculated to the nearest 0.1 N/mm2.
4. Results and Discussion
Experimental tests were performed to obtain the thermal and mechanical properties of specimens containing different percentages of clay and fibers.
Figure 6. Results of mechanical tests on clay specimens
Figure 6 shows the flexural and compressive breaking strength of the clay specimens C10 – C50, together with the shrinkage expressed in millimetres. The compressive and flexural strength increases with increasing clay content; the same behavior is observed for the shrinkage. This is the typical behavior of clay and sand artifacts [1]. The shrinkage is an important parameter for the production process of clay specimens. In order to study the effect of fillers in sand/clay mixtures, we decided to use 20% of clay for the specimens with fibers, as a compromise between mechanical strength and shrinkage. The flexural stress-strain curves shown in Figure 7 indicate the brittle behavior of the adobe bricks: in brittle materials the break occurs without any noticeable change in the elongation rate.
Figure 7. Flexural stress-strain curves
The second column of Table 6 shows the average flexural breaking strength (expressed in MPa) of the REF specimen (without fibers) and of the specimens with fibres; the third one shows the percentage variation of the strength with respect to the REF specimen. The use of straw in the clay and sand mixture causes a decrease in the maximum flexural strength compared to the REF specimen. Moreover, the flexural strength decreases as the fiber amount increases; in particular, we observe a resistance reduction of 24% for the higher-content straw specimen (S5). On the contrary, the flexural strength rises with the fiber content, with respect to the REF specimen, in the case of the basalt and jute specimens. In particular, using the basalt fibers (B2-B3) the mechanical strength increases with the amount of filler, while increasing the jute content from 2 to 3% (J2-J3) leaves the flexural strength almost unchanged. Basalt fibers and jute fillers increase the mechanical performance of the reinforced adobe bricks, exhibiting a binder behavior; the presence of straw fillers only influences the density of the final product [14].
Table 6. Average flexural breaking strength (columns: flexural strength σ [MPa] and relative variation (σ − σref)/σref — values not reproduced here)
The second column of Table 7 shows the average compressive breaking strength of the REF specimen and of the specimens with the used fillers; the third column shows the percentage variation of the strength with respect to the REF specimen.
Table 7. Average compressive breaking strength (columns: compressive strength σ [MPa] and relative variation — values not reproduced here)
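The strength values summarized in Tables 6 and 7 follow directly from Eq. (4) and from the load/area ratio defined above. The sketch below is purely illustrative; the numerical inputs are hypothetical placeholders, not data from this study.

def flexural_strength(F, l, b, d):
    # Eq. (4): f = 1.5 * F * l / (b * d**2), in N/mm2 for inputs in N and mm
    return 1.5 * F * l / (b * d ** 2)

def compressive_strength(max_load, b, d):
    # Maximum load divided by the cross-sectional area, in N/mm2
    return max_load / (b * d)

print(flexural_strength(F=250.0, l=100.0, b=40.0, d=40.0))     # ~0.59 N/mm2
print(compressive_strength(max_load=3000.0, b=40.0, d=40.0))   # ~1.88 N/mm2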
The behavior of the samples under compression is similar to the flexural one. The increase of straw causes a decrease in the maximum compressive strength, while the presence of basalt and jute improves the mechanical performance of the material.
Figure 8. Stress-strain diagram of the J2 specimen
Figure 8 shows the stress-strain diagram of a specimen with 2% of jute. The trend of the curve, representative of all the samples containing jute as filler, shows a first linear section, up to the inflection where the collapse of the sand/clay matrix occurs, and a second linear one where the material presents a plastic behavior with a slowly increasing compressive strength. The presence of jute avoids the disintegration of the clay matrix, probably acting as a link/bridge between the clay crystals.
Table 8. Values of the thermal conductivity λ [W/(m·K)] for the panel specimens (REF-p and the fiber panels, e.g. B5-p — values not reproduced here)
Table 8 shows the values of λ for the examined materials (Figure 9). The reference specimen, composed exclusively of sand and clay, has a higher thermal conductivity than the specimens containing straw, basalt and jute fillers. The presence of the fibers involves a decrease in the final density of the product and a better behavior from the thermal point of view. Furthermore, the final values of the thermal conductivity λ decrease by 30 to 40% with respect to REF-p.
Figure 9. Specimens for thermal conductivity tests.
Comparing the obtained results with those of a previous work [12], it is possible to notice that the sand/clay matrix has a better thermal behavior than a cement mortar.
5. Conclusions
This paper presents an experimental study on adobe bricks with natural and biobased fillers. An amount of 20% of clay shows the best compromise between mechanical strength and shrinkage. The use of straw filler in the clay/sand mixture causes a decrease in the maximum flexural and compressive strength (compared to the REF specimen). On the contrary, the mechanical performance of adobe bricks containing basalt and jute fibers increases with the filler amount, exhibiting a binder behavior with the clay/sand mixture. This is probably due to the ability of these types of fibers to restrain the extension of cracks, reduce the extent of stress concentration at the tip of cracks, and delay the growth rate of cracks. Moreover, the presence of the fibers involves a decrease in the final density of the product and a better behavior from the thermal point of view.
Acknowledgment
The authors thank ILA LATERIZI s.r.l. from Bari (Italy) for the technical collaboration.
Nomenclature
N calibration factor, dimensionless
V potential difference between the transducers
F maximum load applied, N
f flexural strength, N.mm-2
l distance between the axes of the support rollers, mm
b width of specimen, mm
d depth of the specimen, mm
Greek symbols
λ thermal conductivity, W.m-1.K-1
ΔT temperature difference between the two faces of the plates, K
Φ heat flux, W
σ compressive tension, MPa
ε deformation, dimensionless
References
[1] Catalan G, Hegyi A, Dico C, Mircea C (2016). Determining the optimum addition of vegetable materials in adobe bricks. Procedia Technology 22: 259-265. https://doi.org/10.1016/j.protcy.2016.01.077
[2] El Fgaier F, Lafhay Z, Antczak E, Chapiseau C. (2016). Dynamic thermal performance of three types of unfired earth bricks. Applied Thermal Engineering 93: 377-383. https://doi.org/10.1016/j.applthermaleng.2015.09.009
[3] Cagnon H, Aubert JE, Coutand M, Magniont C. (2014). Hygrothermal properties of earth bricks. Energy and Buildings 80: 208-217.
https://doi.org/10.1016/j.enbuild.2014.05.024
[4] Labat M, Magniont C, Oudhof N, Aubert JE. (2016). From the experimental characterization of the hygrothermal properties of straw-clay mixtures to the numerical assessment of their buffering potential. Building and Environment 97: 69-81. https://doi.org/10.1016/j.buildenv.2015.12.004
[5] Sanjay MR, Madhu P, Jawaid M, Senthamaraikannan P, Senthil S, Pradeep S. (2018). Characterization and properties of natural fiber polymer composites: a comprehensive review. Journal of Cleaner Production 172: 566-581. https://doi.org/10.1016/j.jclepro.2017.10.101
[6] Di Bella G, Fiore V, Galtieri G, Borsellino C, Valenza A. (2014). Effects of natural fibers reinforcement in lime plasters (kenaf and sisal vs. Polypropylene). Construction and Building Materials 58: 159-165. https://doi.org/10.1016/j.conbuildmat.2014.02.026
[7] Sood M, Dwivedi G. (2017). Effect of fiber treatment on flexural properties of natural fiber reinforced composites: A review. Egyptian Journal of Petroleum. https://doi.org/10.1016/j.ejpe.2017.11.005
[8] Jordan W, Chester P. (2017). Improving the properties of banana fiber reinforced polymeric composites by treating the fibers. Procedia Engineering 200: 283-289. https://doi.org/10.1016/j.proeng.2017.07.040
[9] Iorio M, Santarelli ML, González-Gaitano G, González-Benito J. (2018). Surface modification and characterization of basalt fibers as potential reinforcement of concretes. Applied Surface Science 427: 1248-1256. https://doi.org/10.1016/j.apsusc.2017.08.196
[10] Alba MB, Cardinale T, De Fazio P, Lista GF, Sposato C. (2016). Tests for the characterization of fiber reinforced autoclaved aerated concrete. In: Proceedings of ECCM17-17th European Conference on Composite Materials, Munich, Germany, pp. 1-7.
[11] Cardinale T, Arleo G, Bernardo F, Feo A, De Fazio P. (2017). Thermal and mechanical characterization of panels made by cement mortar and sheep's wool fibres. Energy Procedia 140: 159-169. https://doi.org/10.1016/j.egypro.2017.11.132
[12] Cardinale T, Arleo G, Bernardo F, Feo A, De Fazio P. (2017). Investigations on thermal and mechanical properties of cement mortar with reed and straw fibers. International Journal of Heat and Technology 35: S375-S382. https://doi.org/10.18280/ijht.35Sp0151
[13] González D. (2014). Energy and carbon embodied in straw and clay wall blocks produced locally in the Andean Patagonia. Energy and Buildings 70: 15-22. https://doi.org/10.1016/j.enbuild.2013.11.003
[14] Razmi, Mirsayar MM. (2017). On the mixed mode I/II fracture properties of jute fiber-reinforced concrete. Construction and Building Materials 148: 512-520. https://doi.org/10.1016/j.conbuildmat.2017.05.034
[15] Zhou X, Hamidreza Ghaffar S, Dong W, Olayinka, Fan M. (2013). Fracture and impact properties of short discrete jute fibre-reinforced cementitious composites. Material and Design 49: 35-47. https://doi.org/10.1016/j.matdes.2013.01.029
[16] Ashour T, Korjenic A, Korjenic S, Wu W. (2015). Thermal conductivity of unfired earth bricks reinforced by agricultural wastes with cement and gypsum. Energy and Buildings 104: 139-146. https://doi.org/10.1016/j.enbuild.2015.07.016
[17] Xie Y, Hill CAS, Xiao Z, Militz H, Mai C. (2010). Silane coupling agents used for natural fiber/polymer composites: A review. Composites Part A: Applied Science and Manufacturing 41: 806-819. https://doi.org/10.1016/j.compositesa.2010.03.005
[18] Carbonaro S, Tedesco F, Thiebat S, Fantucci V, Serra, Dutto M. (2016).
An integrated design approach to the development of a vegetal-based thermal plaster for the energy retrofit of buildings. Energy and Buildings 124: 46-59. https://doi.org/10.1016/j.enbuild.2016.03.063
[19] Brouard Y, Belayachi N, Ranganathan M, Hoxha D, Méo S. (2017). Experimental assessment of hygrothermal properties of clay – sunflower (helianthus annuus) and rape straw (brassica napus) bio-composites. In: Proceedings of the 2nd International Conference on Bio-based Building Materials, Clermont-Ferrand, France, ICBBM, pp. 376-380.
[20] Liu J, Zhou H, Ouyang P. (2013). Effect of straw mixing amount on mechanical properties of admixture adding hollow block. Journal of Wuhan University of Technology-Mater. Sci. 28(3): 508-513. https://doi.org/10.1007/s11595-013-0722-5
[21] McGregor F, Heath A, Maskell D, Fabbri A, Morel JC. (2015). A review on the buffering capacity of earth buildings materials. Proceedings of the Institution of Civil Engineers-Construction Materials 169(5): 241-251. https://doi.org/10.1680/jcoma.15.00035
CommonCrawl
The Light Course Lecture V 27 December 1919, Stuttgart Today I will begin by shewing, as well as may be with our limited resources, the experiment of which we spoke last time. You will remember: when an incandescent solid body spreads its light and we let this light go through a prism, we get a "spectrum", a luminous picture, very like what we should get from the Sun, (compare Figure IVf), towards the end of Lecture IV). Now we can also obtain a luminous picture with the light that spreads from a glowing gas; however this picture only shews one or more single lines of light or little bands of light at different places, according to the substance used, (Figure IVg). The rest of the spectrum is stunted, so to speak. By very careful experiment, it is true, we should perceive that everything luminous gives a complete spectrum — expending all the way from red to violet, to say no more. Suppose for example we make a spectrum with glowing sodium gas: in the midst of a very feeble spectrum there is at one place a far more intense yellow line, making the rest seem even darker by contrast. Sodium is therefore often spoken of as giving only this yellow line. And now we come to the remarkable fact, which, although not unknown before, was brought to light above all in 1859 by the famous experiment of Kirchhoff and Bunsen. If we arrange things so that the source of light generating the continuous spectrum and the one generating, say, the sodium line, can take effect as it were simultaneously, the sodium line will be found to act like an untransparent body. It gets in the way of the quality of light which would be appearing at this place (i.e. in the yellow) of the spectrum. It blots it out, so that we get a black line here in place of yellow, (Figure IVh). Simply to state the fact, this then is what we have to say: For the yellow of the spectrum, another yellow (the strength of which must be at least equal to the strength of light that is just being developed at this place of the spectrum) acts like an opaque body. As you will presently see, the elements we are compiling will pave the way to an understanding also of this phenomenon. In the first place however we must get hold of the pure facts. We will now shew you, as well as we are able, that this dark line does really appear in the spectrum when we interpose the glowing sodium. We have not been able to arrange the experiment so as to project the spectrum on to a screen. Instead we will observe the spectrum by looking straight into it with our eyes. For it is possible to see the spectrum in this way too; it then appears displaced downward instead of upward, moreover the colours are reversed. We have already discussed, why it is that the colours appear in this way when we simply look through the prism. By means of this apparatus, we here generate the cylinder of light; we let it go through here, and, looking into it, we see it thus refracted. (The experiment was shewn to everyone in turn). To use the short remaining time — we shall now have to consider the relation of colours to what we call "bodies". As a transition to this problem looking for the relations between the colours and what we commonly call "bodies" — I will however also shew the following experiment. You now see the complete spectrum projected on to the screen. Into the path of the cylinder of light I place a trough in which there is a little iodine dissolved in carbon disulphide. Note how the spectrum is changed. 
When I put into the path of the cylinder of light the solution of iodine in carbon disulphide, this light is extinguished. You see the spectrum clearly divided into two portions; the middle part is blotted out. You only see the violet on the one side, the reddish-yellow on the other. In that I cause the light to go through this solution — iodine in carbon disulphide — you see the complete spectrum divided into two portions; you only see the two poles on either hand. It has grown late and I shall now only have time for a few matters of principle. Concerning the relation of the colours to the bodies we see around us (all of which are somehow coloured in the last resort), the point will be explained how it comes about that they appear coloured at all. How comes it in effect that the material bodies have this relation to the light? How do they, simply by dint of their material existence so to speak, develop such relation to the light that one body looks red, another blue, and so on? It is no doubt simplest to say: When colourless sunlight — according to the physicists, a gathering of all the colours — falls on a body that looks red, this is due to the body's swallowing all the other colours and only throwing back the red. With like simplicity we can explain why another body appears blue. It swallows the remaining colours and throws back the blue alone. We on the other hand have to eschew these speculative explanations and to approach the fact in question — namely the way we see what we call "coloured bodies" — by means of the pure facts. Fact upon fact in proper sequence will then at last enable us in time to "catch" — as it were, to close in upon — this very complex phenomenon. The following will lead us on the way. Even in the 17th Century, we may remember, when alchemy was still pursued to some extent, they spoke of so-called "phosphores" or light-bearers. This is what they meant: — A Bologna cobbler, to take one example, was doing some alchemical experiments with a kind of Heavy Spar (Barytes). He made of it what was then called "Bologna stone". When he exposed this to the light, a strange phenomenon occurred. After exposure the stone went on shining for a time, emitting a certain coloured light. The Bologna stone had acquired a relation to the light, which it expressed by being luminous still after exposure — after the light had been removed. Stones of this kind were then investigated in many ways and were called "phosphores". If you come across the word "phosphor" or "phosphorus" in the literature of that time, you need not take it to mean what is called "Phosphorus" today; it refers to phosphorescent bodies of this kind — bearers of light, i.e. phos-phores. However, even this phenomenon of after-luminescence — phosphorescence — is not the simplest. Another phenomenon is really the simple one. If you take ordinary paraffin oil and look through it towards a light, the oil appears slightly yellow. If on the other hand you place yourself so as to let the light pass through the oil while you look at it from behind, the oil will seem to be shining with a bluish light — only so long, however, as the light impinges on it. The same experiment can be made with a variety of other bodies. It is most interesting if you make a solution of plant green — chlorophyll (Figure Va). Look towards the light through the solution and it appears green.
But if you take your stand to some extent behind it — if this (Figure Va) is the solution and this the light going through it, while you look from behind to where the light goes through — the chlorophyll shines back with a red or reddish light, just as the paraffin shone blue. Figure 5a There are many bodies with this property. They shine in a different way when, so to speak, they of themselves send the light back — when they have somehow come into relation to the light, changing it through their own nature — than when the light goes through them as through a transparent body. Look at the chlorophyll from behind: we see — so to speak — what the light has been doing in the chlorophyll; we see the mutual relation between the light and the chlorophyll. When in this way a body shines with one kind of light while illumined by another kind of light, we call the phenomenon Fluorescence. And, we may say: what in effect is Phosphorescence? It is a Fluorescence that lasts longer. For it is Fluorescence when the chlorophyll, for instance, shines with a reddish light so long as it is exposed to light. When there is Phosphorescence on the other hand, as with the Bologna stone, we can take the light away and the thing still goes on shining for a time. It thus retains the property of shining with a coloured light, — a property the chlorophyll does not retain. So you have two stages. The one is Fluorescence: we make a body coloured so long as we illumine it. The second is Phosphorescence: we cause a body to remain coloured still for a certain time after illumination. And now there is a third stage: the body, as an outcome of whatever it is that the light does with it, appears with a lasting colour. We have this sequence: Fluorescence, Phosphorescence, Colouredness-of-bodies. Thus we have placed the phenomena, in a manner of speaking, side by side. What we must try to do is to approach the phenomena rightly with our thinking, our forming of ideas. There is another fundamental idea which you will need to get hold of today, for we shall afterwards want to relate it to all these other things. Please, once again, only think quite exactly of what I shall bring forward. Think as precisely as you can. I will remind you again (as once before in these lectures) of the formula for a velocity, say \(v\). A velocity is expressed, as you know, in dividing \(s\), the distance which the mobile object passes through, by the time \(t\). This therefore is the formula: $$v=\frac{s}{t}$$ Now the opinion prevails that what is actually given in real Nature in such a case is the distance \(s\) the body passes through, and the time \(t\) it takes to do it. We are supposed to be dividing the real distance \(s\) by the real time \(t\), to get the velocity \(v\), which as a rule is not regarded as being quite so real but more as a kind of function, an outcome of the division sum. Thus the prevailing opinion. And yet in Nature it is not so. Of the three magnitudes — velocity, space and time, — velocity is the only one that has reality. What is really there in the world outside us is the velocity; the \(s\) and \(t\) we only get by splitting up the given totality, the \(v\), into two abstract entities. We only arrive at these on the basis of the velocity, which is really there. This then, to some extent, is our procedure. We see a so-called "body" flowing through space with a certain velocity. That it has this velocity, is the one real thing about it. But now we set to work and think. 
We no longer envisage the quick totality, the quickly moving body; instead, we think in terms of two abstractions. We dismember, what is really one, into two abstractions. Because there is a velocity, there is a distance moved through. This distance we envisage in the first place, and in the second place we envisage the time it takes to do it. From the velocity, the one thing actually there, we by our thinking process have sundered space and time; yet the space in question is not there at all save as an outcome of the velocity, nor for that matter is the time. The space and time, compared to this real thing which we denote as \(v\), are no realities at all, they are abstractions which we ourselves derive from the velocity. We shall not come to terms with outer reality, my dear Friends, till we are thoroughly clear on this point. We in our process of conception have first created this duality of space and time. The real thing we have outside us is the velocity and that alone; as to the "space" and "time", we ourselves have first created them by virtue of the two abstractions into which — if you like to put it so — the velocity can fall apart for us. From the velocity, in effect, we can separate ourselves, while from the space and time we cannot; they are within our perceiving, — in our perceiving activity. With space and time we are one. Much is implied in what I am now saying. With space and time we are one. Think of it well. We are not one with the velocity that is there outside us, but we are one with space and time. Nor should we, without more ado, ascribe to external bodies what we ourselves are one with; we should only use it to gain a proper idea of these external bodies. All we should say is that through space and time, with which we ourselves are very intimately united, we learn to know and understand the real velocity. We should not say "The body moves through such and such a distance"; we ought only to say: "The body has a velocity". Nor should we say, "The body takes so much time to do it," but once again only this: "The body has a velocity". By means of space and time we only measure the velocity. The space and time are our own instruments. They are bound to us, — that is the essential thing. Here once again you see the sharp dividing line between what is generally called "subjective" — here, space and time — and the "objective" thing — here, the velocity. It will be good, my dear Friends, if you will bring this home to yourselves very clearly; the truth will then dawn upon you more and more: \(v\) is not merely the quotient of \(s\) and \(t\). Numerically, it is true, \(v\) is expressed by the quotient of \(s\) and \(t\). What I express by this number \(v\) is however a reality in its own right — a reality of which the essence is, to have velocity. What I have here shewn you with regard to space and time — namely that they are inseparable from us and we ought not in thought to separate ourselves from them — is also true of another thing. But, my dear Friends (if I may say this in passing), people are still too much obsessed with the old Konigsberg habit, by which I mean, the Kantian idea. The "Konigsberg" habit must be got rid of, or else it might be thought that I myself have here been talking "Konigsberg", as if to say "Space and Time are within us." But that is not what I am saying. I say that in perceiving the reality outside us the — velocity — we make use of space and time for our perception. In effect, space and time are at once in us and outside us. 
The point is that we unite with space and time, while we do not unite with the velocity. The latter whizzes past us. This is quite different from the Kantian idea. Now once again: what I have said of space and time is also true of something else. Even as we are united by space and time with the objective reality, while we first have to look for the velocity, so in like manner, we are in one and the same element with the so-called bodies whenever we behold them by means of light. We ought not to ascribe objectivity to light any more than to space and time. We swim in space and time just as the bodies swim in it with their velocities. So too we swim in the light, just as the bodies swim in the light. Light is an element common to us and the things outside us — the so-called bodies. You may imagine therefore: Say you have gradually filled the dark room with light, the space becomes filled with something — call it \(x\), if you will — something in which you are and in which the things outside you are. It is a common element in which both you, and that which is outside you, swim. But we have still to ask: How do we manage to swim in light? We obviously cannot swim in it with what we ordinarily call our body. We do however swirl in it with our etheric body. You will never understand what light is without going into these realities. We with our etheric body swim in the light (or, if you will, you may say, in the light-ether; the word does not matter in this connection). Once again therefore: With our etheric body we are swimming in the light. Now in the course of these lectures we have seen how colours arise — and that in many ways — in and about the light itself. In the most manifold ways, colours arise in and about the light; so also they arise, or they subsist, in the so-called bodies. We see the ghostly, spectral colours so to speak, — those that arise and vanish within the light itself. For if I only cast a spectrum here it is indeed like seeing spectres; it hovers, fleeting, in space. Such colours therefore we behold, in and about the light. In the light, I said just now, we swim with our etheric body. How then do we relate ourselves to the fleeting colours? We are in them with our astral body; it is none other than this. We are united with the colours with our astral body. You have no alternative, my dear Friends but to realise that when and wheresoever you see colours, with your astrality you are united with them. If you would reach any genuine knowledge you have no alternative, but must say to yourselves: The light remains invisible to us; we swim in it. Here it is as with space and time; we ought not to call them objective, for we ourselves are swimming in them. So too we should regard the light as an element common to us and to the things outside us; whilst in the colours we have to recognize something that can only make its appearance inasmuch as we through our astral body come into relation to what the light is doing there. Assume now that in this space \(ABCD\) you have in some way brought about a phenomenon of colour — say, a spectrum. I mean now, a phenomenon that takes its course purely within the light. You must refer it to an astral relation to the light. But you may also have the phenomenon of colour in the form of a coloured surface. This therefore — from \(A\) to \(C,\) say — may be appearing to you as a coloured body, a red body for example. We say, then, \(AC\) is red. You look towards the surface of the body, and, to begin with, you will imagine rather crudely. 
Beneath the surface it is red, through and through. This time, you see, the case is different. Here too you have an astral relation; but from the astral relation you enter into with the colour in this instance you are separated by the bodily surface. Be sure you understand this rightly! In the one instance you see colours in the light — spectral colours. There you have astral relations of a direct kind; nothing is interposed between you and the colours. When on the other hand you see the colours of bodily objects, something is interposed between you and your astral body, and through this something you none the less entertain astral relations to what we call "bodily colours". Please take these things to heart and think them through. For they are basic concepts — very important ones — which we shall need to elaborate. Only on these lines shall we achieve the necessary fundamental concepts for a truer Physics. One more thing I would say in conclusion. What I am trying to present in these lectures is not what you can get from the first text-book you may purchase. Nor is it what you can get by reading Goethe's Theory of Colour. It is intended to be, what you will find in neither of the two, and what will help you make the spiritual link between them. We are not credulous believers in the Physics of today, nor need we be of Goethe. It was in 1832 that Goethe died. What we are seeking is not a Goetheanism of the year 1832 but one of 1919, — further evolved and developed. What I have said just now for instance — this of the astral relation — please think it through as thoroughly as you are able.
CommonCrawl
\begin{definition}[Definition:Left Hand Side] In an equation: :$\text {Expression $1$} = \text {Expression $2$}$ the term $\text {Expression $1$}$ is the '''left hand side'''. \end{definition}
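For example, in the equation:
:$2 x + 3 = 7$
the left hand side is $2 x + 3$.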
ProofWiki
\begin{document} \setcounter{page}{1} \markboth{M. Isaev, R.G. Novikov}{ Reconstruction from the Fourier transform on the ball via PSWFs} \title{ Reconstruction from the Fourier transform on the ball via prolate spheroidal wave functions \thanks{The first author's research is supported by the Australian Research Council Discovery Early Career Researcher Award DE200101045. } } \date{} \author{ Mikhail Isaev\\ \small School of Mathematics\\[-0.8ex] \small Monash University\\[-0.8ex] \small Clayton, VIC, Australia\\ \small\texttt{[email protected]} \and Roman G. Novikov\\ \small CMAP, CNRS, Ecole Polytechnique\\[-0.8ex] \small Institut Polytechnique de Paris\\[-0.8ex] \small Palaiseau, France\\ \small IEPT RAS, Moscow, Russia\\ \small\texttt{[email protected]} } \maketitle \begin{abstract} We give new formulas for finding a compactly supported function $v$ on ${\mathbb{R}}^d$, $d\geqslant 1$, from its Fourier transform $\mathcal{F} v$ given within the ball $B_r$. For the one-dimensional case, these formulas are based on the theory of prolate spheroidal wave functions (PSWF's). In multidimensions, well-known results of the Radon transform theory reduce the problem to the one-dimensional case. Related results on stability and convergence rates are also given. \noindent \\ {\bf Keywords:} ill-posed inverse problems, band-limited Fourier transform, prolate spheroidal wave functions, Radon transform, H\"{o}lder-logarithmic stability. \\\noindent \textbf{AMS subject classification:} 42A38, 35R30, 49K40 \end{abstract} \section{Introduction}\label{S:intro} Following D. Slepian, H. Landau, and H. Pollak (see, for example, the survey paper \cite{Slepian1983}), we consider the compact integral operator $\mathcal{F}_c$ on $\mathcal{L}^2([-1,1])$ defined by \begin{equation}\label{def:Fc} \mathcal{F}_c[f] (x) := \int_{-1}^1 e^{i c xy} f(y)dy, \end{equation} where $f$ is a test function and the parameter $c>0$ is the bandwidth. Let ${\mathbb{N}}:=\{0,1\ldots\}$. The eigenfunctions $(\psi_{j,c})_{j \in {\mathbb{N}}}$ of $\mathcal{F}_c$ are \emph{prolate spheroidal wave functions} (PSWFs). These functions are real-valued and form an orthonormal basis in $\mathcal{L}^2([-1,1])$. Let $(\mu_{j, c})_{j \in {\mathbb{N}}}$ denote the corresponding eigenvalues. It is known that all these eigenvalues are simple and non-zero, so we can assume that $0<|\mu_{j+1,c}| < |\mu_{j,c}|$ for all $j \in {\mathbb{N}}$. The properties of $(\psi_{j,c})_{j \in {\mathbb{N}}}$ and $(\mu_{j, c})_{j \in {\mathbb{N}}}$ are recalled in Section \ref{S:prolate} of this paper. In particular, we have that \begin{equation}\label{Fc-dec} \mathcal{F}_c [f] (x) = \sum_{j \in {\mathbb{N}}} \mu_{j,c}\psi_{j,c}(x) \int_{-1}^1 \psi_{j,c} (y) f(y) dy, \end{equation} and, for $g = \mathcal{F}_c [f]$, \begin{equation}\label{f:inverse} \mathcal{F}_{c}^{-1} [g](y) = \sum_{j \in {\mathbb{N}}} \dfrac{1}{\mu_{j,c}}\psi_{j,c}(y) \int_{-1}^1 \psi_{j,c} (x) g(x)dx, \end{equation} where $\mathcal{F}_{c}^{-1}$ is the inverse operator, that is $\mathcal{F}_{c}^{-1} [\mathcal{F}_c [f]] \equiv f $ for all $f \in \mathcal{L}^2[-1,1]$. The operator $\mathcal{F}_c$ appears naturally in the theory of the classical Fourier transform $\mathcal{F}$ defined in the multidimensional case $d\geqslant 1$ by \begin{equation} \label{eq:Fourier} \mathcal{F}[v] (p) := \dfrac{1}{(2\pi)^d}\int\limits_{\mathbb{R}^d} e^{i pq } v(q) dq, \qquad p\in \mathbb{R}^d, \end{equation} where $v$ is a complex-valued test function on ${\mathbb{R}}^d$. 
To avoid any possible confusion with $\mathcal{F}_c$, we employ the simplified notation $\hat{v} := \mathcal{F}[v]$ throughout the paper. Let \begin{equation} B_{\rho} := \left\{q\in \mathbb{R}^d : |q| < \rho \right\}, \qquad \text{for any $\rho>0$.} \nonumber \end{equation} We consider the following inverse problem. \begin{Problem}\label{Problem} Let $d\geqslant 1$ and $ r,\sigma>0$. Find $v \in \mathcal{L}^2({\mathbb{R}}^d)$ from $\hat{v}$ given on the ball $B_r$ (possibly with some noise), under a priori assumption that $v$ is supported in $B_{\sigma}$. \end{Problem} Problem \ref{Problem} is a classical problem of the Fourier analysis, inverse scattering, and image processing; see, for example, \cite{LRC1987, AMS2009, BM2009, Papoulis1975, CF2014, Gerchberg1974, IN2020, IN2020+} and references therein. In the present work, we suggest a new approach to Problem \ref{Problem}, proceeding from the singular value decomposition formulas \eqref{def:Fc}, \eqref{Fc-dec} and further results of the PSWF theory. Surprisingly, to our knowledge, the PSWF theory was omitted in the context of Problem \ref{Problem} in the literature even though it is quite natural. In particular, in dimension $d=1$, Problem \ref{Problem} reduces to finding a function $f\in \mathcal{L}^2([-1,1])$ from $\mathcal{F}_c[f]$ (possibly with some noise). In multidimensions, in addition to the PSWF theory, we use inversion methods for the classical Radon transform $\mathcal{R}$ ; see, for example \cite{Naterrer2001, Radon}. Recall that $\mathcal{R}$ is defined by \begin{equation}\label{def:R} \mathcal{R} [v] (y, \theta) := \int_{q\in {\mathbb{R}}^d \,:\, q \theta =y } v(q) dq, \qquad y\in {\mathbb{R}},\ \theta \in \mathbb{S}^{d-1}, \end{equation} where $v$ is a complex-valued test function on ${\mathbb{R}}^d$, $d \geqslant 1$. In the present work, for simplicity, we define the inverse Radon transform $ \mathcal{R}^{-1}$ via the projection theorem; see formula \eqref{inv:radon} for details. \begin{Theorem}\label{T1}Let $d\geqslant 1$, $r,\sigma>0$ and $c = r\sigma$. Let $v\in \mathcal{L}^2({\mathbb{R}}^d)$ and $\operatorname{supp} v \subset B_{\sigma}$. Then, its Fourier transform $\hat{v}$ restricted to $B_r$ determines $v$ via the following formulas: \begin{align*} v(q) &= \mathcal{R}^{-1} [f_{r, \sigma}]( \sigma^{-1}q), \qquad q\in {\mathbb{R}}^d,\\ f_{r,\sigma}(y,\theta) &:= \begin{cases} \mathcal{F}_c^{-1}[g_{r,\theta}](y), &\text{if } y\in [-1,1]\\ 0, & \text{otherwise}, \end{cases} \\ g_{r,\theta} (x)&:= \left(\dfrac{2\pi}{\sigma}\right)^d \hat{v} (r x\theta), \qquad x\in [-1,1],\ \theta \in \mathbb{S}^{d-1}, \end{align*} where $\mathcal{F}_c^{-1}$ is defined by \eqref{f:inverse} and $ \mathcal{R}^{-1}$ is the inverse Radon transform. \end{Theorem} \begin{Remark}\label{rem1} For $d=1$, the formulas of Theorem \ref{T1} reduce to \begin{align*} v(q) = \mathcal{F}_c^{-1}[g_{r}]( \sigma^{-1}q), \qquad g_r(x) := \dfrac{2\pi}{\sigma}\hat{v}(rx), \end{align*} where $q\in (-\sigma,\sigma)$ and $x\in [-1,1]$. \end{Remark} We prove Theorem \ref{T1} in Section \ref{S:T1}. Unfortunately, the reconstruction procedure given in Theorem \ref{T1} and Remark \ref{rem1} is severely unstable. The reason is that the numbers $(\mu_{j, c})_{j \in {\mathbb{N}}}$ decay superexponentially as $j \rightarrow \infty$; see formulas \eqref{eq:eigenrel} and \eqref{eigenestimate}. 
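For orientation, this superexponential decay is easy to observe numerically. The following short script is our rough illustration only (it is not used anywhere in the proofs; the discretization size $m$ and the bandwidth $c$ are arbitrary choices): it approximates $|\mu_{n,c}|$ by the moduli of the eigenvalues of a Nystr\"om discretization of the kernel $e^{icxy}$ of $\mathcal{F}_c$ on a Gauss--Legendre grid.
\begin{verbatim}
# Rough illustration (not used in the proofs): Nystrom discretization of F_c.
import numpy as np

def mu_moduli(c, m=200):
    x, w = np.polynomial.legendre.leggauss(m)   # Gauss-Legendre rule on [-1, 1]
    a = np.exp(1j * c * np.outer(x, x))         # kernel e^{i c x y}
    a = np.sqrt(w)[:, None] * a * np.sqrt(w)[None, :]
    # the moduli of the eigenvalues approximate |mu_{n,c}|, n = 0, 1, ...
    return np.sort(np.abs(np.linalg.eigvals(a)))[::-1]

print(mu_moduli(c=10.0)[:25])   # a plateau followed by superexponential decay
\end{verbatim}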
To overcome this difficulty, we approximate $ \mathcal{F}_c^{-1}$ by the operator $\mathcal{F}_{n,c}^{-1}$ defined by \begin{equation}\label{def:Fnc} \mathcal{F}_{n,c}^{-1} [w] (y) := \sum_{j=0}^n \dfrac{1}{\mu_{j,c}}\psi_{j,c}(y) \int_{-1}^1 \psi_{j,c} (x) w(x)dx. \end{equation} Note that \eqref{def:Fnc} correctly defines the operator $\mathcal{F}_{n,c}^{-1}$ on $\mathcal{L}^2([-1,1])$ for any $n \in {\mathbb{N}}$. Let \begin{equation}\label{def:pi_n} \pi_{n,c}[f]:= \sum_{j =0 }^n \hat{f}_{j,c} \psi_{j,c}, \qquad \hat{f}_{j,c} := \int_{-1}^1 \psi_{j,c} (y) f(y) dy. \end{equation} That is, $\pi_{n,c}[\cdot]$ is the orthogonal projection in $\mathcal{L}^2([-1,1])$ onto the span of the first $n+1$ functions $(\psi_{j,c})_{j \leqslant n}$. \begin{Lemma}\label{L:general} Let $f,w \in \mathcal{L}^2([-1,1])$ and $ \| \mathcal{F}_c [f] - w\|_{\mathcal{L}^2} \leqslant \delta $ for some $\delta\geqslant 0$. Then, for any $n \in {\mathbb{N}}$, \begin{equation}\label{eq:general} \|f - \mathcal{F}_{n,c}^{-1} [w]\|_{\mathcal{L}^2([-1,1])} \leqslant \dfrac{\delta }{|\mu_{n,c}|} + \|f - \pi_{n,c} [f]\|_{\mathcal{L}^2([-1,1])}. \end{equation} \end{Lemma} Estimates of the type \eqref{eq:general} are of general nature for operators admitting a singular value decomposition like \eqref{Fc-dec}. For completeness of the presentation, we prove Lemma~\ref{L:general} in Section~\ref{S:Lemma}. Combining Theorem \ref{T1}, Remark \ref{rem1}, Lemma \ref{L:general}, inversion methods for the Radon transform $\mathcal{R}$, and known estimates of the PSWF theory for $|\mu_{n,c}|$ and $\|f - \pi_{n,c} [f]\|_{\mathcal{L}^2}$ (see Section \ref{S:prolate}) yields numerical methods for Problem \ref{Problem}. In this connection, in the present work we give a regularised version of the reconstruction procedure of Theorem \ref{T1}; see Theorem \ref{T:multi}, Theorem \ref{T:detailed} and Corollary \ref{C:main}. For $\alpha,\delta \in(0,1)$, let \begin{equation}\label{def:n-star} n^* = n^*(c, \alpha,\delta) = \left\lfloor 3+ \tau \dfrac{ec}{4}\right\rfloor, \end{equation} where $\lfloor\cdot \rfloor$ denotes the floor function and $\tau = \tau(c,\alpha,\delta) \geqslant 1$ is the solution of the equation \begin{equation}\label{eq-tau} \tau \log \tau = \dfrac{4}{ec} \alpha \log (\delta^{-1}). \end{equation} Let \begin{equation}\label{def:L2r} \begin{aligned} \mathcal{L}^2_r &:= \{w \in \mathcal{L}^2 (B_r) \,:\, \|w\|_r < \infty\},\\ \|w\|_{r}&:= \left( \int_{B_r} p^{1-d}|w(p)|^2 dp \right)^{1/2}. \end{aligned} \end{equation} \begin{Theorem}\label{T:multi} Let the assumptions of Theorem \ref{T1} hold and $v \in \mathcal{H}^\nu({\mathbb{R}}^d)$ for some $\nu \geqslant 0$ (and $\nu >0$ for $d=1$). Suppose that $w \in \mathcal{L}^2_r $ and $\|w - \hat{v}\|_{r} \leqslant \delta N$ for some $\delta \in (0,1)$. Let $\alpha \in (0,1)$ and $n^*$ be defined by \eqref{def:n-star}. Let \begin{align*} v^{\delta} (q) &:= \mathcal{R}^{-1}\left[u_{r,\sigma}\right] (\sigma^{-1}q), \qquad q \in {\mathbb{R}}^d,\\ u_{r,\sigma}(y,\theta)&:= \begin{cases} \mathcal{F}^{-1}_{n^*,c}[w_{r,\theta} ](y), &\text{if } y\in [-1,1],\\ 0, & \text{otherwise}, \end{cases} \\ w_{r,\theta}(x)&:= \left(\dfrac{2\pi}{\sigma}\right)^d w(r x\theta), \qquad x\in [-1,1],\ \theta \in \mathbb{S}^{d-1}. 
\end{align*} Then, for any $\beta \in (0, 1-\alpha)$ and any $\mu \in (0,\nu+\frac{d-1}{2})$, \begin{equation}\label{eq:multi} \|v - v^{\delta}\|_{\mathcal{H}^{-(d-1)/2}({\mathbb{R}}^d)} \leqslant \kappa_1 N \delta^{\beta} + \kappa_2 \|v\|_{\mathcal{H}^\nu({\mathbb{R}}^d)} \left( \log \delta^{-1}\right)^{-\mu}, \end{equation} where $\kappa_1 = \kappa_1(c, d, r, \sigma,\alpha,\beta)>0$ and $\kappa_2 = \kappa_2(c, d, r,\sigma,\alpha,\nu,\mu)>0$. \end{Theorem} Similarly to Remark \ref{rem1}, the statement of Theorem \ref{T:multi} simplifies significantly for the case $d=1$; see Corollary \ref{C:main}. We prove Theorem \ref{T:multi} in Section \ref{S:multi}. The parameter $N$ from Theorem \ref{T:multi} can be considered as an a priori upper bound for $\|\hat{v}\|_r$. Indeed, the assumption $\|w - \hat{v}\|_{r} \leqslant \delta \|\hat{v}\|_r$ is natural. If the noise level is such that $\|w - \hat{v}\|_{r} \geqslant \|\hat{v}\|_r$, then the given data $w$ tells about $v$ as little as the trivial function $w_0 \equiv 0$. An accurate reconstruction is hardly possible in this case, since it is equivalent to no data given at all. The function $v^\delta$ in Theorem \ref{T:multi} is not compactly supported, in general; see also the related remark about $v$ after Lemma \ref{L:H-H}. Nevertheless, only $v^\delta$ restricted to $B_\sigma$ is of interest under the assumptions of Theorem \ref{T:multi}. Our stability estimate \eqref{eq:multi} is given in $\mathcal{H}^s$ with $s\leqslant 0$. One can improve the regularity in such estimates using the apodized reconstruction $\phi * v^\delta$, where $*$ denotes the convolution operator and $\phi$ is an appropriate sufficiently regular non-negative compactly supported function with $\|\phi\|_{\mathcal{L}^1({\mathbb{R}}^d)}=1$; see, for example, \cite[Section~6.1]{IN2020}. In particular, \eqref{eq:multi} implies estimates for $\phi* v- \phi * v^\delta$ in $\mathcal{H}^t$ with $t\geqslant 0$. Applying Theorem \ref{T:multi} with $v:=v_1- v_2$ and $w\equiv 0$, we get the following result. \begin{Corollary}\label{C:multi2} Let the assumptions of Theorem 1.1 hold for $ v:=v_1- v_2$. Let $v_1- v_2 \in \mathcal{H}^\nu({\mathbb{R}}^d)$ for some $\nu \geqslant 0$ (and $\nu >0$ for $d=1$). Suppose that $\|\hat{v}_1\ - \hat{v}_2\|_{r} \leqslant \delta N$ for some $\delta \in (0,1)$ and $N>0$. Let $\alpha \in (0,1)$. Then, for any $\beta \in (0, 1-\alpha)$ and any $\mu \in (0,\nu+\frac{d-1}{2})$, \begin{equation}\label{eq:multi2} \|v_1 - v_2 \|_{\mathcal{H}^{-(d-1)/2}({\mathbb{R}}^d)} \leqslant \kappa_1 N \delta^{\beta} + \kappa_2 \| v_1 - v_2 \|_{\mathcal{H}^\nu({\mathbb{R}}^d)} \left( \log \delta^{-1}\right)^{-\mu}, \end{equation} where $\kappa_1 = \kappa_1(c,d,r,\sigma,\alpha,\beta)$ and $\kappa_2= \kappa_2(c,d,r,\sigma,\alpha,\nu,\mu)$ are the same as in \eqref{eq:multi}. \end{Corollary} The present work continues studies of \cite{IN2020,IN2020+}, where we approached Problem~\ref{Problem} via a H\"older-stable extrapolation of $\hat{v}$ from $B_r$ to a larger ball, using truncated series of Chebyshev polynomials. The reconstruction of the present work is essentially different; in particular, it does not use any extrapolation. However, the resulting stability estimates are analogous for both reconstructions. 
In particular, estimate \eqref{eq:multi} resembles \cite[Theorem~3.1]{IN2020} in dimension $d=1$ and resembles \cite[Theorem~3.2]{IN2020+} (with $s= -\frac{d-1}{2}$ and $\kappa=1$) in dimension $d\geqslant 1$; estimate \eqref{eq:multi2} resembles \cite[Corollary 3.3]{IN2020} in dimension $d=1$ and resembles \cite[Corollary~3.4]{IN2020+} (with $s= -\frac{d-1}{2}$ and $\kappa=1$) in dimension $d\geqslant 1$. Note also that, in the domain of coefficient inverse problems, estimates of the form \eqref{eq:multi} and \eqref{eq:multi2} are known as H\"older-logarithmic stability estimates; see \cite{IN2020,IN2020+, IN2013++, HH2001, HW2017} and references therein. The main advantages of the present work in comparison with \cite{IN2020,IN2020+} are the following: \begin{itemize} \item We allow the "noise" in Problem \ref{Problem} to be from a larger space $\mathcal{L}^2_r$ defined by \eqref{def:L2r} in contrast with $\mathcal{L}^\infty$. \item We use the straightforward formulas \eqref{f:inverse}, \eqref{def:Fnc}, \eqref{eq:general} in place of the roundabout way that requires extrapolation of $\hat{v}$ from $B_r$ to a larger ball and leads to additional numerical issues. \end{itemize} On the other hand, the advantages of \cite{IN2020,IN2020+} in comparison with the present work include: explicit expressions for quantities like $\kappa_1$ and $\kappa_2$ in \eqref{eq:multi}; more advanced norms $\|\cdot\|$ for reconstruction errors like $v - v^{\delta}$ in \eqref{eq:multi}, where $\|\cdot\| = \|\cdot\|_{\mathcal{L}^2({\mathbb{R}}^d)}$ in \cite{IN2020} and $\|\cdot\| = \|\cdot\|_{\mathcal{H}^s({\mathbb{R}}^d)}$ with any $s \in (-\infty, \nu)$ in \cite{IN2020+}. The reason is purely due to the fact that the PSWFs theory is still less developed than the theory of Chebyshev polynomials and the classical Fourier transform theory. In connection with further developments in the PSWFs theory that would improve the results of the present work on Problem \ref{Problem}, see Remarks \ref{R21}, \ref{R22}, and \ref{R23} in Section~\ref{S:prolate}. Note also that the functions $(\psi_{j,c})_{j \in {\mathbb{N}}}$ for large $j$, yield a new example of exponential instability for Problem \ref{Problem} in dimension $d=1$. This instability behaviour follows from the properties of $\psi_{j,c}$ and $\mu_{j,c}$ recalled in Section~\ref{S:prolate} and the result formulated in Remark~\ref{R22}. However, known estimates for the derivatives of PSWFs do not allow yet to say that this example is more strong than the example constructed in \cite[Theorem 5.2]{IN2020}. The aforementioned possible developments in the PSWFs theory and further development of the approach of the present work to Problem \ref{Problem}, including its numerical implementation, will be addressed in further articles. The further structure of the paper is as follows. Some prilimary results are recalled in Section \ref{S:preliminaries}. In Section \ref{S:1D}, we prove our estimates in dimension $d=1$ modulo a technical lemma, namely, Lemma \ref{L:delta-mu}. In Section \ref{S:multi}, we prove Theorem~\ref{T1}, Theorem~\ref{T:multi} and Corollary~\ref{C:multi2} based on the results given in Sections \ref{S:preliminaries} and \ref{S:1D}. In Section \ref{S:detailed}, we prove Lemma \ref{L:delta-mu}. \section{Preliminaries}\label{S:preliminaries} In this section, we recall some known results on PSWFs and on the Radon transform that we will use in the proofs of Theorems \ref{T1} and \ref{T:multi}. 
In addition, we prove Lemma \ref{L:general} and give a stability estimate for the inverse Radon transform; see Lemma \ref{L:H-H}. \subsection{Prolate spheroidal wave functions}\label{S:prolate} In connection with the facts presented in this subsection we refer to \cite{Slepian1983, BK2017, BK2017+, RX2007, Wang2010, STR2006} and references therein. Originally, the prolate spheroidal wave functions $(\psi_{n,c})_{n \in {\mathbb{N}}}$ were discovered as the eigenfunctions of the following spectral problem: \begin{equation}\label{L-prob} \mathcal{L}_c \psi = \chi \psi, \qquad \psi \in C^2([-1,1]), \end{equation} where $\chi$ is the spectral parameter and \begin{align*} \mathcal{L}_c [\psi] := - \dfrac{d}{dx} \left[(1-x^2) \dfrac{d \psi}{ dx}\right] + c^2 x^2 \psi. \end{align*} We also consider the operator $\mathcal{Q}_c$ defined on $\mathcal{L}_2([-1,1])$ by \begin{equation} \mathcal{Q}_c[f](x):=\frac{c}{2\pi} \mathcal{F}_c^* \left[\mathcal{F}_c [f]\right](x)= \int_{-1}^1 \dfrac{\sin c(x-y)}{ \pi (x-y)} f(y) dy, \end{equation} where $\mathcal{F}_c^*$ is the conjugate operator to $\mathcal{F}_c$ defined by \eqref{def:Fc}. The prolate spheroidal wave functions $(\psi_{n,c})_{n \in {\mathbb{N}}}$ are eigenfunctions for problem \eqref{L-prob} and for both operators $\mathcal{F}_c$ and $\mathcal{Q}_c$. Let $(\chi_{n,c})_{n \in {\mathbb{N}}}$ denote the eigenvalues of problem \eqref{L-prob}. It is known that $(\chi_{n,c})_{n \in {\mathbb{N}}}$ are real, positive, simple, that is, one can assume that \[ 0<\chi_{n,c}<\chi_{n+1,c}, \qquad \text{for all } n \in {\mathbb{N}}. \] In addition, the following estimates hold: \begin{equation}\label{chi<chi} n(n+1) < \chi_{n,c}<n(n+1)+c^2. \end{equation} If $ \mu_{n,c} $ and $ \lambda_{n,c}$ are the corresponding eigenvalues of $\mathcal{F}_c$ and $\mathcal{Q}_c$, respectively, then \begin{equation}\label{eq:eigenrel} \begin{aligned} \mu_{n,c} = i^n \sqrt{\dfrac{2\pi}{c} \lambda_{n,c}} \quad \text{ and } \quad 1>\lambda_{n,c} > \lambda_{n+1,c}>0. \end{aligned} \end{equation} Furthermore, each $\lambda_{n,c}$ is non-decreasing with respect to $c$. Using also \cite[formula (6)]{BK2017}, we find that \begin{equation}\label{eq:biglam} \left\lfloor\frac{2c}{\pi}\right\rfloor-1\leqslant \Big|\{n \in {\mathbb{N}} \,:\, \lambda_{n,c}\geqslant 1/2\}\Big|\leqslant \left\lceil \frac{2c}{\pi}\right\rceil+1. \end{equation} where $\lfloor \cdot \rfloor$ and $\lceil \cdot \rceil$ denote the floor and the ceiling functions, respectively, and $|\cdot|$ is the number of elements. We also employ the following estimate from \cite[Corollary 3]{BK2017}: for $n \geqslant \max\{3, \frac{2c}{\pi}\}$, \begin{equation}\label{eigenestimate} A(n,c)^{-1} e^{-2\tilde n (\log \tilde{n}-\kappa)} \leqslant \lambda_{n,c} \leqslant A(n,c) e^{-2\tilde n (\log \tilde{n}-\kappa)}, \end{equation} where $\nu_1\geqslant 1$, $\nu_2,\nu_3 \geqslant 0$ are some fixed constants, \[ A(n,c):= \nu_1 n^{\nu_2} \left(\frac{c}{c+1}\right)^{-\nu_3} e^{(\pi c)^2/4n}. \] and \begin{equation}\label{def:kappa} \kappa := \log \left( \dfrac{ec}{4}\right), \qquad \tilde{n} = \tilde{n}(n):= n+\dfrac 12. 
\end{equation} \begin{Remark}\label{R21} Apparently, proceeding from the approach of \cite{BK2017}, one can give explicit values for the constants $\nu_1$, $\nu_2$, $\nu_3$ in the expression for $A(n,c).$ \end{Remark} \noindent We also recall from \cite[formula (11)]{STR2006} that, for all $n \in {\mathbb{N}}$ and $c>0$, \begin{equation}\label{norm:Linfty} \max_{0 \leqslant j \leqslant n} \max_{|x|\leqslant 1} |\psi_{j,c}(x)|\leqslant 2\sqrt{n}. \end{equation} \begin{Remark}\label{R22} Proceeding from \eqref{L-prob}, \eqref{chi<chi}, and \eqref{norm:Linfty}, one can show that, for any $m\in {\mathbb{N}}$, \[ \|\psi_{n,c}\|_{C^m[-1,1]} = O ( n^{2m+1/2}) \qquad \text{as $n \rightarrow \infty$.} \] \end{Remark} Next, we recall results on the spectral approximation by PSWFs in Sobolev-type spaces; see \cite{Wang2010}. For a real $\nu\geqslant 0$, let \begin{equation} \widetilde\mathcal{H}^\nu_c([-1,1]):= \left\{ f \in \mathcal{L}^2([-1,1]) \,:\, \|f\|_{\widetilde\mathcal{H}^\nu_c} <\infty \right\}, \end{equation} where \begin{align*} \|f\|_{\widetilde\mathcal{H}^\nu_c([-1,1])} := \left(\sum_{n\in {\mathbb{N}}} (\chi_{n,c})^{\nu} |\hat{f}_{n,c}|^2\right)^{1/2} \qquad \hat{f}_{n,c}:= \int_{-1}^1 \psi_{n,c} (y) f(y)dy. \end{align*} Recall from \eqref{def:pi_n} that \[ \pi_{n}[f]= \sum_{j =0 }^n \hat{f}_{j,c}\psi_{j,c}(x), \qquad n\in {\mathbb{N}}. \] Note that $\pi_{n}[f] \rightarrow f$ as $n \rightarrow \infty$ since $(\psi_{j,c}(x))_{j \in {\mathbb{N}}}$ form an orthonormal basis in $\mathcal{L}^2([-1,1])$. Furthermore, for any $0 \leqslant \mu \leqslant \nu$, \begin{equation}\label{eq:pi1} \left\|f-\pi_{n}[f] \right\|_{\widetilde\mathcal{H}^\mu_c([-1,1])} \leqslant n^{\mu-\nu} \|f\|_{\widetilde\mathcal{H}^\nu_c([-1,1])}. \end{equation} The standard Sobolev space $\mathcal{H}^\nu[(-1,1)]$ is embedded in $\widetilde\mathcal{H}^\nu_c([-1,1])$. In fact, we have that \begin{equation}\label{eq:pi2} \|f\|_{\widetilde\mathcal{H}^\nu_c[(-1,1)]} \leqslant C(1+c^2)^{\nu/2} \|f\|_{\mathcal{H}^\nu([-1,1])}, \end{equation} where $C$ is a constant independent of $c$ and $f$ assuming that $c \geqslant c_0>0$. \begin{Remark}\label{R23} Proceeding from the results of \cite{Wang2010}, one can obtain an explicit estimate for the constant $C=C(c_0, \nu)$ in \eqref{eq:pi2}. Besides, one can establish an upper bound for $\|\varphi f\|_{\mathcal{H}^\nu ([-1,1])}$ in terms of $\| f \|_{\widetilde{\mathcal{H}}^\nu_c([-1,1])}$, for fixed $\nu>0$, where $\varphi$ is a smooth real-valued function appropriately vanishing at the ends of the interval $[-1,1]$ and non-vanishing elsewhere. \end{Remark} \subsection{Proof of Lemma \ref{L:general}}\label{S:Lemma} First, we observe that \[\mathcal{F}_{n,c}^{-1}[\mathcal{F}_c[f] ] = \pi_n [f].\] Using also the linearity of $\mathcal{F}_{n,c}^{-1}$, we derive \[ f - \mathcal{F}_{n,c}^{-1}[w] = f -\pi_n [f] + \mathcal{F}_{n,c}^{-1}[\mathcal{F}_c[f] ] - \mathcal{F}_{n,c}^{-1}[w] = f - \pi_n [f] + \mathcal{F}_{n,c}^{-1} [u], \] where $u := \mathcal{F}_c[f] - w$. Therefore, \[ \left\| f - \mathcal{F}_{n,c}^{-1}[w]\right\|_{\mathcal{L}^2([-1,1])} \leqslant \left\|\mathcal{F}_{n,c}^{-1} [u]\right\|_{\mathcal{L}^2([-1,1])}+ \left\|f - \pi_n [f] \right\|_{\mathcal{L}^2([-1,1])} . \] Due to \eqref{eq:eigenrel}, we have that $|\mu_{j,c}| \geqslant |\mu_{n,c}|$ for all $j \leqslant n$. 
Using also the orthonormality of the basis $(\psi_{j,c})_{j \in {\mathbb{N}}}$ in $\mathcal{L}^2([-1,1])$, we estimate \begin{align*} \| \mathcal{F}_{n,c}^{-1} [u] \|_{\mathcal{L}^2([-1,1])}^2 &= \left\| \sum_{j =0}^n \dfrac{1}{\mu_{j,c}}\psi_{j,c}(\cdot) \int_{-1}^1 \psi_{j,c} (x) u(x)dx \right\|_{\mathcal{L}^2([-1,1]) }^2 \\ & = \sum_{j =0}^n \dfrac{1}{|\mu_{j,c}|^2} \left\| \psi_{j,c}(\cdot) \int_{-1}^1 \psi_{j,c} (x) u(x)dx \right\|_{\mathcal{L}^2([-1,1]) }^2 \\ & \leqslant \dfrac{1}{|\mu_{n,c}|^2} \sum_{j =0}^n \left\| \psi_{j,c}(\cdot) \int_{-1}^1 \psi_{j,c} (x) u(x)dx \right\|_{\mathcal{L}^2([-1,1]) }^2 \\ &\leqslant \dfrac{1}{|\mu_{n,c}|^2} \sum_{j =0}^\infty \left\| \psi_{j,c}(\cdot) \int_{-1}^1 \psi_{j,c} (x) u(x)dx \right\|_{\mathcal{L}^2([-1,1]) }^2 = \left(\dfrac{ \|u\|_{ \mathcal{L}^2([-1,1])} }{|\mu_{n,c}|}\right)^2. \end{align*} Recalling that $ \|u\|_{ \mathcal{L}^2([-1,1])} \leqslant \delta$ (by assumptions) and combining the formulas above, we complete the proof. \subsection{Radon Transform} The Radon transform $\mathcal{R}$ defined in \eqref{def:R} arises in various domains of pure and applied mathematics. Since Radon's work \cite{Radon}, this transform and its applications received significant attention and its properties are well studied; see, for example, \cite{Naterrer2001} and references therein. In particular, the Radon transform $\mathcal{R}[v]$ is closely related to the Fourier transform $\hat v $ (see \eqref{eq:Fourier}) via the following formula: \begin{equation}\label{eq:projection} \hat{v} (s \theta ) = \dfrac{1}{(2\pi)^{d}} \int_{-\infty}^{\infty} e^{i st} \mathcal{R} [v] (t,\theta) dt, \qquad s \in {\mathbb{R}}, \ \theta \in \mathbb{S}^{d-1}. \end{equation} In the theory of Radon transform, formula \eqref{eq:projection} is known as the projection theorem. Note that one can define the inverse transform $\mathcal{R}^{-1}$ by combining \eqref{eq:projection} with inversion formulas for the Fourier transform: \begin{equation}\label{inv:radon} \begin{aligned} \mathcal{R}^{-1} [u](q)&:= \dfrac{1}{(2\pi)^{d-1}} \int_{\mathbb{S}^{d-1}} \int_{0}^{+\infty} e^{-is \theta q} \hat{u}(s, \theta) s^{d-1} ds\, d\theta, \qquad q\in {\mathbb{R}}^d, \\ \hat{u}(s, \theta) &:= \dfrac{1}{2\pi} \int_{\mathbb{R}} e^{ist} u(t, \theta) dt, \qquad \qquad s \in {\mathbb{R}}, \ \theta \in \mathbb{S}^{d-1}. \end{aligned} \end{equation} For other inversion formulas for $\mathcal{R}$; see \cite{Radon} and, for example, \cite[Section II.2]{Naterrer2001}. For real $\nu$, let \begin{align*} \mathcal{H}^{\nu}({\mathbb{R}}^d)&:= \{v \,:\, \|v\|_{\mathcal{H}^{\nu}({\mathbb{R}}^d)} <\infty\}, \\ \|v\|_{\mathcal{H}^{\nu}({\mathbb{R}}^d)}&:= \left(\int_{{\mathbb{R}}^{d}} (1+p^2)^{\nu} |\hat{v}(p)|^2 dp\right)^{1/2}, \\ \mathcal{H}^{\nu}({\mathbb{R}} \times \mathbb{S}^{d-1} ) &:= \{u \,:\, \|u \|_{\mathcal{H}^\nu({\mathbb{R}} \times \mathbb{S}^{d-1} )}<\infty\},\\ \|u \|_{\mathcal{H}^\nu({\mathbb{R}} \times \mathbb{S}^{d-1} )} &:= \left(\int_{\mathbb{S}^{d-1}} \int_{-\infty}^{+\infty} (1+s^2)^{\nu}\, |\hat{u}(s,\theta)|^2 ds\, d\theta\right)^{1/2}, \end{align*} where $v$, $u$ are distributions on ${\mathbb{R}}^{d}$ and ${\mathbb{R}} \times \mathbb{S}^{d-1}$, respectively. 
According to \cite[Theorem 5.1]{Naterrer2001}, if $v \in \mathcal{H}^{\nu}({\mathbb{R}}^{d})$ and $\operatorname{supp} v \subset B_1$ then \begin{equation}\label{eq:H-H} a(\nu,d)\|v\|_{\mathcal{H}^{\nu}({\mathbb{R}}^{d})} \leqslant \|\mathcal{R}[v]\|_{\mathcal{H}^{\nu+ (d-1)/2}({\mathbb{R}} \times \mathbb{S}^{d-1} ) } \leqslant b (\nu,d) \|v\|_{\mathcal{H}^{\nu}({\mathbb{R}}^{d})}. \end{equation} In addition, one can recover explicit expressions for $a(\nu,d)$ and $b(\nu,d)$ from the proof of \cite[Theorem 5.1]{Naterrer2001}. We will also use the following result generalizing the left inequality in \eqref{eq:H-H}. \begin{Lemma}\label{L:H-H} Let $u \in \mathcal{H}^{\nu+(d-1)/2}({\mathbb{R}} \times \mathbb{S}^{d-1})$, $\operatorname{supp} u \subseteq [-1,1]\times \mathbb{S}^{d-1}$, and $u(s,\theta) = u(-s,-\theta)$ for all $(s,\theta)\in {\mathbb{R}} \times \mathbb{S}^{d-1}$. Then, \[ a(\nu,d) \|v\|_{\mathcal{H}^{\nu}({\mathbb{R}}^{d})} \leqslant \|u\|_{\mathcal{H}^{\nu +(d-1)/2}({\mathbb{R}} \times \mathbb{S}^{d-1} ) }, \] where $v:=\mathcal{R}^{-1}[u]$ is defined by \eqref{inv:radon} and $a(\nu,d)$ is the same as in \eqref{eq:H-H}. \end{Lemma} In fact, the proof of Lemma \ref{L:H-H} is identical to the arguments of \cite[Theorem 5.1]{Naterrer2001} for the left inequality in \eqref{eq:H-H}. In addition, we use also that $u = \mathcal{R}[v]$. Note that $v$ defined by \eqref{inv:radon} might not be compactly supported; see, for example, \cite{Novikov2002} for the asymptotic analysis of $\mathcal{R}^{-1}[u]$ at infinity. \section{Stability estimates in 1D}\label{S:1D} The main result of this section is the following theorem. \begin{Theorem}\label{T:detailed} Let $f,w \in \mathcal{L}^2([-1,1])$ and $ \| \mathcal{F}_c [f] - w\|_{\mathcal{L}^2} \leqslant \delta $ for some $\delta \in (0,1)$. Suppose that $f \in \mathcal{H}^\nu([-1,1])$, $\nu> 0$. Then, \begin{equation}\label{eq:detailed} \begin{aligned} \|f - \mathcal{F}_{n^*,c}^{-1} [w]\|_{\mathcal{L}^2([-1,1])} \leqslant \gamma_1 c^{-\gamma_2} (1+c)^{\gamma_3} (1+\rho)^{\gamma_4} \exp\left(\dfrac{\pi^2c \log(1+\rho) }{2 e \rho} \right) \delta^{1-\alpha} \\ + C(1+c^2)^{\nu/2} \|f\|_{\mathcal{H}^\nu([-1,1])} \left( 2+ \dfrac{ec}{4}\cdot \dfrac{\rho }{\log(1+\rho)} \right)^{-\nu}, \end{aligned} \end{equation} where $\alpha \in (0,1)$, $\rho = \dfrac{4}{ec} \alpha \log (\delta^{-1})$, $ n^* = n^*(c,\alpha,\delta) $ is defined by \eqref{def:n-star}, $C$ is the constant from \eqref{eq:pi2}, and $\gamma_1, \gamma_2, \gamma_3,\gamma_4$ are some positive constants independent of $c$, $\alpha$, $\delta$. \end{Theorem} Theorem \ref{T:detailed} follows directly by combining estimate \eqref{eq:pi1} with $\mu =0$, estimate \eqref{eq:pi2}, Lemma~\ref{L:general}, and the following lemma. \begin{Lemma}\label{L:delta-mu} Let $c$, $\alpha$, $\delta$, $\rho$, $n^*$ be the same as in Theorem \ref{T:detailed}. Then \begin{equation}\label{delta-mu} \dfrac{\delta}{|\mu_{n^*,c}|} \leqslant \gamma_1 c^{-\gamma_2} (1+c)^{\gamma_3} (1+\rho)^{\gamma_4} \exp\left(\frac{\pi^2c \log(1+\rho) }{2 e \rho} \right) \delta^{1-\alpha}. \end{equation} \end{Lemma} We prove Lemma \ref{L:delta-mu} in Section \ref{S:detailed}. The proof of Lemma \ref{L:delta-mu} is based on two additional technical lemmas, namely, Lemma~\ref{L:tau} and Lemma~\ref{L:ass1}. Theorem \ref{T:detailed} implies the following corollary, which is equivalent to Theorem~\ref{T:multi} in dimension $d=1$. This corollary is also crucial for our considerations for $d\geqslant 2$ given in Section \ref{S:multi}. 
\begin{Corollary}\label{C:main} Let $f,w \in \mathcal{L}^2([-1,1])$ and $ \| \mathcal{F}_c [f] - w\|_{\mathcal{L}^2} \leqslant \delta M $ for some $\delta \in (0,1)$ and $M>0$. Suppose that $f \in \mathcal{H}^\nu([-1,1])$, $\nu> 0$. Let $\alpha \in (0,1)$ and $n^*$ be defined by \eqref{def:n-star}. Then, for any $\beta \in (0, 1-\alpha)$ and any $\mu \in (0,\nu)$, \[ \|f - \mathcal{F}_{n^*,c}^{-1} [w]\|_{\mathcal{L}^2([-1,1])} \leqslant C_1 M \delta^{\beta} + C_2 \|f\|_{\mathcal{H}^\nu([-1,1])} \left( \log \delta^{-1}\right)^{-\mu}, \] where $C_1 = C_1(c,\alpha,\beta)>0$ and $C_2 = C_2(c,\alpha,\nu,\mu)>0$. \end{Corollary} \begin{proof} It is sufficient to prove Corollary \ref{C:main} for the case $M=1$. The case $M \neq 1$ is reduced to $M=1$ by scaling $f \rightarrow \tilde{f} = f/M$ and $w \rightarrow \tilde{w} = w/M$. Therefore, it remains to show that, under the assumptions of Theorem \ref{T:detailed}, the following estimate holds for any $\beta \in (0, 1-\alpha)$ and any $\mu \in (0,\nu)$: \begin{equation}\label{eq:inter} \|f - \mathcal{F}_{n^*,c}^{-1} [w]\|_{\mathcal{L}^2([-1,1])} \leqslant C_1 \delta^{\beta} + C_2 \|f\|_{\mathcal{H}^\nu([-1,1])} \left( \log \delta^{-1}\right)^{-\mu} , \end{equation} where $C_1 = C_1(c,\alpha,\beta)>0$ and $C_2 = C_2(c,\alpha,\nu,\mu)>0$. Under our assumptions, we have that: \begin{align*} \rho = \dfrac{4}{ec} \alpha \log (\delta^{-1}) >0; \qquad \dfrac{\pi^2c \log(1+\rho) }{2 e \rho} \leqslant \dfrac{\pi^2c}{ 2e}; \end{align*} and, for some positive constants $m_1 = m_1 (c, \alpha,\beta, \gamma_4)$ and $m_2 = m_2( c, \alpha, \nu, \mu) $, \begin{align*} (1+\rho)^{\gamma_4} \delta^{1-\alpha } &\leqslant m_1 \delta^{\beta}; \\ \left( 2+ \dfrac{ec}{4}\cdot \dfrac{\rho }{\log(1+\rho)} \right)^{-\nu} &\leqslant m_2 \left( \log \delta^{-1}\right)^{-\mu}. \end{align*} Applying these estimates in \eqref{eq:detailed}, we derive \eqref{eq:inter} with \[ C_1 = \gamma_1 c^{-\gamma_2} (1+c)^{\gamma_3} e^{\pi^2c /(2e)}m_1, \qquad C_2 = C (1+c^2)^{\nu/2}m_2. \] This completes the proof of Corollary \ref{C:main}. \end{proof} \section{Multidimensional reconstruction}\label{S:multi} In this section, we prove Theorems \ref{T1} and \ref{T:multi}. \subsection{Proof of Theorem \ref{T1}}\label{S:T1} Let $\mathcal{R}[v]$ be the Radon transform of $v$; see formula \eqref{def:R}. Since $\operatorname{supp} v \subset B_\sigma$, we have that \begin{equation}\label{Rv=0} \mathcal{R} [v] (t,\theta) = 0 \text{ for $|t|>\sigma$.} \end{equation} Therefore, we only need to integrate over $t \in [-\sigma,\sigma]$ in \eqref{eq:projection}. Then, using the change of variables $s= r x$, $t= \sigma y$ and recalling $c = r\sigma$, we get that \begin{equation}\label{eq:grtheta} g_{r,\theta} (x) = \left(\dfrac{2\pi}{\sigma}\right)^d \hat{v} (r x \theta ) = \dfrac{1}{\sigma^{d-1}} \int_{-1}^1 e^{i c x y } \mathcal{R} [v] (\sigma y,\theta) dy, \qquad x \in [-1,1]. \end{equation} Using \eqref{Rv=0}, \eqref{eq:grtheta} and recalling the definitions of $\mathcal{F}_c$ and $f_{r, \sigma}$, we obtain that \begin{equation}\label{eq:eq:last} \mathcal{R} [v] (\sigma y,\theta) = \sigma^{d-1} \mathcal{F}_c^{-1} [g_{r,\theta}] (y) = \sigma^{d-1} f_{r, \sigma}(y, \theta). \end{equation} Let \begin{equation}\label{def:vsigma} v_{\sigma}(q) := v(\sigma q), \qquad q \in {\mathbb{R}}^d. 
\end{equation} Using \eqref{def:R} and the change of variables $q = \sigma q'$, we find that \[ \mathcal{R} [v] (\sigma y,\theta) = \int_{q\in {\mathbb{R}}^d \,:\, q \theta =\sigma y } v(q) dq = \sigma^{d-1}\int_{q'\in {\mathbb{R}}^d \,:\, q' \theta =y } v(\sigma q') dq' = \sigma^{d-1} \mathcal{R} [v_{\sigma}](y, \theta). \] Thus, also using \eqref{eq:eq:last}, we get \begin{equation}\label{eq:vf_sigma} \mathcal{R} [v_{\sigma}] = f_{r, \sigma}. \end{equation} Applying the inverse Radon transform and formula \eqref{def:vsigma} completes the proof. \subsection{Proof of Theorem \ref{T:multi}}\label{S:multi} We will repeatedly use the following bounds for the Sobolev norm with respect to the argument scaling. \begin{Lemma}\label{L:Snorm} Let $\mathfrak{v} \in \mathcal{H}^{\eta}({\mathbb{R}}^d)$ for some $\eta \in {\mathbb{R}}$. Then, for any $\sigma>0$, \begin{align*} \dfrac{\sigma^{\eta - d/2}}{ (1+\sigma)^{\eta}} \|\mathfrak{v} \|_{\mathcal{H}^{\eta}({\mathbb{R}}^d)} \leqslant \|\mathfrak{v}_{\sigma}\|_{\mathcal{H}^{\eta}({\mathbb{R}}^d)} \leqslant \dfrac{(1+\sigma)^{\eta}}{\sigma^{d/2}} \|\mathfrak{v}\|_{\mathcal{H}^{\eta}({\mathbb{R}}^d)}, \qquad \text{for $\eta \geqslant 0$}, \\ \dfrac{(1+\sigma)^{\eta}}{\sigma^{d/2}} \|\mathfrak{v} \|_{\mathcal{H}^{\eta}({\mathbb{R}}^d)} \leqslant \|\mathfrak{v}_{\sigma}\|_{\mathcal{H}^{\eta}({\mathbb{R}}^d)} \leqslant \dfrac{\sigma^{\eta - d/2}}{ (1+\sigma)^{\eta}} \|\mathfrak{v}\|_{\mathcal{H}^{\eta}({\mathbb{R}}^d)}, \qquad \text{for $\eta \leqslant 0$}, \end{align*} where $\mathfrak{v}_\sigma$ is defined by $\mathfrak{v}_\sigma (q): = \mathfrak{v}(\sigma q)$, $q \in {\mathbb{R}}^d$. \end{Lemma} \begin{proof} Recall that \begin{align*} \|\mathfrak{v}\|_{\mathcal{H}^{\eta}({\mathbb{R}}^d)}^2 &= \int_{{\mathbb{R}}^{d}} (1+ p^2)^{\eta} |\hat{\mathfrak{v}}( p)|^2 dp,\\ \|\mathfrak{v}_\sigma \|_{\mathcal{H}^{\eta}({\mathbb{R}}^d)}^2 &= \int_{{\mathbb{R}}^{d}} (1+p^2)^{\eta} |\hat{\mathfrak{v}}_\sigma(p)|^2 dp \\ &= \sigma^{-2d} \int_{{\mathbb{R}}^{d}} (1+p^2)^{\eta} |\hat{ \mathfrak{v}}( p/\sigma)|^2 dp \\ &= \sigma^{-d} \int_{{\mathbb{R}}^{d}} (1+ (\sigma p')^2)^{\eta} |\hat{\mathfrak{v}}( p')|^2 dp'. \end{align*} Bounding \begin{align*} \min \{1, \sigma^{2\eta}\} \leqslant \dfrac{ (1+ (\sigma p')^2)^{\eta}}{ (1+ (p')^2)^{\eta}} \leqslant \max \{1, \sigma^{2\eta}\}, \end{align*} we derive that \[ \min\{1,\sigma^{2\eta}\} \|\mathfrak{v}\|_{\mathcal{H}^{\eta}({\mathbb{R}}^d)}^2 \leqslant \sigma^d \|\mathfrak{v}_\sigma \|_{\mathcal{H}^{\eta}({\mathbb{R}}^d)}^2 \leqslant \max \{1,\sigma^{2\eta}\} \|\mathfrak{v}\|_{\mathcal{H}^{\eta}({\mathbb{R}}^d)}^2. \] To complete the proof, it remains to observe that \begin{align*} \max \{1, \sigma^{2\eta}\} &\leqslant \begin{cases} (1+\sigma)^{2\eta}, &\text{if } \eta \geqslant 0,\\ \dfrac{\sigma^{2\eta}}{(1+\sigma)^{2\eta}}, &\text{if } \eta \leqslant 0, \end{cases} \\ \min \{1, \sigma^{2\eta}\} &\geqslant \begin{cases} \dfrac{\sigma^{2\eta}}{(1+\sigma)^{2\eta}} , &\text{if } \eta \geqslant 0,\\ (1+\sigma)^{2\eta} , &\text{if } \eta \leqslant 0. \end{cases} \end{align*} \end{proof} Now we are ready to prove Theorem \ref{T:multi}. Let $v_{\sigma}$ be defined by \eqref{def:vsigma} and \[ v_{\sigma}^{\delta}(q) := v^{\delta}(\sigma q), \qquad q \in {\mathbb{R}}^d. 
\] Applying Lemma \ref{L:Snorm} with $\mathfrak{v} = v - v^{\delta}$ and $\eta = - \frac{d-1}{2}$, we find that \begin{equation}\label{eq1} \|v- v^{\delta}\|_{\mathcal{H}^{-(d-1)/2}({\mathbb{R}}^d)} \leqslant (1+\sigma)^{(d-1)/2} \sigma^{d/2} \|v_{\sigma}-v^{\delta}_{\sigma}\|_{\mathcal{H}^{-(d-1)/2}({\mathbb{R}}^d)}. \end{equation} Using the formulas for $ f_{r, \sigma}$ and $ u_{r,\sigma}$ of Theorems \ref{T1} and \ref{T:multi}, we find that \[ v_{\sigma} - v_\sigma^\delta = \mathcal{R}^{-1} [f_{r, \sigma} - u_{r,\sigma}]. \] Note also that both $f_{r, \sigma}$ and $ u_{r,\sigma}$ are supported in $[-1,1]\times \mathbb{S}^{d-1}$. Applying Lemma \ref{L:H-H} for $u = f_{r, \sigma} - u_{r, \sigma}$, we get that \begin{equation}\label{eq2} \|v_{\sigma}-v^{\delta}_{\sigma}\|_{\mathcal{H}^{-(d-1)/2}({\mathbb{R}}^d)} \leqslant \dfrac{1}{a} \|f_{r, \sigma} - u_{r, \sigma}\|_{\mathcal{L}^2({\mathbb{R}}\times \mathbb{S}^{d-1})}, \end{equation} where $a = a(-\tfrac{d-1}{2},d)$ is the constant from \eqref{eq:H-H}. Observe that \begin{equation}\label{eq25} \|f_{r, \sigma} - u_{r, \sigma}\|_{\mathcal{L}^2({\mathbb{R}}\times \mathbb{S}^{d-1})}^2 = \int_{\mathbb{S}^{d-1}} \|f_{r, \sigma}(\cdot, \theta) - u_{r,\sigma}(\cdot, \theta)\|_{\mathcal{L}^2([-1,1])}^2 d \theta. \end{equation} Applying Corollary~\ref{C:main} with functions $f = f_{r, \sigma}(\cdot, \theta)$ and $w =w_{r,\theta}$, we obtain that, for any $\mu \in (0,\nu+\tfrac{d-1}{2})$ and almost all $\theta \in \mathbb{S}^{d-1}$, \begin{equation}\label{eq26} \begin{aligned} &\|f_{r, \sigma}(\cdot, \theta) - u_{r,\sigma}(\cdot, \theta)\|_{\mathcal{L}^2([-1,1])} \leqslant C_1 M(\theta) \delta^{\beta} + C_2 H(\theta) \left( \log \delta^{-1}\right)^{-\mu}, \\&M(\theta):= \dfrac{1}{\delta} \|g_{r, \theta} - w_{r,\theta}\|_{\mathcal{L}^2([-1,1])}, \\ &H(\theta):= \|f_{r, \sigma}(\cdot, \theta)\|_{\mathcal{H}^{\nu+(d-1)/2}([-1,1])}, \end{aligned} \end{equation} where $f_{r,\sigma}$, $g_{r, \theta}$ and $ w_{r,\theta}$ are defined in Theorems \ref{T1} and \ref{T:multi}, $C_1 $ and $C_2$ are the constants of Corollary~\ref{C:main} with $\nu+ \frac{d-1}{2}$ in place of $\nu$. Here, the assumption of Corollary~\ref{C:main} that \[ \|\mathcal{F}_c[f_{r, \sigma}(\cdot, \theta)] -w_{r,\theta}\|_{\mathcal{L}^2([-1,1])} \leqslant \delta M(\theta) \] is fulfilled automatically, since $f_{r, \sigma}(\cdot, \theta) \equiv \mathcal{F}_c^{-1}[g_{r, \sigma}]$ on $[-1,1]$ by definition. In fact, the functions $M$, $H$ belong to $\mathcal{L}^2(\mathbb{S}^{d-1})$; see formulas \eqref{eq4} and \eqref{eq45} below. Combining formulas \eqref{eq25}, \eqref{eq26} and the Cauchy--Schwarz inequality \[ \int_{\mathbb{S}^{d-1}} H(\theta) M(\theta) d \theta \leqslant \|M\|_{\mathcal{L}^2(\mathbb{S}^{d-1})} \|H\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}, \] we get that \begin{equation}\label{eq3} \begin{aligned} \|f_{r, \sigma} - u_{r, \sigma}\|_{\mathcal{L}^2({\mathbb{R}}\times \mathbb{S}^{d-1})}^2 &\leqslant \int_{\mathbb{S}^{d-1}} \left(C_1 M(\theta) \delta^{\beta} + C_2 H(\theta) \left( \log \delta^{-1}\right)^{-\mu}\right)^2 d \theta \\ &\leqslant \left( C_1 \|M\|_{\mathcal{L}^2(\mathbb{S}^{d-1})} \delta^{\beta} + C_2 \|H\|_{\mathcal{L}^2(\mathbb{S}^{d-1})} \left( \log \delta^{-1}\right)^{-\mu} \right)^2. \end{aligned} \end{equation} Next, we estimate $ \|M\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}$ and $\|H\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}$. 
Since $\|w-\hat{v}\|_r \leqslant \delta N$, we get \begin{equation}\label{eq4} \begin{aligned} \|M\|_{\mathcal{L}^2(\mathbb{S}^{d-1})}^2 &= \int_{\mathbb{S}^{d-1}} \dfrac{1}{\delta^2} \|g_{r,\theta} - w_{r,\theta}\|_{\mathcal{L}^2 ([-1,1])}^2 d\theta \\&= \dfrac{1}{r\delta^2 } \left(\dfrac{2\pi}{\sigma}\right)^{2d} \int_{\mathbb{S}^{d-1}} \int_{-r}^r |w(s\theta) - \hat{v}(s\theta) |^2 ds \, d \theta. \\ &= \dfrac{2}{r\delta^2 } \left(\dfrac{2\pi}{\sigma}\right)^{2d} \|w - \hat{v}\|_r^2 \leqslant \dfrac{2}{r } \left(\dfrac{2\pi}{\sigma}\right)^{2d} N^2. \end{aligned} \end{equation} In addition, using \eqref{eq:vf_sigma}, we get \begin{equation}\label{eq45} \|H\|_{\mathcal{L}^2(\mathbb{S}^{d-1})} = \|f_{r,\sigma}\|_{\mathcal{H}^{\nu+ (d-1)/2}({\mathbb{R}} \times \mathbb{S}^{d-1} ) } = \|\mathcal{R}[v_\sigma]\|_{\mathcal{H}^{\nu+ (d-1)/2}({\mathbb{R}} \times \mathbb{S}^{d-1} ) }. \end{equation} Using formula \eqref{eq45}, the right inequality of \eqref{eq:H-H}, and applying Lemma \ref{L:Snorm} with $\mathfrak{v} = v$ and $\eta = \nu$, we obtain that \begin{equation}\label{eq5} \|H\|_{\mathcal{L}^2(\mathbb{S}^{d-1})} \leqslant b \|v_\sigma\|_{\mathcal{H}^{\nu}({\mathbb{R}}^d)} \leqslant b \dfrac{(1+\sigma)^{\nu}}{\sigma^{d/2}} \|v\|_{\mathcal{H}^{\nu}({\mathbb{R}}^d)}, \end{equation} where $b= b(\nu,d)$ is the constant from \eqref{eq:H-H}. Combining \eqref{eq1} -- \eqref{eq5}, we derive the required bound \eqref{eq:multi} with \begin{align*} \kappa_1 :=\frac{\sqrt{2} (2\pi)^d (1+\sigma)^{(d-1)/2} C_1 }{a \sigma^{\frac{d}{2}} \sqrt{r}}, \qquad \kappa_2 := \frac{ b}{a } (1+\sigma)^{\nu + (d-1)/2} C_2. \end{align*} \section{Proof of Lemma \ref{L:delta-mu}}\label{S:detailed} To prove Lemma \ref{L:delta-mu}, we need two additional technical results given below. \begin{Lemma}\label{L:tau} For any $\rho > 0$, the equation \begin{equation}\label{tau:eq} \tau \log \tau = \rho \end{equation} has the unique solution $\tau = \tau(\rho)>1$. Furthermore, \begin{equation}\label{rho:ineq} 1 \leqslant \dfrac{\rho}{\log (1+\rho)} \leqslant \tau(\rho) \leqslant 1+\rho. \end{equation} \end{Lemma} \begin{proof} Observe that $u_1(\tau) = \tau \log \tau$ is a strictly increasing continuous function on $[1,+\infty)$, $u_1(1) = 0$, and $u_1(\tau) \rightarrow +\infty$ as $\tau \rightarrow +\infty$. Then, by the intermediate value theorem, equation \eqref{tau:eq} has the unique solution $\tau(\rho) \in (0,+\infty)$ for any $\rho>0$. Next, note that $u_2(\tau) = \tau - \tau \log \tau$ is a strictly decreasing function on $[1,+\infty)$ since its derivative $u_2'(\tau) = -\log \tau$ is negative for $\tau>1$. Therefore, \[\tau(\rho) - \rho = \tau(\rho) - \tau(\rho) \log \tau(\rho) \leqslant u_2(1) = 1. \] Thus, we proved that $\tau(\rho) \leqslant 1 +\rho$. Then, we get $ \log (\tau(\rho) ) \leqslant \log(1+\rho) $ which implies the other bound \[ \tau(\rho) = \dfrac{\rho}{ \log (\tau(\rho) )} \geqslant \dfrac{\rho}{\log (1+ \rho)}. \] The remaining inequality $ \dfrac{\rho}{\log (1+ \rho)}\geqslant 1$ is equivalent to $e^{\theta}-1 \geqslant \theta$ with $\theta = \log (1+\rho)$. \end{proof} \begin{Lemma}\label{L:ass1} Let $\alpha, \delta \in (0,1)$ and $\tau $ be defined according to \eqref{tau:eq} with $\rho = \dfrac{4}{ec} \alpha \log (\delta^{-1})$. Then, for any $q\geqslant 0$, we have \[ e^{ \eta (\log \eta - \kappa)} \leqslant \left(\dfrac{4 \eta}{ c}\right)^{q} \delta^{-\alpha}, \] where $\kappa$ is defined according to \eqref{def:kappa} and $\eta = \eta(q,\alpha,\delta,c) := q+ \tau \dfrac{ec}{4}$. 
\end{Lemma} \begin{proof} First, observe that \begin{align*} \eta (\log \eta - \kappa) &= (q+ \tau \dfrac{ec}{4}) (\log \eta - \kappa) \\ &= q (\log \eta - \kappa) + \tau \dfrac{ec}{4} (\log (\tau \dfrac{ec}{4}) - \log (\dfrac{ec}{4}) + \log \eta - \log (\tau \dfrac{ec}{4})) \\ &= q (\log \eta - \kappa) + \tau\dfrac{ec}{4} \log \tau + \tau \dfrac{ec}{4} (\log \eta - \log (\tau \dfrac{ec}{4})). \end{align*} By the definition of $\tau$, we have that \[ \tau \dfrac{ec}{4} \log \tau = \alpha \log (\delta^{-1}).\] Besides, \[ \tau \dfrac{ec}{4} (\log \eta - \log (\tau \dfrac{ec}{4}) ) = \tau \dfrac{ec}{4} \log \left(1 + \dfrac{q }{\tau \tfrac{ec}{4}}\right) \leqslant q. \] Combining the formulas above and recalling the definition of $\kappa$, we derive that \[ \eta (\log \eta - \kappa) \leqslant q (\log \eta - \log (\dfrac{ec}{ 4})) + \alpha \log (\delta^{-1}) + q = q \log \left(\dfrac{4 \eta}{ c}\right) + \alpha \log (\delta^{-1}). \] The required bound follows by exponentiating the both sides of the last formula. \end{proof} Now, we are ready to prove Lemma \ref{L:delta-mu}. First, we combine formulas \eqref{eq:eigenrel} and \eqref{eigenestimate} to get \begin{equation}\label{mu_eq} |\mu_{n^*,c}|\geqslant \sqrt{\dfrac{2\pi}{ c\,A(n^*,c)}} e^{-\tilde n (\log \tilde{n}-\kappa)}, \end{equation} where $\tilde{n} = n^*+\dfrac12$. Note that \eqref{eigenestimate} requires $n^*\geqslant \max\left\{3, \dfrac {2c}{\pi}\right\}.$ The inequality $n^*\geqslant 3$ is immediate by the definition of $n^*$. In addition, using that $\tau>1$ by Lemma \ref{L:tau}, we can estimate \[ n^*\geqslant 2+ \tau \dfrac{ec}{4} \geqslant 2+ \dfrac{ec}{4} > \dfrac{ec}{4} > \dfrac {2c}{\pi}. \] Thus, we justified \eqref{mu_eq}. Using the inequalities $1\leqslant \tau \leqslant 1+ \rho$ from Lemma \ref{L:tau}, we estimate \begin{align*} n^* \leqslant 3 + \tau \dfrac{ec}{4} \leqslant 3(c+1)\tau \leqslant 3 (c+1) (1+\rho). \end{align*} Using the inequality $\tau \geqslant \dfrac{\rho}{ \log (1+\rho)}$ from Lemma \ref{L:tau}, we also find that \[ e^{(\pi c)^2/4n^*} \leqslant e^{ \pi^2 c / (e \tau) } \leqslant \exp\left(\dfrac{\pi^2 c \log(1+\rho)}{ e\rho}\right). \] Thus, we get that \begin{equation}\label{mu_eq1} \begin{aligned} A(n^*,c)&= \nu_1 (n^*)^{\nu_2} \left(\frac{c}{c+1}\right)^{-\nu_3} e^{(\pi c)^2/4n^*} \\&\leqslant \nu_1 3^{\nu_2} (c+1)^{\nu_2 - \nu_3} c^{-\nu_3}(1+\rho)^{\nu_2}\exp\left(\dfrac{\pi^2 c \log(1+\rho)}{ e\rho}\right). \end{aligned} \end{equation} Similarly as before, using the inequalities $1\leqslant \tau \leqslant 1+ \rho$ from Lemma \ref{L:tau}, we estimate \[ \tilde{n} \leqslant 3.5 + \tau \dfrac{ec}{4} \leqslant 3.5 (c+1)(1+\rho). \] Then, using Lemma \ref{L:ass1} with $q :=\tilde{n} - \tau \dfrac{ec}{4}$ and observing that $0\leqslant q\leqslant 3.5$, we find that \begin{equation}\label{mu_eq2} e^{\tilde n (\log \tilde{n}-\kappa)} \leqslant \left(\dfrac{4\tilde{n}}{c}\right)^{q}\delta^{-\alpha} \leqslant \left(\dfrac{14(c+1)}{c}\right)^{3.5} (1+\rho)^{3.5}\, \delta^{-\alpha}. \end{equation} Substituting the bounds of \eqref{mu_eq1} and of \eqref{mu_eq2} into \eqref{mu_eq}, we derive estimate \eqref{delta-mu} with \[ \gamma_1 = \sqrt{\dfrac{\nu_1 3^{\nu_2} } {2 \pi}} 14^{3.5},\qquad \gamma_2 = \dfrac{\nu_3}{2} +3, \qquad \gamma_3 = \dfrac{\nu_2-\nu_3}{2} + 3.5, \qquad \gamma_4 = \dfrac{\nu_2}{2} + 3.5. \] Note that if $\gamma_3 \leqslant 0$ then we can replace it with zero, since $(1+c)^{\gamma_3} \leqslant 1$ in this case. This completes the proof of Lemma \ref{L:delta-mu}. 
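For illustration purposes only, we conclude with a minimal numerical sketch (in Python/NumPy) of the one-dimensional regularized inverse $\mathcal{F}_{n,c}^{-1}$ used throughout this work. It is not part of the proofs above and it is not the numerical implementation announced in the introduction: here the PSWFs $\psi_{j,c}$ and the eigenvalues $\mu_{j,c}$ are approximated by diagonalizing a trapezoidal discretization of $\mathcal{Q}_c$ and by the quadratic form $\langle \psi_{j,c}, \mathcal{F}_c \psi_{j,c}\rangle$, and the truncation index is chosen by a crude threshold on $|\mu_{j,c}|$ as a stand-in for $n^*$ of \eqref{def:n-star}; the grid size, the value of $c$, the noise level and the test function below are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

c, N = 20.0, 600                       # bandwidth c = r*sigma, grid size (illustrative)
x, h = np.linspace(-1.0, 1.0, N, retstep=True)
w = np.full(N, h); w[0] = w[-1] = h / 2            # trapezoidal quadrature weights

# discretized Q_c with kernel sin(c(x-y))/(pi(x-y)); np.sinc handles x = y
K = (c / np.pi) * np.sinc(c * (x[:, None] - x[None, :]) / np.pi)
B = np.sqrt(w)[:, None] * K * np.sqrt(w)[None, :]  # symmetrized, so eigh applies
lam, U = np.linalg.eigh(B)
order = np.argsort(lam)[::-1]                      # lambda_0 > lambda_1 > ...
lam, U = lam[order], U[:, order]
psi = U / np.sqrt(w)[:, None]                      # psi[:, j] ~ psi_{j,c} on the grid

F = np.exp(1j * c * np.outer(x, x))                # discretized F_c
jmax = int(2 * c / np.pi) + 20
mu = np.array([(psi[:, j] * w) @ F @ (psi[:, j] * w) for j in range(jmax)])

def F_inv_trunc(g, n):
    # sum_{j <= n} mu_{j,c}^{-1} psi_{j,c} <psi_{j,c}, g>, cf. the definition of F_{n,c}^{-1}
    coeffs = (psi[:, :n + 1] * w[:, None]).T @ g
    return (psi[:, :n + 1] @ (coeffs / mu[:n + 1])).real

# demo: noisy data g = F_c[f] + noise of size ~delta, then truncated inversion
f = np.exp(-6 * x ** 2)
delta = 1e-3
g = F @ (f * w) + delta * (np.random.randn(N) + 1j * np.random.randn(N))
n_star = int(np.nonzero(np.abs(mu) > 10 * delta)[0].max())   # crude stand-in for n*
err = np.sqrt(np.sum(w * (f - F_inv_trunc(g, n_star)) ** 2))
print(n_star, err)
\end{verbatim}
Pushing the truncation index well beyond such a threshold makes the term $\delta/|\mu_{n,c}|$ of Lemma \ref{L:general} blow up, while truncating too early increases the approximation error $\|f - \pi_n[f]\|$; estimate \eqref{eq:detailed} quantifies this trade-off.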
\noindent { {\bf R.G. Novikov}\\ Centre de Math\'ematiques Appliqu\'ees, Ecole Polytechnique, 91128 Palaiseau, France\\ Institute of Earthquake Prediction Theory and Math. Geophysics RAS, 117997 Moscow, Russia\\ e-mail: \tt{[email protected]}}

\end{document}
CLRS Solutions

11.4 Open addressing

Consider inserting the keys $10, 22, 31, 4, 15, 28, 17, 88, 59$ into a hash table of length $m = 11$ using open addressing with the auxiliary hash function $h'(k) = k$. Illustrate the result of inserting these keys using linear probing, using quadratic probing with $c_1 = 1$ and $c_2 = 3$, and using double hashing with $h_1(k) = k$ and $h_2(k) = 1 + (k \mod (m - 1))$.

We use $T_t$ to denote the table after the $t$-th insertion. Each insertion starts with $i = 0$; if a collision occurs, we increase $i$ (from $i = 1$ up to $i = m - 1 = 10$) until an empty slot is found.

Linear probing:

$$ \begin{array}{r|ccccccccc}
h(k, i) = (k + i) \mod 11 & T_0 & T_1 & T_2 & T_3 & T_4 & T_5 & T_6 & T_7 & T_8 \\ \hline
0 \mod 11 & & 22 & 22 & 22 & 22 & 22 & 22 & 22 & 22 \\
1 \mod 11 & & & & & & & & 88 & 88 \\
2 \mod 11 & & & & & & & & & \\
3 \mod 11 & & & & & & & & & \\
4 \mod 11 & & & & 4 & 4 & 4 & 4 & 4 & 4 \\
5 \mod 11 & & & & & 15 & 15 & 15 & 15 & 15 \\
6 \mod 11 & & & & & & 28 & 28 & 28 & 28 \\
7 \mod 11 & & & & & & & 17 & 17 & 17 \\
8 \mod 11 & & & & & & & & & 59 \\
9 \mod 11 & & & 31 & 31 & 31 & 31 & 31 & 31 & 31 \\
10 \mod 11 & 10 & 10 & 10 & 10 & 10 & 10 & 10 & 10 & 10
\end{array} $$

Quadratic probing (the table is identical to linear probing until the first collision, which occurs when inserting the fifth key, $15$):

$$ \begin{array}{r|ccccccccc}
h(k, i) = (k + i + 3i^2) \mod 11 & T_0 & T_1 & T_2 & T_3 & T_4 & T_5 & T_6 & T_7 & T_8 \\ \hline
0 \mod 11 & & 22 & 22 & 22 & 22 & 22 & 22 & 22 & 22 \\
1 \mod 11 & & & & & & & & & \\
2 \mod 11 & & & & & & & & 88 & 88 \\
3 \mod 11 & & & & & & & 17 & 17 & 17 \\
4 \mod 11 & & & & 4 & 4 & 4 & 4 & 4 & 4 \\
5 \mod 11 & & & & & & & & & \\
6 \mod 11 & & & & & & 28 & 28 & 28 & 28 \\
7 \mod 11 & & & & & & & & & 59 \\
8 \mod 11 & & & & & 15 & 15 & 15 & 15 & 15 \\
9 \mod 11 & & & 31 & 31 & 31 & 31 & 31 & 31 & 31 \\
10 \mod 11 & 10 & 10 & 10 & 10 & 10 & 10 & 10 & 10 & 10
\end{array} $$

Note that with $c_1 = 1$ and $c_2 = 3$ the offsets $(i + 3i^2) \mod 11$ take only the six values $\{0, 2, 3, 4, 8, 10\}$ as $i$ ranges over $0, \ldots, 10$, so the probe sequence visits only six slots and an insertion can fail even though the table is not full. Here, however, $59$ (for which $59 \mod 11 = 4$) is still placed: it reaches the empty slot $7$ on the probe with $i = 2$.

Double hashing:

$$ \begin{array}{r|ccccccccc}
h(k, i) = (k + i(1 + k \mod 10)) \mod 11 & T_0 & T_1 & T_2 & T_3 & T_4 & T_5 & T_6 & T_7 & T_8 \\ \hline
0 \mod 11 & & 22 & 22 & 22 & 22 & 22 & 22 & 22 & 22 \\
1 \mod 11 & & & & & & & & & \\
2 \mod 11 & & & & & & & & & 59 \\
3 \mod 11 & & & & & & & 17 & 17 & 17 \\
4 \mod 11 & & & & 4 & 4 & 4 & 4 & 4 & 4 \\
5 \mod 11 & & & & & 15 & 15 & 15 & 15 & 15 \\
6 \mod 11 & & & & & & 28 & 28 & 28 & 28 \\
7 \mod 11 & & & & & & & & 88 & 88 \\
8 \mod 11 & & & & & & & & & \\
9 \mod 11 & & & 31 & 31 & 31 & 31 & 31 & 31 & 31 \\
10 \mod 11 & 10 & 10 & 10 & 10 & 10 & 10 & 10 & 10 & 10
\end{array} $$

Write pseudocode for $\text{HASH-DELETE}$ as outlined in the text, and modify $\text{HASH-INSERT}$ to handle the special value $\text{DELETED}$.

HASH-DELETE(T, k)
    i = 0
    repeat
        j = h(k, i)
        if T[j] == k
            T[j] = DELETED
            return j
        else i = i + 1
    until T[j] == NIL or i == m
    error "element not exist"

By implementing $\text{HASH-DELETE}$ in this way, $\text{HASH-INSERT}$ needs to be modified to treat $\text{DELETED}$ slots as empty ones.

HASH-INSERT(T, k)
    i = 0
    repeat
        j = h(k, i)
        if T[j] == NIL or T[j] == DELETED
            T[j] = k
            return j
        else i = i + 1
    until i == m
    error "hash table overflow"

Consider an open-address hash table with uniform hashing.
Give upper bounds on the expected number of probes in an unsuccessful search and on the expected number of probes in a successful search when the load factor is $3 / 4$ and when it is $7 / 8$.

For $\alpha = 3 / 4$: unsuccessful search, at most $\frac{1}{1 - \frac{3}{4}} = 4$ probes; successful search, at most $\frac{1}{\frac{3}{4}} \ln\frac{1}{1-\frac{3}{4}} \approx 1.848$ probes.

For $\alpha = 7 / 8$: unsuccessful search, at most $\frac{1}{1 - \frac{7}{8}} = 8$ probes; successful search, at most $\frac{1}{\frac{7}{8}} \ln\frac{1}{1 - \frac{7}{8}} \approx 2.377$ probes.

Suppose that we use double hashing to resolve collisions—that is, we use the hash function $h(k, i) = (h_1(k) + ih_2(k)) \mod m$. Show that if $m$ and $h_2(k)$ have greatest common divisor $d \ge 1$ for some key $k$, then an unsuccessful search for key $k$ examines $(1/d)$th of the hash table before returning to slot $h_1(k)$. Thus, when $d = 1$, so that $m$ and $h_2(k)$ are relatively prime, the search may examine the entire hash table. ($\textit{Hint:}$ See Chapter 31.)

Let $d = \gcd(m, h_2(k))$. The probe sequence visits the slots $(h_1(k) + ih_2(k)) \mod m$ for $i = 0, 1, 2, \ldots$, and it returns to slot $h_1(k)$ exactly when $m \mid ih_2(k)$. The smallest such positive $i$ is $\operatorname{lcm}(m, h_2(k)) / h_2(k) = \frac{m \cdot h_2(k) / d}{h_2(k)} = m / d$. Hence the offsets $ih_2(k) \mod m$ are periodic with period $m / d$, so the probe sequence examines only $m / d$ distinct slots, that is, a $(1/d)$th of the table, before returning to slot $h_1(k)$. When $d = 1$, the period is $m$ and the search may examine the entire table.

Consider an open-address hash table with a load factor $\alpha$. Find the nonzero value $\alpha$ for which the expected number of probes in an unsuccessful search equals twice the expected number of probes in a successful search. Use the upper bounds given by Theorems 11.6 and 11.8 for these expected numbers of probes.

$$ \begin{aligned} \frac{1}{1 - \alpha} & = 2 \cdot \frac{1}{\alpha} \ln\frac{1}{1 - \alpha} \\ \alpha & \approx 0.71533. \end{aligned} $$
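As a complement to the HASH-DELETE and HASH-INSERT pseudocode above, here is a small, self-contained Python sketch of open addressing with linear probing and a DELETED marker. It is illustrative only: the class name, the fixed table size, the use of a Python object() sentinel, and the demo keys are choices made for this example, not part of CLRS. It shows why searches must skip over DELETED slots while insertions may reuse them.

import unittest  # not required; plain asserts are used below


class OpenAddressTable:
    """Open addressing with linear probing and a DELETED sentinel (illustrative sketch)."""
    NIL = None
    DELETED = object()          # distinct sentinel, never equal to a real key

    def __init__(self, m=11):
        self.m = m
        self.slots = [self.NIL] * m

    def _probe(self, k, i):
        # linear probing with auxiliary hash h'(k) = k, as in the first exercise above
        return (k + i) % self.m

    def insert(self, k):
        for i in range(self.m):
            j = self._probe(k, i)
            # a DELETED slot may be reused by insertion
            if self.slots[j] is self.NIL or self.slots[j] is self.DELETED:
                self.slots[j] = k
                return j
        raise OverflowError("hash table overflow")

    def search(self, k):
        for i in range(self.m):
            j = self._probe(k, i)
            if self.slots[j] is self.NIL:      # stop at NIL, but keep probing past DELETED
                return None
            if self.slots[j] == k:
                return j
        return None

    def delete(self, k):
        for i in range(self.m):
            j = self._probe(k, i)
            if self.slots[j] is self.NIL:
                raise KeyError("element not exist")
            if self.slots[j] == k:
                self.slots[j] = self.DELETED   # mark, do not reset to NIL
                return j
        raise KeyError("element not exist")


if __name__ == "__main__":
    t = OpenAddressTable(m=11)
    for key in [10, 22, 31, 4, 15, 28, 17, 88, 59]:
        t.insert(key)
    t.delete(22)                 # slot 0 becomes DELETED, not NIL
    assert t.search(88) == 1     # 88 sits at slot 1; the probe must pass the DELETED slot 0

If many slots become DELETED, searches slow down even at a low load factor, which is one reason chaining is usually preferred when keys must be deleted.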
Home Forums > Science > Physics & Math > LHC Safety and the Law Discussion in 'Physics & Math' started by rpenner, Sep 23, 2008. rpenner Fully Wired Valued Senior Member Dat punk-ass ATLAS gonna sprocket the whole damn space-time continuum. The CMS is where it's at. Fo' shizzle! rpenner, Jan 9, 2009 udarnik Registered Senior Member The only "mainstream" physicists to have been associated with this mess are Ranier Plaga, whom the OnScreenScientist eviscerated here: onscreen-scientist.com/?p=34 and one H. Kimball Hansen, emeritus at BYU. It should be noted that neither Hansen nor Plaga were particle physicists, and that by the time Hansen got involved he had been retired for 7 years after a 30m year career at BYU, and that Plaga is apparently no longer with Max Planck, but is a low-level bureaucrat at the Department for New Technologies and Scientific Foundations of the German Federal Office for Information Security (BSI) nature.com/nature/journal/v453/n7191/full/453048a.html#a1 This is the affidavit from H. Kimball Hansen for the Brookhaven case, posted on LHCdefense.org, which is actually rather lukewarm in its support: The following is a sworn affidavit filed in the U.S. District Court, Eastern District of New York, Walter L. Wagner v. Brookhaven Science Associates, LLC,Case No. 00CV1672 [2000]: I, H. Kimball Hansen, Ph.D., declare under penalty of perjury as follows: I am a Professor, emeritus, of astronomy in the Department of Physics and Astronomy at Brigham Young University, Provo, Utah. I was a member of the faculty there between 1963 and 1993, and from 1968 through 1991 I was also Associate-Editor of The Publications of the Astronomical Society of the Pacific. I have read the First Amended Complaint, the Affidavits of Drs. Richard J. Wagner and Walter L. Wagner, the Safety Review[1] referenced therein, and the science article on strangelets by Joshua Holden, and am familiar with the issues therein with respect to operation of the RHIC. I concur that the so-called 'supernova argument', used in the Safety Review to ostensibly show the safety of the RHIC, is wholly faulty. It presupposes the stability of small strangelets, with life-times on the order of centuries or longer, long enough to travel great distances through space. The authors had previously asserted, that to be dangerous, strangelets only needed to have lifetimes on the order of a billionth of a second, just long enough to travel a few centimeters and reach normal matter outside the vacuum of the RHIC. There are a number of theoretical arguments that show that strangelets might be dangerous, and there are faults in the arguments presented, to date, to show the safety of the RHIC. I am of the opinion that it would be wise to avoid head-on collisions in the RHIC until a more thorough safety review, preferably before the physics community as a whole, has been obtained. However, the fixed-target mode of operation for the RHIC would be acceptable. DATED: May 17, 2000 [signed] H. Kimball Hansen, Ph.D. [1] Review of Speculative "Disaster Scenarios" at RHIC This is pretty much the extent of the physicists who have supported LHC alarmism. I figure it's good to get this out in the open on every forum that laymen are likely to run to for information before the instrument gets back up and running, because there is going to be another flurry of publicity. 
udarnik, Jan 13, 2009 rpenner said: ↑ Bugaboo: "Supernovae", Collapse of the vacuum, and other super-obvious things not associated with p-p collisions at \(\sqrt{s} \leq 7 \, \mathrm{TeV}\) Competent, Published Paper addressing it: Competent, Published Paper rebutting it: J. Ellis, G. Giudice, M.L. Mangano, I. Tkachev and U. Wiedemann "Review of the Safety of LHC Collisions" Journal of Physics G 35, 115004 (2008) http://arxiv.org/abs/0806.3414 Competent response: Competent, Published Paper addressing it: M.S. Turner and F. Wilczek "Is Our Vacuum Metastable?" Nature 298, 635-636. (1982) http://www.nature.com/nature/journal/v298/n5875/abs/298633a0.html Competent, Published Paper rebutting it: P. Hut, and M. J. Rees "How Stable Is Our Vacuum?" Nature 302, 508-509. (1983) http://www.nature.com/nature/journal/v302/n5908/abs/302508a0.html Less than one month to go for Docket No.: 08-17389 at the 9th circuit appellate court (PACER service sign up required). At that time we will see Wagner's last chance to make the argument that the LHC, situated on the French-Swiss border and controlled by an internation organization of 20 European government is, in fact, a major US Federal Government project, significantly funded and/or controlled by the US and subject to US the NEPA law. I am not a lawyer, and definately not a bookmaker, but if I were here's how I'd place the odds: p = 5% Walter wins as technicality (19-to-1 against) only to go to a more expensive defeat in Hawai'i on the very issues the district court decided to not rule on. p = 14% Walter loses after oral argument (6-to-1 against) p = 66% Walter loses before chance of oral argument. (1-to-2 against) p = 14% Walter loses before US Government replies (6-to-1 against) rpenner, Jan 29, 2009 Nasor Valued Senior Member What exactly is the point of taking legal action in the U.S. over an accelerator in Europe? Nasor, Jan 30, 2009 BenTheMan Dr. of Physics, Prof. of Love Valued Senior Member Nasor said: ↑ In spite of what most European scientists are willing to admit, the US is a major underwriter of the LHC. In fact, I don't know that the LHC could operate without US involvement. Specifically, the US government funds the Dept. of Energy and the Nat'l. Science Foundation, which fund experimental high energy research at major Universities. I also think that DOE and NSF give money directly to CERN, but I'm not sure about this. Either way, if the judicial system ordered a freeze on the government funding the LHC, it more or less couldn't operate---especially in these times, it would not be possible to find someone to contribute as much money as the US does to CERN to cover the operating costs. BenTheMan, Jan 30, 2009 Wagner's contention (which has survived to this point) is that somehow the LHC is a "major Federal action" and requires NEPA review for potentially environmentally hazardous results of its operation. The judge agreed with the US government in that case law clearly supports a finding of no "major Federal action" when 10% or less of the funding and no control of operation is in Federal hands -- which are the uncontested facts of the case. (This appears to be the one and only point in a narrow case to be reviewed by the appellate court.) The judge has not yet ruled if Wagner failed to file his case in a timely fashion (and other more technical issues), which means even a win in the appellate court could just mean being slapped back down in the Hawaiian court or being transferred to a court in Washington D.C. 
Even if all the "let's sue in US court armed with the NEPA law" action was found sound, Wagner's case fails because he's not armed with physics. He's not claiming they haven't checked for known physics problems, but all the unknown physics problems. The NEPA has language in it so that these speculations unsupported by evidence and logic are not demonstrations of problems with safety review -- you can't hold up a highway construction project because Russell's Teapot prefers rail system infrastructure and is willing to rain down fire and damnation until it gets its way. But the point of filing suit in the US is that US lawsuits are usually cheap entertainment for plaintiffs. Both Wagner and Sancho filed this suit pro se, representing just themselves as individuals -- Wagner (B.S. Biology, J.D.) cannot even act as Sancho's lawyer. It's not about physics. It's not about justice. It's not about social niceties. Perhaps it is only about self-promotion from someone holds a thirty-year grudge against physics because his work misinterpreting scratches on a plastic plate as a lab technician did not outshine Newton. // Edit -- Yes, 10% is a lot of funding, but the US doesn't get a vote on LHC operations. But unless the US has some sort of time machine, when you are arguing from ignorances as Wagner is, you should sue when then monies are allocated, not after they were already spent. Farsight Can I encapsulate the argument for you? The fact remains that CERN say it's perfectly safe but they're hoping for the unexpected. How anybody can square this away I don't know. Especially in this day and age when a playground slide is too dangerous for kiddies. To be honest, I think it's safe too. But I don't know it's safe. So I wouldn't bet the farm. You know, when I first heard about Turok's "bouncing universe" I thought yeah sure. Now I think I know how it works. Farsight, Feb 3, 2009 Farsight, I think the people who built the LHC cannot be said to be hoping for the unexpected -- clearly they expect lots of things to happen and have built the detectors to measure those things. They expect to measure things better than they have been measured before and hope to measure things which have never been directly measured before. As far from actual unexpected things, like LHC transforming into a truly giant robot, I am certain CERN is not hoping for that. If we understand physics, then the LHC is safe. This can be demonstrated theoretically based on specific alleged models of danger as well as empirically for broad classes which generalize the results of theories permitted by our current observation of the universe. That's not a foregone conclusion, but it is a very basic one which proceeds from both specific detailed calculations and heuristics which have been a staple of particle physics research since at least the 1960's. New forms of matter quickly decay to the lowest mass object which preserves their electric, color, flavor or baryon number charge -- and this conservation of charge is not dangerous. Since at least 1983, examination of cosmic rays has resulted in formal limits on collider disaster scenarios. Strange matter, if it exists at zero temperature and zero pressure, is not a high-temperature phenomenon, and the LHC is a high-temperature machine. It's for basic reasons like these that the average expert in these fields doesn't find it even interesting to try and create detailed formal analyses of bad ideas like the current crop of aphysical mumbo-jumbo. 
On the other hand, if we don't understand physics, then no claim of safety, LHC-related or not, is well-founded, and the issue of "proving" safety is irrelevant. But the dishonest tactics of the anti-LHC crusaders are shameful and entirely about the merchandising of fear and not about hard work and physics. They equate their own personal ignorance and lack of ability with that of experts. They prey on the under-informed like Chaya. http://news.bbc.co.uk/2/hi/south_asia/7609631.stm If you'll notice, the anti-LHC forces have no experts backing their claims. Where are the papers which challenge our understanding? Where is the academic level debate? Even Wagner, the lawsuit enthusiast has stated in this thread that he doesn't wish to talk physics. If this is a metaphysical debate, then it doesn't belong in court for the same reason that the claim that God will strike us down if we don't switch on the LHC doesn't belong in court. Finally, I don't see how Turok and LHC safety argument connect. rpenner, Feb 3, 2009 They are hoping for the unexpected, Robert. Take your pick of this: http://www.google.co.uk/search?sour...=1T4ADBF_en-GBGB240GB240&q="lhc" "unexpected" And the very purpose of the LHC is to deliver the understanding that we currently lack. That's why we can't "prove" safety, and why the reassurances from CERN are meaningless. Again, there's lots of material out there: http://www.google.co.uk/search?hl=en&rlz=1T4ADBF_en-GBGB240GB240&q="lhc" "risk assessment"&meta= No, we don't understand physics I'm afraid. That's why we do experiments. However I believe I understand more than most, things like energy, mass, charge, particles, the forces. People have difficulty in believing this, so much so that they refuse to examine what I say. There's a huge issue of conviction here. It's a common problem, most often seen in religion, but not limited to religion. It's a "people problem". There are no papers that challenge our understanding and no academic-level debate because of the conviction issue best paraphrased by "The Trouble with Physics", and an issue best called "The End of the World is Nigh". Stick your neck out and you'll have trouble getting past peer review. Do it with the LHC and you'll get called names. With the LHC, nobody wants to stick their neck out. Especially since if you cry danger and if you're right, nobody will ever know. It's a no-win situation, so people keep their heads down. The Turok and LHC connection is that IMHO Hawking radiation is back to front. Whilst I think strange matter is hogwash, I'm still unhappy about you-know-what: http://www.google.co.uk/search?hl=en&rlz=1T4ADBF_en-GBGB240GB240&q="cern" "micro black hole"&meta= Virtual particles are a useful concept in QED calculations, but no particle, no matter how transient, conveys negative energy. Combine two out-of-phase photons and you're left with no photons. Both photons were positive energy, conservation of energy means the photon energy has gone into the "vacuum energy" of space. Play this backwards and you can separate this energy into two photons, and the micro black hole swallows them both. OK, sometimes one gets away, so a black hole gives off Hawking radiation. But it grows. I don't think anything spectacular will happen. But like I say, and in all seriousness, I feel uncomfortable about betting the farm. Polemics don't disguise the fact that "danger" is just an argument from ignorance. When this "Robert" character joins the discussion, I am confident he will support me. 
The schedule is set as follows: Designation of RT for Appellant Luis Sancho and Walter L. Wagner due 10/30/2008. Designation of RT for Appellee Center for Nuclear Energy Research, Doe Entities, Fermilab, National Science Foundation and US Department of Energy due 11/10/2008. Transcript order for Appellant Luis Sancho and Appellant Walter L. Wagner due 11/19/2008. Certificate of record due 11/26/2008. Appellant Luis Sancho and Appellant Walter L. Wagner opening brief due 02/04/2009. Appellee Center for Nuclear Energy Research, Appellee Doe Entities, Appellee Fermilab, Appellee National Science Foundation and Appellee US Department of Energy answering brief due 03/06/2009. Appellant's optional reply brief is due 14 days after service of the answering brief. That's 2009/02/04 to the rest of you. So I log on to read the filed brief and find ... nothing. How very disappointing. //Edit -- Just checked in Hawaii and the opening brief was not filed there, either. Does Wagner have an explanation? Last edited: Feb 6, 2009 I don't argue from ignorance. My argument is very logical. And you obviously have no answer to it. Sorry about the Robert. My mistake. Farsight said: ↑ It is, of course, possible that the LHC will produce hordes of fire breathing dragons (the total energy in the beam is equivalent to a fully loaded aircraft carrier moving at something like 20 knots) which could fly out and take over the world. There are, however, no reasons for us to expect this outcome. If you'd read the objections and arguments over the past few pages, you'd see that the collisions in the LHC are happening all the time in our solar system and in our galaxy, some at far greater energies than we could ever hope to engineer. There are no black holes consuming our sun, or Jupiter, or even the earth, so we have very good reason to assume that such events won't happen in the LHC. BenTheMan, Feb 6, 2009 News update -- http://cosmiclog.msnbc.msn.com/archive/2009/02/05/1781943.aspx Cosmic Log says James Tankersley forwarded the reported appellate brief, but this still has not been filed on the relevant Appeals Docket #: 08-17389. The Deadline was Wednesday. The delay might be mail, one of formatting, or one of slowness of paperwork shuffling. I am cursed by my heady Internet age expectations. Neither LHC Defense (Wagner's site?) nor LHC Facts (Tankersley's site?) has a copy of the brief. We only have this summary from Alan Boyle: In a nutshell, the plaintiffs say the federal government's contribution of $531 million to LHC construction over more than 11 years, plus the U.S. consultative role on the project, are factors that add up to a "major federal action." But we don't know if it is from both Sancho and Wagner, if it cites any case law, or how the "consultative role" of the U.S. is (mis-)represented. Judge Gillmor applied a two-prong test and skewered the issue of jurisdiction. Wagner, as last reported on this thread, was considering a strategy of demanding a bright line rule. Now, perhaps because of correspondence here, it seems that his strategy has changed. But the more than 10% and control tests seem sane, lawful and reasonable to me. Wagner hasn't always managed all three in my eyes. How I long to see that brief. I am cursed by my heady Internet age expectations. I phoned the SF circuit court clerks office, where I talked to a nice clerk who tells me for a pro-se, paper, mailed opening brief that it might be filed on PACER by the 13th. --- But.... I checked back after lunch and it's here! 
Section III (A) -- Wagner has sued the DOE, NSF, Fermilab, and CERN (under the wrong name) and up to 100 unnamed John Does. In this pleading he talks about the funding of LHC by the US Government through "DOE, etc." This leaves it to the court's imagination which of the defendants are "etc." A cynical court might speculate that Wagner is of the opinion that CERN is part of the US Government. Wagner admits that Government funding of LHC has been going on for 11+ years. This undisputed fact could cost him the war even if he wins this battle. Unlike Wagner, I believe the Large Hadron Collider is not made more accessible to non-particle-physicists by calling it an "atom smasher" since the nuclei are completely flayed of their electrons by the time they are introduced to the ring. Verbatim: "Recent considerations of theoretical physics have arisen which show that the use of the machine [which is anticipated to be completed for use in late 2009] could result in the creation of novel forms of matter [e.g. "strangelets'' "micro-black-holes'' etc.) that could prove environmentally destructive by slowly converting Earth into either a large lump of strange matter, or into a small black hole." This is, unsupportable. Specifically, the second "could" should, on the face of the literature, read "could not." Wagner asserts that NEPA should have forced some administrative board to consider Wagner's arguments in some form. I am not qualified to parse the NEPA law to see what exactly the government process is, if Wagner was correct and the literature said that the hypothetical products of the LHC could, in fact, prove environmentally dangerous, but as this is not the case, my reading says that NEPA doesn't force the government to audit imaginary threats, especially those at odds with fundamental physics and observation. Section III (B): Wagner says that the court made an error in declaring that funding of 7.7% of the LHC and no control makes it not a "major Federal action" that would be subject to the NEPA law. (Thanks to Alan Boyle of Cosmic Log, I knew that already, but it's good to see it's in the right place.) Section III (C): Wagner's paperwork was filed in a timely basis. Finally, an assertion I have no reason to question. Section III (D): Technical language for "they booted me out of court." Section IV : There it is, in bold print. Wagner thinks he should not have lost on that point just because case law, including cases he brought into the record, supported making that decision. He thinks the large dollar amount (less than 2 orders of magnitude greater) makes the dollar amount qualitatively different than the case law examples. I disagree, unless he has relevant case law to support a threshold amount, Wagner is once again arguing from the special basis that an invisible threshold exists which only he is aware of. He repeats the length of time that the Federal government has been funding the effort, which if I were deciding this on the basis of common sense would either skip ahead to the statue of limitations or divide the dollar amount by the funding period. He claims that the Federal government, which initially was a LHC competitor with Tevatron and the started Superconducting Supercollider (SSC), was deeply involved in the planning of LHC. But the existing two-prong test speaks of "control" not "involvement" which here I read as "negotiated." The US government is CERN's partner, not its toady, so of course there is negotiation. Somehow, Wagner claims the US government initiated the LHC project. 
Once again, I think this perception is uniquely his. He equates the US's "observer" role as a "lobbying" role and that as a "controlling" role. Ridiculous! He then brings into the discussion of whether NEPA should declare something a "major Federal action" is the magnitude of the asserted environmental disaster. Well in that case, since Wagner asserts (aphysically) that the LHC could cause merely the destruction of the Earth, I will borrow a page from Prof. Dixon and assert the act of ruling in Wagner's favor could cause the destruction of everything in the future light-cone, and since my "concerns" trump Wagner's, NEPA requires that Wagner be imprisoned and denied due process, and the mere lucky happenstance that this hasn't happen yet for the paltry few number of times that a court has ruled in Wagner's favor ought not to prejudice the court against my argument. Finally, Wagner says that the percentage funding is not 7.7% as the District court calculated, but 100%. Yes, 100% of the US dollars that contributed to LHC funding was funded by the US government which used real money, dollars, not that sissified, made-up stuff like pounds and euros. (Right now, James Tankersley should be doing something newsworthy and dramatic, like throwing himself into industrial machinery, to protest the stupidity of his own side.) Section V (A): Myopically, he tries to make the LHC look like an American program. He claims "no State or Local funding" but ignores the 20 member states of CERN because he didn't want to do the research of where the monies come from. He denies monies that aren't Dollars are even monies. He claims the risk isn't from building the LHC, but from operating it, but his complaint and the dollar amount indicate that construction is the issue. Wagner is a man who doesn't know his own mind. He claims that the risk issue is significant despite the court's ruling which doesn't indicate that the district court agreed. He claims that the risk wasn't known when funding was allocated and any NEPA document would have been due. This is true. The risk is still not known -- it's an argument from ignorance not from physical theory. He claims that the conditions in the LHC are unlike those anywhere in the universe, which is pedantically true, but misses the big picture of the cosmic ray safety arguments which date back to 1983. Collisions at least as energetic as those which happen in the LHC happen every day in our own backyard (Earth's upper atmosphere). Since Walter Wagner's only exposure to physics professionals are with a 1970's cosmic ray team, this is hard-to-explain blindness. The request to read all seven of the affidavits of the original court will not endear Wagner to the appellate court -- but if they do read these, they will see that not even Wagner's "experts" agree on how the LHC poses a danger, which weakens the thought that it might pose a danger. Since none of the "experts" are in fact "experts," then Wagner could face sanctions of having their testimony excluded. Wagner says "some" of the issues were addressed by the 2008 "LSAG Safety Report" -- but that didn't change the design of the LHC one whit -- it just showed that (at least) "some" of the (bizarre) inventions of Wagner's affiants are made-up and aphysical. Wagner makes up new physics and claims that the "nowhere in the universe" actually exists in the universe. He invents new motives for the LHC designers in that they want to create "strange matter," according to Wagner. 
But actually, the LHC operates at too high an energy for droplets of cold strange matter to form. It looks like Wagner has confused "strange matter" with "quark-gluon plasma." Wagner talks about ideas "proven to be wrong" in probably an aphysical sense. In a violation of good lawyerly practice he cites the whole of his own affidavit instead of referring to the paragraph numbers. He gives original "expert" testimony in the appellate brief. Wagner shouldn't be suing the LHC -- he should be suing his law school for not flunking him out. He plays games with the definition of words in the Amici brief by Nobel Laureates. He states in bold text that the only acceptable prerequisite is that someone needs to prove it impossible that any imagined disaster will happen. It's not the job of mathematicians or physicists or EPA reviews to work with the imagination of the uneducated.

Section V (B): What is Luis Sancho a doctor of? What is Richard Wagner a doctor of? What is Walter Wagner a doctor of? (Oh, that I know -- he's got a J.D. from a lesser school of law.) What is Paul Dixon a doctor of? (Psychology.) All this puffery of titles is a smoke screen to hide that his "experts" are not relevant. * Ironically, I remember reading about cases where it was argued that a Ph.D. from some of the best schools in the world was not, perhaps, up to the standards of, say, Mississippi Tech., but Wagner alludes to this when he claims that foreign currency can't be in the same class as "dollars." In this section, Wagner also takes pot-shots at the Government side for not following "local rules." He was admonished by the judge for smirking in court and also for not following the rules on September 2. (But that transcript is not part of this appeal.)

Section V (C): Wagner omits the judge's findings that the US Government funded no more than 7.7% of CERN, compared to case law that spoke of 10% or less, and omits that Judge Helen Gillmor also found that the US government had no control over the operation of the LHC. Wagner is focused on the length of time and the "dollar" amount.

Section VI -- What are the facts as Wagner sees them? 1. Wagner characterized the 1997 International Cooperation Agreement as an agreement to build LHC, when I think it reads more as a "pay-to-play" swap of resources for access. Nowhere is the sense that if the US doesn't sign, then the LHC won't be built. 2. It's very unclear where he gets his numbers from. Does "Document 20" say what he says it says? If he's referring to docket 20 from Hawaii, that has 17 attachments and is 258 pages long. Boy, Wagner is unreasonable. Ah, it's attachment 9. One possible read is that the not-yet-allocated $72.450 million is largely for supporting the detectors, not the operation of LHC itself, and for doing the research for upgrades, also not operations. But Wagner calls it operations. 3. While the US contractors are responsible for the design of components, that's a long distance from design, purposing and control of the LHC. Wagner sees them as identical. 4. Rather than proving that the US initiated the LHC in 1997, as Wagner insists, Attachment 4 of Document 20 shows that CERN itself approved LHC in 1994. 5. Wagner reads "non-voting seat" and calls this "lobbying" and therefore "control." 6. Wagner argues rather than states facts, because no one can assign a non-zero probability to an event outside of physics. Where Wagner's hypotheses of danger are physical, they have been proven not to exist, and where they are aphysical they are irrelevant fancies. 7.
Wagner asserts, without reference, that CERN members contribute in equal amounts to LHC and asserts that the maximum of 7.7% is really 10% when including undocumented European labor costs; it might be as little as 4%. This is at best lying through rounding and on the whole is a distortion.

Section VII: Wagner argues briefly. (At least here it will be appropriate.) Wagner complains, as he did in this thread, that different cases in case law had different situations, not all equivalent to each other. Wagner mistakes Judge Gillmor's use of a two-prong test for a one-prong test based on percentage total funding. Wagner attempts to portray Judge Gillmor's reliance on case law as judicial error. Wagner recalls that Judge Gillmor did also use the second prong, but thinks it was silly to interpret "non-voting" as not "controlling."

Section VIII: Wagner argues long-windedly. (Shoot me, shoot me now.) 1. Wagner says $531 million is a lot of money. He reuses his unsupported "double" assertion. He reuses his "Dollarz iz teh onleez moneez" argument. And he says that $531 million is quantitatively different from the amounts of funds in the case law cases. (A cynical judge might ask if this means that the Bush and Obama stimulus packages are then major Federal actions that might foreseeably lead to the burning of more fossil fuels and speeding up climate change and other environmental effects, making these actions also subject to NEPA.) He cites two phrases where clearly the question is one of the federal relative percentage of project funding and claims the Judge erred in reading them the natural way. Wagner claims that the judge should have ignored case law (this assertion lacks a basis) and ruled based on the dollar amount (this assertion lacks a basis), ruled on the duration of a project (this assertion lacks a basis and, since there is such a thing as a statute of limitations, seems contrary to law), ruled on his claim (contradicted by documents he cites) that the US initiated the LHC project, yadda, yadda, yadda. [Wasn't Wagner supposed to support this in his September 2nd court appearance?] Wagner repeats himself several times. 2. Wagner conflates design of subsystems with design of the system, and operation of detectors with operation of collisions. You would think someone who made a decade-long crusade against colliders would learn about them. 3. Wagner gives a sketch of an argument on how partial funding + observer status = control. But since title to those expensive magnets and detectors has been given to CERN, it's obvious that the US is paying to play, and that it is CERN who holds all the cards. Also, Wagner cites "permanent" over and over, but that word is not used to describe the US's role as a non-voting observer. It is pay-to-play. 4. Wagner manages to say not much, as far as I can tell. 5. Wagner argues from ignorance -- and manages to distort what he can.

Section IX: Despite no case law indicating that the LHC funding level or duration is qualitatively different from previous cases, Wagner asserts it just is. Despite no case law indicating that pay-to-play observer status is "control," Wagner asserts it just is. Wagner asserts 10% is the same as 7.7%, which is the same as probably 4%, which is more than 5%, a number he made up. (It seems likely to me that CERN funding is proportional to GNP for member states.) Despite the lack of case law to support the contention that U.S. Dollars are the only form of funding that matters, Wagner asserts it just is.
Wagner flip-flops on whether he is suing because the US authorized construction or because the US wants to continue to be involved -- but it doesn't matter, since what matters is what funding was approved by congress before the case originally went to trial.

Section X -- Claim that there are no related cases before the courts. Stab in the back to the European litigation team at ECHR.

Signature: There's a space for a signature for Luis Sancho, but either he signed it in non-photocopy-blue or this is just Wagner's appeal. [In the September 2 transcript, Wagner was warned not to act as Sancho's lawyer, advice which it appears he intended to disregard by including this signature line.] Wagner lists one address for both himself and Luis Sancho. ¿Por qué?
rpenner, Feb 10, 2009

http://skeptico.blogs.com/skeptico/2009/02/global-warming-denial.html
Where have we seen these before?
Conflation of words that can have either the same or different meanings on the basis of context.
Conflation of argument from authority with reliance on communities of hard-working experts.
Alleging Conspiracy
Selectivity (cherry-picking)
Fake experts
Impossible expectations (also known as moving goalposts)
General fallacies of logic, and continuing to repeat arguments long after they have been debunked.

Walter L. Wagner
Anyone who is interested in reading the complete appellate brief that was filed, to see how it compares with the "analysis" by rpenner, may PM me and provide me with an email address; I will email it to them as an attachment. It's only 25 pages of a Word document.
Walter L. Wagner, Feb 12, 2009

cosmictraveler
So when does the LHC start operations again if the lawsuits don't stop it? I thought it was to be operational in March of this year? :shrug:
cosmictraveler, Feb 12, 2009

cosmictraveler said: ↑
The latest word is September, I think.
BenTheMan, Feb 12, 2009

Walter L. Wagner said: ↑
Indeed, it is so brief it doesn't refer to attachment number, page number or paragraph of 250+ page documents in the case record. But if the intent was to have the appellate judges read and re-read the government-submitted affidavits and attachments, well done. The assumption that Wagner is off-base when he guessed that the CERN member states contributed equally to construction costs of the LHC is strongly supported by the 1954 "Convention for the establishment of a European Organization for Nuclear Research", which also corrects long-standing misconceptions of what the acronym "CERN" stands for. http://documents.cern.ch//archive/electronic/other/legal/articles/LSL00000014.pdf
The epidemiology of porcine Taenia solium cysticercosis in communities of the Central Highlands in Vietnam Dinh Ng-Nguyen1,2, John Noh3, Kathleen Breen4, Mark Anthony Stevenson1, Sukwan Handali3 & Rebecca Justine Traub1 Parasites & Vectors volume 11, Article number: 360 (2018) Cite this article Taenia solium cysticercosis, recognized as a neglected tropical disease by the WHO, is distributed mostly in developing countries of Latin America, sub-Saharan Africa and Asia. Pigs and humans act as intermediate hosts, acquiring T. solium cysticerci (larval stage) in their tissue, through the ingestion of T. solium eggs shed in the faeces of humans infected with adult tapeworms. The disease has a negative impact on rural economies due to losses in productivity arising from human disease, pork carcass condemnations and loss of market access. The aim of this study was to estimate the prevalence of T. solium cysticercosis in pigs in Dak Lak Province in the Central Highlands of Vietnam and to identify household level characteristics associated with T. solium porcine cysticercosis. This was a cross-sectional study of household pigs in three districts of Dak Lak Province. A total of 408 households in six villages in three districts were visited between June and October 2015. A questionnaire was administered to the head of each household, and within each household, serum samples were collected from three pigs. Serum samples were analyzed using the recombinant T24H antigen in enzyme-linked immunoelectrotransfer blot assay and lentil lectin purified glycoprotein in EITB assay. A Bayesian, mixed-effects logistic regression model was developed to identify management factors associated with the probability of a household having at least one cysticercosis-positive pig. The prevalence of porcine T. solium cysticercosis in this study was low at 0.94 [95% confidence interval (CI) 0.51–1.68] cases per 100 pigs at risk, in agreement with other studies conducted throughout Vietnam. Scavenging of food and coprophagy were associated with T. solium cysticercosis [odds ratios 1.98 (95% CrI: 0.55–4.74) and 2.57 (95% CrI: 1.22–4.66), respectively]. This study proves that the seroprevalence of porcine cysticercosis in Dak Lak Province was as low as that of other studies conducted throughout Vietnam. Scavenging of food and coprophagy are modifiable factors, providing the opportunity to decrease the prevalence of porcine cysticercosis further in the province. Taenia solium cysticercosis is recognized as a neglected tropical disease by the WHO [1]. It is distributed mostly in developing countries of Latin America, sub-Saharan Africa and Asia [2]. Pigs and humans act as intermediate hosts, acquiring T. solium cysticerci (larval stage) in their tissue, through the ingestion of T. solium eggs shed in the faeces of humans infected with adult tapeworms. Consumption of raw and/or undercooked pork with active T. solium cysticerci may result in T. solium taeniasis in humans. The presence of porcine cysticercosis impacts negatively on an economy due to costs arising from carcass condemnation and negative impacts on market access and trade of pork. Although Vietnam is located in a region endemic for T. solium [3], the data on porcine cysticercosis are limited. In addition, it is not clear whether porcine cysticercosis is endemic in the country. During 1994 to 2005, the highest reported prevalence of porcine cysticercosis in Vietnam among four carcass-based studies was less than 1% [4]. 
These estimates were, for the most part, based on studies conducted in commercial slaughterhouses, and therefore do not reflect the prevalence of cysticercosis in pig populations in rural communities that are not processed in commercial slaughterhouses [5]. Of the relatively small number of studies that have quantified the prevalence of porcine cysticercosis in the country, most were conducted prior to 2003 with a focus on the north of Vietnam [4]. To the best of our knowledge, the only study of porcine cysticercosis in the south of Vietnam was carried out in 1994 [6]. Dak Lak Province is located in the Central Highlands in the south of Vietnam. A recent study in the communities of the province demonstrated an apparent prevalence of 1.2% T. solium taeniasis [7]. In most communities in the province, pigs are free-roaming and outdoor defaecation is common [8]. These characteristics are conducive for infection and transmission of T. solium cysticercosis to pigs. The aim of this study was to estimate the prevalence of cysticercosis in pigs in Dak Lak and to identify household level characteristics associated with T. solium porcine cysticercosis. Study site and sampling Fieldwork was conducted between June and October 2015 in Krong Nang, M'Drak and Buon Don districts in Dak Lak Province, Vietnam. These districts were chosen as the study sites based on their diverse geographical characteristics. The study sites have been described in detail elsewhere [7]. In brief, M'Drak is located in the east of Dak Lak Province with an average altitude of 400 m to 500 m and has a tropical monsoon climate typical of the Vietnam's Central Coast. Krong Nang is situated in the north of the province at an altitude of 800 m. Buon Don, situated to the west of the province with an average elevation of 330 m and has a hot and dry climate. The standard of living in this area of Vietnam is generally poor. Open defaecation using outdoor pit latrines is common practice and livestock access to these latrine areas is usually unrestricted. The practice of non-confinement of pigs and cattle is common with slaughter activities often carried out in backyards [4]. A sampling frame listing the name of all villages in the three study districts was obtained from Sub-Department of Animal Health office. Villages eligible for sampling comprised those with more than 1000 pigs, as recorded by the Sub-Department of Animal Health within the Ministry of Agriculture. All eligible villages within each of the three study districts were assigned a number and two numbers were chosen at random to select villages from each district for inclusion in the study. Within each selected village, a list of householder names was obtained from the respective village head person, and each householder name was then coded with a number (the number of households per village ranged from 200 to 300). A sheet of paper was drawn up into squares and each square numbered from 1 to 300. The squares were cut into pieces and placed face-down on a table. The village head was then asked to select between 100 and 140 squares. The number on each selected square identified each household to be sampled. All selected households were visited several days before sampling to obtain consent from participants. Within each district, blood samples were collected from pigs in each of the study households. 
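The two-stage selection just described (a random draw of villages within each district, followed by a random draw of 100 to 140 households from the village head's list) is straightforward to express in code. The sketch below is purely illustrative: the village and household names are placeholders, not the actual sampling frames used in the study.

```python
import random

random.seed(2015)  # illustrative seed only

# Placeholder sampling frames; the real frames came from the Sub-Department of
# Animal Health (villages with >1000 pigs) and from each village head's list.
eligible_villages = [f"village_{i:02d}" for i in range(1, 11)]
householders = [f"household_{i:03d}" for i in range(1, 301)]   # 200-300 per village

# Stage 1: two villages selected at random per district.
selected_villages = random.sample(eligible_villages, k=2)

# Stage 2: between 100 and 140 households selected at random in each village.
selected_households = {
    village: random.sample(householders, k=random.randint(100, 140))
    for village in selected_villages
}

for village, hh in selected_households.items():
    print(village, "->", len(hh), "households to visit")
```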
At the time of each household visit, pig owners were asked to complete a questionnaire on the number and type of pigs kept and details of demography, husbandry practices, and diet (Table 1). All questionnaires were conducted in local Vietnamese phraseology and their validity pre-tested on 30 pig owners in another community in Dak Lak Province before application to the field survey. In addition, interviewers were trained before administering the questionnaires. Pigs were selected at random by a member of the research group for blood sampling. Pigs that were pregnant or ill, and pigs less than 2 months of age, were excluded from sampling. At the time of each household visit, 10 ml of blood was obtained from the cranial vena cava of each pig into plain blood collection tubes. The blood samples were allowed to clot at ambient temperature prior to centrifugation at 3200× g for 5 min to separate serum. Serum was dispensed into 1.5 ml aliquots and stored at -20 °C until use. Table 1 Details of information about general and pig information
Sample size estimation
The aim of this study was to estimate the prevalence of porcine T. solium cysticercosis in Dak Lak Province. Based on a previous slaughterhouse-based survey by Van De et al. [9], the prevalence of porcine cysticercosis was assumed to be 10%. Assuming 95% certainty that this estimate was within 5% of the actual population prevalence (i.e. cysticercosis prevalence ranged from 5% to 15%) and ignoring the possibility that cysticercosis-positive pigs were clustered within households, we estimated that a total of 384 pigs were required to be sampled. We then assumed the average number of pigs eligible for random sampling per household was at least three and an intra-class correlation coefficient for T. solium cysticercosis of 0.07 [10], returning a design effect of 1.14. Our revised sample size, accounting for the likelihood that porcine cysticercosis clusters within households, was 384 × 1.14 = 438 for each of the three study districts. Pig serum samples were analyzed using the recombinant T24H antigen in enzyme-linked immunoelectrotransfer blot (rT24H-EITB) assay as described by Noh et al. [11] and lentil lectin purified glycoproteins in EITB (LLGP-EITB) assay as previously described by Tsang et al. [12] and Gonzalez et al. [13]. Both the LLGP-EITB and rT24H-EITB assays are immunoblot methods. The LLGP-EITB detects antibodies to one or more of seven lentil lectin purified glycoproteins (LLGPs), namely GP50, GP42, GP24, GP21, GP18, GP14 and GP13, which are present in the soluble fraction of an extract of T. solium cysticerci [11]. Reaction to any of these 7 glycoprotein antigens is considered positive. The rT24H-EITB assay detects antibodies against the rT24H antigen derived from 24- and 42-kDa glycoproteins of the LLGPs [14]. To ascertain the analytical specificity of the rT24H antigen, we subjected 29 cysticercosis-negative USA pig sera, 12 necropsy-positive T. solium-positive Peruvian pig sera and 4 T. hydatigena necropsy-positive Vietnamese pig sera to the rT24H-EITB. All USA pig sera and Vietnamese T. hydatigena pig sera were negative for the rT24H-EITB, and all Peruvian T. solium positive pig sera were positive on the rT24H-EITB. These preliminary results show that, under experimental conditions, the rT24H-EITB does not cross-react with sera from pigs infected with T. hydatigena. Individual serum samples were screened in pools of four using the rT24H antigen in EITB assay format to detect the presence of antibodies against T.
solium cysticerci. The rT24H antigen was utilized in the EITB assay as it offers the best overall diagnostic performance compared with other recombinant or synthetic antigens, based on our preliminary data and previous human-based studies [15, 16]. Individual serum samples from each positive pool were then re-screened using rT24H-EITB and LLGP (native antigen of T. solium cysticerci) antigens. The LLGP-EITB has been used as the reference standard assay for serological diagnosis of T. solium cysticercosis in humans and pigs, with a specificity of 100% and a sensitivity between 98–100% [12, 13]. Jayashi et al. [17], when validating the LLGP-EITB for naturally acquired porcine cysticercosis, pointed out that the LLGP-EITB achieves an optimal sensitivity of 78% (95% CI: 52–94%) and specificity of 76% (95% CI: 66–85%) when the assay reacts to ≥ 3 of 7 LLGP antigens. The diagnostic performance of LLGP-EITB for porcine T. solium cysticercosis was evaluated in Peruvian pigs, and the cross-reaction of the assay to T. hydatigena was not known [13]. An individual serum sample was considered positive for T. solium antibodies if it was positive to both rT24H and native LLGP antigens. Risk factors for porcine T. solium cysticercosis in the communities of Dak Lak Province were identified using logistic regression. In this study, the outcome of interest was a dichotomous variable where households where at least one pig was T. solium cysticercosis-positive were assigned a value of 1, and 0 otherwise. The association between each of a set of household-level candidate explanatory variables from the questionnaires and the outcome of interest was tested using unconditional odds ratios and the chi-square test. All explanatory variables associated with the outcome of being T. solium cysticercosis positive at an alpha level of < 0.20 using the chi-square test were selected for inclusion in the multivariable model. A frequentist fixed-effects logistic regression model was developed in which the probability of a household having at least one cysticercosis-positive pig was parameterized as a function of the explanatory variables with significance of the chi-square test at P < 0.20, as described above. The significance of each explanatory variable in the model was tested using the chi-square test. Explanatory variables that were not statistically significant were removed from the model one at a time, beginning with the least significant, until the estimated regression coefficients for all the explanatory variables retained in the model were significant at an alpha level of < 0.05. To account for the hierarchical structure of the data (households within villages), a village-level random effect term ($V_i$) was included in the model as shown in Equation 1. District was not a significant predictor of household-level T. solium cysticercosis status at the alpha level of 0.05 and was therefore not considered further in the mixed-effects model.
$$ \log \left[\frac{p_i}{1-p_i}\right] = \beta_0 + \beta_1 x_{1i} + \cdots + \beta_m x_{mi} + V_i + \varepsilon_i $$
Due to the low prevalence of T. solium cysticercosis (12 of 1281 pigs were positive), regression coefficients for the mixed-effects logistic regression model were estimated using a Bayesian approach implemented in JAGS [18, 19].
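Although the analysis itself was carried out in JAGS, the model in Equation 1 can also be sketched in Python for readers who want to experiment with it. The snippet below uses the PyMC library and entirely hypothetical input arrays (`y`, `X`, `village_idx`); it is an illustrative re-expression of the model, not the authors' code, and the wide normal and gamma priors are stand-ins chosen only for the sketch.

```python
import numpy as np
import pymc as pm

# Hypothetical data with the same shape as the study design:
# 408 households, 6 villages, two binary covariates (e.g. scavenging, coprophagy).
rng = np.random.default_rng(1)
n_households, n_villages = 408, 6
X = rng.integers(0, 2, size=(n_households, 2)).astype(float)
village_idx = rng.integers(0, n_villages, size=n_households)
y = rng.integers(0, 2, size=n_households)   # placeholder household outcomes

with pm.Model() as household_model:
    beta0 = pm.Normal("beta0", mu=0.0, sigma=10.0)            # intercept
    beta = pm.Normal("beta", mu=0.0, sigma=10.0, shape=2)     # fixed effects
    tau = pm.Gamma("tau", alpha=0.001, beta=0.001)            # precision of village effect
    v = pm.Normal("v", mu=0.0, sigma=1.0 / pm.math.sqrt(tau), shape=n_villages)
    eta = beta0 + pm.math.dot(X, beta) + v[village_idx]       # linear predictor of Equation 1
    pm.Bernoulli("obs", logit_p=eta, observed=y)              # household-level outcome
    idata = pm.sample(2000, tune=1000, chains=2, random_seed=1)
```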
Flat (uninformed) prior distributions were assumed for the intercept β0 and each of the regression coefficients for the fixed effects β1 ⋯ β m . The village-level random effect term (V i ) was parameterized as having a normal distribution with mean 0 and precision (inverse variance) τ. For each of the Bayesian regression analyses we ran the Markov chain Monte Carlo sampler for 40,000 iterations and discarded the first 1000 'burn-in' samples. Convergence was visually assessed by plotting cumulative path plots for each of the monitored parameters [20, 21] and quantified using the Raftery & Lewis convergence diagnostic [22, 23]. Parallel chains were run using diverse initial values to ensure that convergence was achieved to the same distribution [24]. Posterior sample sizes were determined by running sufficient iterations to ensure that the Monte Carlo standard error of the mean was at least one order of magnitude smaller than the posterior standard deviation for each parameter of interest. The results of the final mixed-effects logistic regression model are reported in terms of adjusted odds ratios for each explanatory variable. Assuming a causal relationship between a given explanatory variable and porcine cysticercosis, an adjusted odds ratio [and its 95% credible interval (CrI)] of > 1 indicates that, after adjusting for other variables in the model, the explanatory variable increased the risk of a pig being cysticercosis positive. An adjusted odds ratio (and its 95% CrI) of < 1 indicates that exposure to the explanatory variable was protective, and an OR of 1 indicates that the variable was not associated with porcine cysticercosis risk. Statistical analyses were performed using the packages R2jags [25] and coda [26] implemented in R version 3.3.0 [27]. General description of study population A total of 1324 pig serum samples were collected in Krong Nang, M'Drak and Buon Don districts in Dak Lak Province. Of these, 1281 samples were eligible for further examination as 43 samples were excluded from analysis due to hemolysis or missing questionnaire information. All 1281 serum samples were screened in pools of 4 using the rT24H-EITB assay, and 10 pool samples were identified as positive. Twelve single samples among the 10 positive pool samples were positive for T. solium antibodies using the rT24H-EITB assay and all 12 single samples were positive using LLGP-EITB. The prevalence of T. solium cysticercosis in pigs in the study districts was 0.94 (95% CI: 0.51–1.68) per 100 pigs at risk. The 12 positive pigs belonged to 11 households in the three study districts. Of 203 households visited in M'Drak district, 9 (4.43%, 95% CI: 2.17–0.85%) possessed T. solium seropositive pigs. Among 70 visited households in Krong Nang district, two (2.8%, 95% CI: 0.49–10.8%) possessed T. solium cysticercosis positive pigs, and no seropositive pigs were identified in 135 households in Buon Don District. The hierarchical structure of the data in this study is shown in Table 2. The 1281 pigs were from 408 households and, within each household, an average of three pigs were sampled (minimum 1; maximum 24). Table 2 Structure of the data from 1281 study pigs from six villages in M'Drak, Buon Don and Krong Nang districts Of the 408 households, 266 (65%, 95% CI: 60–70%) used a pit latrine; however, most of these latrines were of temporary construction, which animals were able to access. 
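Before continuing with the descriptive results, note that the apparent prevalence reported above (12 seropositive pigs out of 1281 sampled) is easy to cross-check numerically. The snippet below computes a Clopper-Pearson (exact binomial) 95% interval with SciPy; the paper does not state which interval method was used, so the output should be close to, but not necessarily identical with, the reported 0.51–1.68 cases per 100 pigs at risk.

```python
from scipy.stats import beta

x, n = 12, 1281                       # seropositive pigs, pigs tested
point = 100 * x / n                   # apparent prevalence per 100 pigs at risk

# Clopper-Pearson (exact) 95% confidence limits.
alpha = 0.05
lower = beta.ppf(alpha / 2, x, n - x + 1)
upper = beta.ppf(1 - alpha / 2, x + 1, n - x)

print(f"{point:.2f} per 100 pigs (95% CI {100 * lower:.2f}-{100 * upper:.2f})")
```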
A total of 35% (95% CI: 30–40%) of householders responded that their family members practiced outdoor defaecation; children typically defaecated around the main household building, while adults defaecated some distance from the main household building, within the confines of the household property. Seventy-six percent (95% CI: 72–80%) of households had a pigsty and 58% (95% CI: 53–63%) confined their pigs at all times. Free-roaming pigs habitually ranged around the village to seek food, returning to their litters located under stilt housing in the afternoon. Approximately 7% (95% CI: 5–10%) of households used water sourced from lakes, streams or ponds for their pigs. The remaining households used either rainwater or water from wells or pipes. Dogs were kept in 63% (95% CI: 59–68%) of the households that owned pigs (Table 3). Table 3 General description of household data Among the 1281 pigs that were sampled, 41% (95% CI: 38–44%) were of local breed (Soc). A total of 27% (95% CI: 24–29%) of the sampled pigs were reported to regularly consume human faeces. A little over half of the pigs were routinely offered raw, unwashed vegetables (57%; 95% CI: 54–59%). Commercial and/or homemade bran was offered to 89% (95% CI: 86–91%) of pigs. The small proportion of pigs that were not supplied bran (11%, 95% CI: 9–12%) were scavenging for the most part (Table 4). Table 4 General description of pig data
Risk factors for porcine T. solium cysticercosis
Of the data recorded using the questionnaire, we identified two factors associated with a pig's likelihood of being T. solium cysticercosis positive: (i) frequent coprophagy of human faeces; and (ii) scavenging for food. Estimated regression coefficients for the mixed-effects logistic regression model are provided in Table 5. After adjusting for the other explanatory variables in the model, the odds of a household where pigs routinely consumed human faeces being T. solium cysticercosis positive were 2.57 (95% CrI: 1.22–4.66) times that of a household where pigs did not consume human faeces. The odds ratio for a household where pigs routinely scavenged for food was 1.98 (95% CrI: 0.55–4.74). Table 5 Risk factors associated with T. solium cysticercosis positive in pigs In low-income rural communities of Vietnam, a substantial proportion of the human population practices open defaecation and uncooked pork and beef consumption is relatively common. Allowing pigs to roam freely is a common husbandry practice in the region [4, 5, 9, 28, 29]. Despite all relevant risk factors for cysticercosis being present in each of the communities in this study, the prevalence of T. solium cysticercosis was low at 0.94 (95% CI: 0.51–1.68) cases per 100 pigs at risk. A similar, low prevalence of cysticercosis has been observed not only in Dak Lak Province but in other regions of Vietnam [4, 6, 30]. In 1994, Huan, in a cross-sectional study of pigs submitted for slaughter from 12 provinces in the south of Vietnam, reported a prevalence of 0.90 (95% CI: 0.45–1.76) cases per 100 pigs at risk [6]. In three studies carried out in 10 provinces in the north of Vietnam between 1999 and 2003, the prevalence of cysticercosis ranged from 0 to 0.06 cases per 100 pigs at risk [31,32,33]. In the Province of Bac Ninh in the north of Vietnam, a known focus of T. solium in humans, carcass examination of 26 village pigs identified no cases of T. solium cysticercosis. Instead, 10 pigs were positive for T. hydatigena cysticerci [34]. These findings are in agreement with those of Conlan et al.
[35], who reported a low prevalence of T. solium cysticercosis of 0.8% in village pigs in Laos with a relatively high prevalence of T. hydatigena (22 cases per 100 pigs at risk) and a high prevalence of T. solium taeniasis. Conlan et al. [35] hypothesized that T. hydatigena is likely to cross-protect pigs from T. solium infection. In Vietnam the prevalence of T. hydatigena cysticercosis in pigs has been reported to be high, ranging between 25–38% in the north and the prevalence has been strongly correlated with the presence of T. hydatigena infection in dogs [36]. In addition, most of the households that owned pigs in the three study districts also kept dogs and approximately one half of the households owning dogs reported that they had observed proglottids in their dog's faeces (Table 3). All of the 10 free-roaming village pigs that were backyard slaughtered had T. hydatigena cysticerci present in the mesentery, stomach, spleen, and liver (personal observation from fieldwork). It is our inference that the relatively high prevalence of T. hydatigena infection in dogs is likely to be a major source of T. hydatigena cysticercosis in pigs. If this is true a cautioned approach to cysticercosis control in pigs would be advised if T. hydatigena were to be targeted through pig confinement and canine deworming programs. Eliminating or reducing pig exposure to T. hydatigena would likely result in an increase in the observed prevalence of T. solium in non-compliant free-roaming pigs, providing an increased public health risk to the community. Among the investigated households that owned pigs, a little under one-quarter did not have a pigsty and most of the pigs that were kept were of the local breed (Table 4). Local breeds are preferred due to their ability to thrive under harsh raising conditions and poor feeding [12]. Allowing pigs to free-roam for food was common practice (Table 3). The odds of a household where pigs routinely scavenged for food being seropositive for T. solium cysticercosis was 1.98 (95% CrI: 0.55–4.74) times greater than the odds of a household where pigs were fed commercial and/or homemade bran (Table 5). Similarly, the odds ratio for a household where pigs routinely consumed human faeces being seropositive was 2.57 (95% CrI: 1.22–4.66) times greater than the odds of a household where this practice was not habitual. It is known that risk factors for transmission and circulation of porcine cysticercosis are numerous and may vary in different settings. We found that allowing pigs to free-roam and allowing pigs to routinely consume human faeces were associated with T. solium exposure, consistent with other studies [37,38,39]. Research on the epidemiological characteristics of porcine cysticercosis in Peru [40], Mozambique [41] and Mexico [42] found that older pigs were more likely to show evidence of exposure to T. solium compared to younger pigs, an association not identified in this study. Similarly, while studies from Zambia [43], Mexico [44], and Tanzania [45] showed that the prevalence of T. solium exposure was higher in male pigs compared with females, this association was not identified in this study. In this study, both of the risk factors identified are modifiable, which means that there exists an opportunity to decrease the prevalence of porcine cysticercosis even further. 
We propose that a combination of intervention measures including education and public awareness campaigns, strategies to reduce coprophagy among pigs and enhanced meat inspection (particularly backyard slaughtered stock) are likely to have the greatest impact on porcine cysticercosis risk with positive secondary effects on human health [46]. For this to be successful there is a need for commitment and support from local and/or central veterinary and medical health authorities. The prevalence of porcine T. solium cysticercosis in this study was low at 0.94 (95% CI: 0.51–1.68) cases per 100 pigs at risk, in agreement with other studies conducted throughout Vietnam. Scavenging of food and coprophagy were associated with T. solium cysticercosis risk. Both of these characteristics are modifiable providing the opportunity to decrease the prevalence of porcine cysticercosis even further. EITB: enzyme-linked immunoelectrotransfer blot LLGP: lentil lectin purified glycoprotein rT24H: recombinant antigen T24H FAO/WHO. Multicriterial-based Ranking for Risk Management of Food-borne Parasites. Microbiological Risk Assessment Series No 23. In: Rome: WHO Press; 2014. Donadeu M, Lightowlers MW, Fahrion AS, Kessels J, Donadeu M, Lightowlers MW, et al. Taenia solium: WHO endemicity map update. Wkly Epidemiol Rec. 2016;49/50:595–9. Aung AK, Spelman DW. Taenia solium taeniasis and cysticercosis in southeast Asia. Am J Trop Med Hyg. 2016;94:947–54. Ng-Nguyen D, Stevenson MA, Traub RJ. A systematic review of taeniasis, cysticercosis and trichinellosis in Vietnam. Parasit Vectors. 2017;10:150. Huong NT. Taeniasis and cysticercosis in a selected group of inhabitants from a mountainous province in North Vietnam. Antwerpen, Belgium: Prince Leopold Institute of Tropical Medicine; 2006. Huan L Van. Parasitic helminths in pigs in several southern provinces and preventative measures National Institution of Veterinary Research; 1994. http://luanan.nlv.gov.vn/luanan?a=d&d=TTkFvmnEdmia1994.1.2&e=-%2D-%2D-%2D-vi-20%2D-1%2D-img-txIN-%2D-%2D-%2D-%23. Accessed 10 Nov 2015. Ng-Nguyen D, Stevenson MA, Dorny P, Gabriël S, Van VT, Nguyen VT, et al. Comparison of a new multiplex real-time PCR with the Kato-Katz thick smear and copro-antigen ELISA for the detection and differentiation of Taenia spp. in human stools. PLoS Negl Trop Dis. 2017;11:e0005743. Nguyen TH. Evaluation of market opportunities for producing local pigs in Dak Lak. Tay Nguyen J Sci. 2009;5:21–6. Van De N, Le TH, Lien PTH, Eom KS. Current status of taeniasis and cysticercosis in Vietnam. Korean J Parasitol. 2014;52:125–9. Asaava LL, Kitala PM, Gathura PB, Nanyingi MO, Muchemi G, Schelling E. A survey of bovine cysticercosis/human taeniosis in Northern Turkana District, Kenya. Prev Vet Med. 2009;89:197–204. Noh J, Rodriguez S, Lee YM, Handali S, Gonzalez AE, Gilman RH, et al. Recombinant protein- and synthetic peptide-based immunoblot test for diagnosis of neurocysticercosis. J Clin Microbiol. 2014;52:1429–34. Article PubMed PubMed Central CAS Google Scholar Tsang VCW, Brand JA, Boyer AE. An enzyme-linked immunoelectrotransfer blot assay and glycoprotein antigens for diagnosing human cysticercosis (Taenia solium). J Infect Dis. 1989;159:50–9. Article PubMed CAS Google Scholar Gonzalez AE, Cama V, Gilman RH, Tsang VCW, Pilcher JB, Chavera A, et al. Prevalence and comparison of serologic assays, necropsy, and tongue examination for the diagnosis of porcine cysticercosis in Peru. Am J Trop Med Hyg. 1990;43:194–9. 
Hancock K, Pattabhi S, Whitfield FW, Yushak ML, Lane WS, Garcia HH, et al. Characterization and cloning of T24, a Taenia solium antigen diagnostic for cysticercosis. Mol Biochem Parasitol. 2006;147:109–17. Handali S, Klarman M, Gaspard AN, Noh J, Lee YM, Rodriguez S, et al. Multiantigen print immunoassay for comparison of diagnostic antigens for Taenia solium cysticercosis and taeniasis. Clin Vaccine Immunol. 2010;17:68–72. Rodriguez S, Wilkins P, Dorny P. Immunological and molecular diagnosis of cysticercosis. Pathog Glob Health. 2012;106:286–98. Jayashi CM, Gonzalez AE, Castillo Neyra R, Rodríguez S, García HH, Lightowlers MW. Validity of the enzyme-linked immunoelectrotransfer blot (EITB) for naturally acquired porcine cysticercosis. Vet Parasitol. 2014;199:42–9. Kruschke JK. Review of doing Bayesian data analysis: a tutorial with R, JAGS, and Stan (second edition). Clin Neuropsychol. 2017;31:1268–70. Plummer M. JAGS Version 3.4.0 user manual. 2013;0–41. http://www.stats.ox.ac.uk/~nicholls/MScMCMC15/jags_user_manual.pdf Yu B, Mykland P. Looking at Markov samplers through cusum path plots: a simple diagnostic idea. Stat Comput. 1998;8:275–86. Robert CP, Casella G. Monte Carlo Statistical Methods. New York, USA: Springer New York; 2004. Book Google Scholar Raftery AE, Lewis SM. One long run with diagnostics: implementation strategies for Markov Chain Monte Carlo. Stat Sci. 1992;7:493–7. Raftery EA, Lewis MS. How many iterations in the Gibbs Sampler? In: Benardo J, Berger J, Dawid A, Smith A, editors. Bayesian Stat 4. New York: The Clarendon Press and: Oxford University Press; 1992. p. 763–74. Gelman A. Inference and monitoring convergence. In: Gilks W, Richardson S, Spiegelhalter D, editors. Markov Chain Monte Carlo Pract. London: Chapman & Hall; 1996. p. 131–43. Su Y-S, Yajima M. Using R to Run "JAGS." 2015. https://cran.r-project.org/web/packages/R2jags/R2jags.pdf Plummer M, Best N, Cowles K, Vines K, Sarkar D, Bates D, et al. Coda: Output Analysis and Diagnostics for MCMC. R News. 2016;6:7–11. Team. R Core. In: R: A Language and Environment for Statistical Computing [Internet]. Vienna, Austria: R Foundation for Statistical Computing; 2017. http://www.r-project.org. Anh Tuan P, Dung TTK, Nhi VA. Sero-epidemiological investigation of cysticercosis in the southern provinces. J Malar Parasite Dis Control. 2001;4:81–7. Trung DD, Praet N, Cam TDT, Lam BVT, Manh HN, Gabriël S, et al. Assessing the burden of human cysticercosis in Vietnam. Trop Med Int Health. 2013;18:352–6. Willingham AL, Van De N, Doanh NQ, Cong D, Dung TV, Dorny P, et al. Current status of cysticercosis in Vietnam. Southeast Asian J Trop Med Public Health. 2003;34:35–50. Van KP, Luc P. Veterinary Parasitology. Hanoi: Hanoi Agricultural Publishing House; 1996. Doanh NQ, Holland W, Vercruysse J, Van DN. Results of survey on cysticercosis pig in some northern provinces in Vietnam. J Malar Parasite Dis Control. 2002;6:76–82. De N Van, Chau L Van, Son DT, Chuyen LT, Hop NT, Vien HV, et al. Taenia solium survey in Hanoi. J Malar Parasite Dis Control. 2004;6:93–99. Doanh NQ, Kim NT, De NV, Lung NL. Result of survey on taeniasis and cysticercosis humans and pigs in Bac Ninh and Bac Kan provinces. Vet Sci Tech. 2002;9:46–9. Conlan JV, Vongxay K, Khamlome B, Dorny P, Sripa B, Elliot A, et al. A cross-sectional study of Taenia solium in a multiple taeniid-endemic region reveals competition may be protective. Am J Trop Med Hyg. 2012;87:281–91. Lan NTK, Quyen NT, Hoat PC. 
The correlation between the prevalence of the tapeworm Taenia hydatigena in dogs and their larvae Cysticercus tenuicollis in cattle and pigs - the effect of tapeworm treatment in dogs. Vet Sci Tech. 2011;18:60–5. Ngowi HA, Kassuku AA, Maeda GEM, Boa ME, Carabin H, Willingham AL. Risk factors for the prevalence of porcine cysticercosis in Mbulu District, Tanzania. Vet Parasitol. 2004;120:275–83. Sikasunge CS, Phiri IK, Phiri AM, Dorny P, Siziya S, Willingham AL. Risk factors associated with porcine cysticercosis in selected districts of Eastern and Southern provinces of Zambia. Vet Parasitol. 2007;143:59–66. Pouedet MSR, Zoli AP, Nguekam VL, Assana E, Speybroeck N, et al. Epidemiological survey of swine cysticercosis in two rural communities of West-Cameroon. Vet Parasitol. 2002;106:45–54. Lescano AG, García HH, Gilman RH, Guezala MC, Tsang VCW, Gavidia CM, et al. Swine cysticercosis hotspots surrounding Taenia solium tapeworm carriers. Am J Trop Med Hyg. 2007;76:376–83. Pondja A, Neves L, Mlangwa J, Afonso S, Fafetine J, Willingham AL 3rd, et al. Prevalence and risk factors of porcine cysticercosis in Angonia District, Mozambique. PLoS Negl Trop Dis. 2010;4:e594. Sarti Gutierrez E, Schantz PM, Aguilera J, Lopez A. Epidemiologic observations on porcine cysticercosis in a rural community of Michoacan State, Mexico. Vet Parasitol. 1992;41:195–201. Sikasunge CS. The prevalence and transmission risk factors of porcine cysticercosis in eastern and southern provinces of Zambia. MSc Thesis: University of Zambia, Lusaka, Zambia; 2005. García HH, Gilman RH, Gonzalez AE, Verastegui M, Rodriguez S, Gavidia C, et al. Hyperendemic human and porcine Taenia solium infection in Perú. Am J Trop Med Hyg. 2003;68:268–75. Shonyela SM, Mkupasi EM, Sikalizyo SC, Kabemba EM, Ngowi HA, Phiri I. An epidemiological survey of porcine cysticercosis in Nyasa District, Ruvuma Region, Tanzania. Parasite Epidemiol Control. 2017;2:35–41. Gabriël S, Dorny P, Mwape KE, Trevisan C, Braae UC, Magnussen P, et al. Control of Taenia solium taeniasis/cysticercosis: The best way forward for sub-Saharan Africa? Acta Trop. 2017;165:252–60. We are grateful to the Institute of Biotechnology and Environment Tay Nguyen University for providing resources and facilities for the fieldwork. We thank to local veterinarian staff at M'Drak, Krong Nang and Buon Don district for assisting in sample collection. The authors are most thankful to Ms. Nguyen Thi Ngoc Hien, Ms. Long Khanh Linh, and Ms. Nguyen Thi Lan Huong, who assisted with laboratory work. This research was self-funded by RJT. This work was done with partial support for travel from the Faculty of Veterinary and Agricultural Sciences, University of Melbourne, Australia. DNN received PhD scholarship from Australia Awards Scholarships, Department of Foreign Affairs and Trade, Australia Government. The materials for serological diagnosis were provided by the Division of Parasitic Diseases and Malaria, Centers for Disease Control and Prevention, Atlanta, Georgia, United States of America. All relevant data are included within this published article. 
Faculty of Veterinary and Agricultural Sciences, University of Melbourne, Parkville, Victoria, 3052, Australia Dinh Ng-Nguyen, Mark Anthony Stevenson & Rebecca Justine Traub Faculty of Animal Sciences and Veterinary Medicine, Tay Nguyen University, Buon Ma Thuot, Dak Lak, Vietnam Dinh Ng-Nguyen Division of Parasitic Diseases and Malaria, Centers for Disease Control and Prevention, Atlanta, Georgia, USA John Noh & Sukwan Handali Department of Livestock, Montana Veterinary Diagnostic Lab, Bozeman, Montana, USA Kathleen Breen John Noh Mark Anthony Stevenson Sukwan Handali Rebecca Justine Traub DNN designed study, analyzed data and wrote manuscript; JN: provided material, designed study and edited paper; KB: assisted laboratory work, MAS: assisted with analyses of data and edited paper; SH: provided material, designed study and edited paper; RJT provided material, supervised study and edited paper. All authors read and approved the final manuscript. Correspondence to Dinh Ng-Nguyen. This study was reviewed and approved by the Animal Ethics and scientific committee, Tay Nguyen University (reference number 50.KCNTY), and conducted under the supervision of the local center for animal health, Dak Lak, Vietnam. Verbal consent was obtained to participate in the study. The findings and conclusions in this report are those of the author(s) and do not necessarily represent the official position of the Centers for Disease Control and Prevention. Ng-Nguyen, D., Noh, J., Breen, K. et al. The epidemiology of porcine Taenia solium cysticercosis in communities of the Central Highlands in Vietnam. Parasites Vectors 11, 360 (2018). https://doi.org/10.1186/s13071-018-2945-y Porcine cysticercosis
Let $a\equiv (3^{-1}+5^{-1}+7^{-1})^{-1}\pmod{11}$. What is the remainder when $a$ is divided by $11$? One way to do this is to find each inverse explicitly: \begin{align*} (3^{-1}+5^{-1}+7^{-1})^{-1} &\equiv (4+9+8)^{-1} \pmod{11} \\ &\equiv 21^{-1} \pmod{11} \\ &\equiv 10^{-1} \pmod{11} \\ &\equiv \boxed{10}\pmod{11}. \end{align*} Another way to do this is through manipulation: \begin{align*} & (3^{-1}+5^{-1}+7^{-1})^{-1}\\ \equiv~ & (3\cdot 5\cdot 7)(3\cdot 5\cdot 7)^{-1}(3^{-1}+5^{-1}+7^{-1})^{-1}\\ \equiv~ & (3\cdot 5\cdot 7)(3\cdot 5+5\cdot 7+ 7\cdot 3)^{-1}\\ \equiv~ & 6\cdot(15+35+21)^{-1}\\ \equiv~ & 6\cdot 5^{-1}\\ \equiv~ & 6\cdot 9\\ \equiv~ & \boxed{10} \pmod{11} \end{align*}
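The computation above can be verified mechanically. The snippet below uses Python's built-in modular inverse (supported by `pow` with exponent $-1$ since Python 3.8) to confirm that $a\equiv 10\pmod{11}$.

```python
m = 11
inv_sum = (pow(3, -1, m) + pow(5, -1, m) + pow(7, -1, m)) % m  # 4 + 9 + 8 = 21 = 10 (mod 11)
a = pow(inv_sum, -1, m)                                        # inverse of 10 modulo 11
print(a)  # 10
```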
Tagged: range of a matrix In which $\R^k$, are the Nullspace and Range Subspaces? Let $A$ be an $m \times n$ matrix. Suppose that the nullspace of $A$ is a plane in $\R^3$ and the range is spanned by a nonzero vector $\mathbf{v}$ in $\R^5$. Determine $m$ and $n$. Also, find the rank and nullity of $A$. Let $A=\begin{bmatrix} 2 & 4 & 6 & 8 \\ 1 &3 & 0 & 5 \\ 1 & 1 & 6 & 3 \end{bmatrix}$. (a) Find a basis for the nullspace of $A$. (b) Find a basis for the row space of $A$. (c) Find a basis for the range of $A$ that consists of column vectors of $A$. (d) For each column vector which is not a basis vector that you obtained in part (c), express it as a linear combination of the basis vectors for the range of $A$. Click here if solved 142 Find a Basis for Nullspace, Row Space, and Range of a Matrix Describe the Range of the Matrix Using the Definition of the Range Using the definition of the range of a matrix, describe the range of the matrix \[A=\begin{bmatrix} 2 & 4 & 1 & -5 \\ 1 &2 & 1 & -2 \\ 1 & 2 & 0 & -3 \end{bmatrix}.\] Find Bases for the Null Space, Range, and the Row Space of a $5\times 4$ Matrix 1 & -1 & 0 & 0 \\ 0 & 2 & 2 & 2\\ (a) Find a basis for the null space $\calN(A)$. (b) Find a basis of the range $\calR(A)$. (c) Find a basis of the row space for $A$. (The Ohio State University, Linear Algebra Midterm) by Yu · Published 03/01/2017 · Last modified 07/03/2017 Quiz 7. Find a Basis of the Range, Rank, and Nullity of a Matrix (a) Let $A=\begin{bmatrix} Find a basis for the range $\calR(A)$ of $A$ that consists of columns of $A$. (b) Find the rank and nullity of the matrix $A$ in part (a). Row Equivalent Matrix, Bases for the Null Space, Range, and Row Space of a Matrix Let \[A=\begin{bmatrix} 1 & 1 & 2 \\ 2 &2 &4 \\ 2 & 3 & 5 (a) Find a matrix $B$ in reduced row echelon form such that $B$ is row equivalent to the matrix $A$. (b) Find a basis for the null space of $A$. (c) Find a basis for the range of $A$ that consists of columns of $A$. For each columns, $A_j$ of $A$ that does not appear in the basis, express $A_j$ as a linear combination of the basis vectors. (d) Exhibit a basis for the row space of $A$. Prove that the Dot Product is Commutative: $\mathbf{v}\cdot \mathbf{w}= \mathbf{w} \cdot \mathbf{v}$ The Sum of Subspaces is a Subspace of a Vector Space Determine Bases for Nullspaces $\calN(A)$ and $\calN(A^{T}A)$ Solve a Linear Recurrence Relation Using Vector Space Technique True or False: $(A-B)(A+B)=A^2-B^2$ for Matrices $A$ and $B$ The Matrix for the Linear Transformation of the Reflection Across a Line in the Plane Find a Basis of the Eigenspace Corresponding to a Given Eigenvalue Examples of Prime Ideals in Commutative Rings that are Not Maximal Ideals
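For the $3\times 4$ matrix in the problem above, the requested bases can be checked symbolically. The snippet below is an illustrative verification with SymPy rather than part of the original solution.

```python
from sympy import Matrix

A = Matrix([[2, 4, 6, 8],
            [1, 3, 0, 5],
            [1, 1, 6, 3]])

rref, pivots = A.rref()               # reduced row echelon form and pivot columns
print("Pivot columns:", pivots)
print("Nullspace basis:", A.nullspace())
print("Range (column space) basis:", A.columnspace())
print("Row space basis:", A.rowspace())
```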
CommonCrawl
\begin{document} \title{Anonymity for practical quantum networks} \author{Anupama Unnikrishnan}\affiliation{Department of Atomic and Laser Physics, Clarendon Laboratory, University of Oxford, Oxford OX1 3PU, UK} \author{Ian J. MacFarlane}\affiliation{Massachusetts Institute of Technology, Cambridge, Massachusetts, USA} \author{Richard Yi}\affiliation{Massachusetts Institute of Technology, Cambridge, Massachusetts, USA} \author{Eleni Diamanti}\affiliation{LIP6, CNRS, Sorbonne Universit\'e, 75005 Paris, France} \author{Damian Markham}\affiliation{LIP6, CNRS, Sorbonne Universit\'e, 75005 Paris, France} \author{Iordanis Kerenidis}\affiliation{IRIF, CNRS, Universit\'e Paris Diderot, Sorbonne Paris Cit\'e, 75013 Paris, France} \begin{abstract} Quantum communication networks have the potential to revolutionise information and communication technologies. Here we are interested in a fundamental property and formidable challenge for any communication network, that of guaranteeing the anonymity of a sender and a receiver when a message is transmitted through the network, even in the presence of malicious parties. We provide the first practical protocol for anonymous communication in realistic quantum networks. \end{abstract} \date{\today} \maketitle The rapid development of quantum communication networks will allow a large number of agents with different technological, classical or quantum, capabilities to securely exchange messages and perform efficiently distributed computational tasks, opening new perspectives for information and communication technologies and eventually leading to the quantum internet \cite{Kim:nature08}. Many applications of quantum networks are known, including, for example, quantum key distribution (QKD) \cite{SBC:rmp09,DLQ:npjqi16} or blind and verifiable delegation of quantum computation \cite{GKK:tcs18}, and many more are yet to be developed. A crucial yet challenging functionality required in any network is the ability to guarantee the anonymity of two parties, the Sender and the Receiver, when they wish to transmit a message through the network. In a realistic network, anonymity should be guaranteed in the presence of malicious parties. We would additionally like that this happens in an information-theoretic setting, meaning without making any assumptions neither on the number nor on the computational power of these malicious parties, who might in fact have a quantum computer in their hands. In the classical setting, anonymity, as well as any multiparty secure computation, is possible with information-theoretic security when there is an honest majority of agents. Furthermore, Broadbent and Tapp \cite{BT:asiacrypt07} showed how to anonymously transmit a classical message, as well as a number of other secure protocols, in the absence of an honest majority. In order to do this, secure pairwise classical channels are required, as well as classical broadcast channels. In the quantum setting, the first work to deal with the anonymity of quantum messages was that of Christandl and Wehner \cite{CW:asiacrypt05}. In their work, one assumes that the $n$ agents share a perfect $n$-party GHZ state, \emph{i.e.}, the state $\frac{1}{\sqrt{2}} ( \ket{0^n}+\ket{1^n} )$ \cite{GHZ}. Under this assumption, they provide protocols with perfect anonymity both for the broadcast of a classical bit and for the creation of an EPR pair between a Sender and a Receiver. Then, they combine the two protocols in order to transmit a quantum message using a teleportation scheme \cite{BBC:prl93}. 
This first creates an EPR pair anonymously between Sender and Receiver, and then the Sender transmits the two classical outcomes of her measurements anonymously. The advantage of this protocol is that it only involves local operations and classical communication (LOCC) once the GHZ state is shared between the agents. However, it requires the assumption that a perfect GHZ state has been honestly shared between the agents. More recently, Lipinska \emph{et al.} \cite{LMW:pra18} showed how to perform a similar protocol starting from trusted W states, albeit only probabilistically. In order to remedy the drawback of a perfect shared quantum state, Brassard \emph{et al.} \cite{BBF:asiacrypt07} devised a different protocol, which includes a verification stage for ensuring that the shared state is at least symmetric with respect to the honest agents, and hence perfect anonymity is preserved. This test involves each agent performing a controlled-NOT operation between her initial quantum bit (qubit) and $n-1$ fresh ancilla qubits that she then sends to all other agents. Each agent then measures $n-1$ qubits in the subspace spanned by the all zeros and all ones strings and if the measurement accepts then the protocol continues with the remaining $n$-party GHZ state. While the authors manage in this way to preserve perfect anonymity, their protocol cannot be easily implemented, since each agent needs to perform a size-$n$ quantum circuit and also to have access to quantum communication with all other agents. We address this problem by considering quantum anonymous transmission in the presence of an untrusted source that may not be producing the GHZ state. Our two main ingredients are the Christandl-Wehner protocol for anonymous entanglement \cite{CW:asiacrypt05}, and a protocol for verifying GHZ states described in Ref. \cite{PAW:prl12}. We then present a new notion of approximate anonymity that is appropriate for realistic quantum networks, and show a practical and efficient protocol to achieve such anonymity in the transfer of a quantum message. \textbf{Communication scenario}.---Let us first describe the communication scenario we consider. Our network consists of $n$ agents who can perform local operations and measurements. A source, who may be malicious, produces GHZ states that our agents wish to use for anonymous quantum communication. The source may produce a different state in every round, or even entangle the states between different rounds. The agents themselves may be honest or malicious. Honest agents follow the protocol but malicious agents can collaborate with the source, work together, and apply any cheating strategy on their systems, including entangling them with some ancilla that they may store in memory to be accessed at will. The aim of the malicious agents is to break the anonymity or security of the protocol. In addition to public quantum channels between all agents, we require some classical communication channels. More specifically, we assume there are private classical channels between each pair of agents. This can be ensured by each pair of agents sharing a private random string, and is a standard assumption if we have malicious agents in a classical network. Furthermore, each agent has access to a broadcast channel, which she can use to send classical information to all other agents. We will use the term simultaneous broadcast when it is required that all agents must broadcast their bit simultaneously, which is an impractical resource as it is hard to ensure in practice. 
Crucially, we only need a regular (or non-simultaneous) broadcast channel in our anonymous quantum communication protocol; all the subprotocols that we use remove the requirement of simultaneous broadcasting. \textbf{Anonymous classical protocols}.---We start by providing the details of a few known anonymous classical protocols, some of which we will use directly. First, there exists a classical private protocol from Ref. \cite{BT:asiacrypt07}, \textsf{LogicalOR}, where each agent inputs a single bit and the protocol computes the logical OR of these bits. This protocol has correctness in that if the input of all agents is $0$, the protocol always outputs the correct answer (\emph{i.e.} $0$). If any agent inputs $1$, this protocol succeeds (\emph{i.e.} outputs $1$) with probability $1 - 2^{-S}$ after $S$ rounds. Privacy here means that only the agent can know their input. \textsf{LogicalOR} is built using another protocol \textsf{Parity} \cite{BT:asiacrypt07}, which privately computes the parity of the input string; however, contrary to the the \textsf{Parity} protocol, \textsf{LogicalOR} does not require a simultaneous broadcast channel. Further details of both protocols are given in Appendix A. We will use the \textsf{LogicalOR} protocol in order to create the functionality \textsf{RandomBit}, given in Protocol 1, which allows the Sender to anonymously choose a random bit according to some probability distribution $D$. The correctness and privacy of \textsf{RandomBit} follow directly from the properties of \textsf{LogicalOR}, namely the only thing the malicious agents learn is the bit chosen by the Sender, but not who the Sender is. We then extend the \textsf{RandomBit} functionality to define a \textsf{RandomAgent} functionality, where the Sender privately picks a random agent by performing the \textsf{RandomBit} protocol $\log_2 n$ times. \begin{algorithm}[H] \caption{\textsf{RandomBit}} \begin{flushleft} \textit{Input:} All: parameter $S$. Sender: distribution $D$. \\ \textit{Goal:} Sender chooses a bit according to $D$. \end{flushleft} \begin{algorithmic}[1] \STATE The agents pick bits $\{ x_i \}_{i = 1}^n$ as follows: the Sender picks bit $x_i$ to be 0 or 1 according to distribution $D$; all other agents pick $x_i =0$. \\ \ \STATE Perform the \textsf{LogicalOR} protocol with input $\{ x_i \}_{i=1}^n$ and security parameter $S$ and output its outcome. \end{algorithmic} \end{algorithm} Last, we need the \textsf{Notification} functionality \cite{BT:asiacrypt07}, given in Protocol 2, where the Sender anonymously notifies an agent as the Receiver. Note that we use the same security parameter $S$ throughout for simplicity, however this is not required. As we explicitly call on this in our main protocol, we describe it below. \begin{algorithm}[H] \caption{\textsf{Notification} \cite{BT:asiacrypt07}} \begin{flushleft} \textit{Input}: Security parameter $S$, Sender's choice of Receiver is agent $r$. \\ \textit{Goal}: Sender notifies Receiver. \end{flushleft} \begin{algorithmic}[1] \STATE For each agent $i$: \begin{enumerate} \item[(a)] Each agent $j \neq i$ picks $p_j$ as follows: if $i = r$ and agent $j$ is the Sender, then $p_j = 1$ with probability $\frac{1}{2}$ and $p_j = 0$ with probability $\frac{1}{2}$. Otherwise, $p_j = 0$. Let $p_i = 0$. \item[(b)] Run the \textsf{Parity} protocol with input $\{p_i\}_{i=1}^n$, with the following differences: agent $i$ does not broadcast her value, and they use a regular broadcast channel rather than simultaneous broadcast. 
If the result is $1$, then $y_i = 1$. \item[(c)] Repeat steps 1(a) - (b) $S$ times. If the result of the \textsf{Parity} protocol is never 1, then $y_i = 0$. \\ \ \end{enumerate} \STATE If agent $i$ obtained $y_i = 1$, then she is the Receiver. \end{algorithmic} \end{algorithm} \textbf{Anonymous entanglement with perfect trusted GHZ states}.---In addition to the previous classical protocols, we will need the \textsf{Anonymous Entanglement} protocol from Ref. \cite{CW:asiacrypt05}, given in Protocol 3. Here, it is assumed that the agents share a state which in the honest case is the GHZ state, and that the Sender and Receiver know their respective identities. It is not hard to see that assuming the initial state is a perfect GHZ state, then the protocol creates an EPR pair between the Sender and the Receiver perfectly anonymously. \begin{algorithm}[H] \caption{\textsf{Anonymous Entanglement} \cite{CW:asiacrypt05}} \begin{flushleft} \textit{Input}: $n$ agents share a GHZ state. \\ \textit{Goal}: EPR pair shared between Sender and Receiver. \end{flushleft} \begin{algorithmic}[1] \STATE Each agent, apart from the Sender and Receiver, applies a Hadamard transform to their qubit. They measure in the computational basis and broadcast their outcome. \\ \ \STATE The Sender first picks a random bit $b$, broadcasts it, and applies a phase flip $\sigma_z$ only when $b=1$. \\ \ \STATE The Receiver picks a random bit $b'$, broadcasts it and applies a phase flip $\sigma_z$ only when the parity of everyone else's broadcasted bits is 1. \end{algorithmic} \end{algorithm} \textbf{Efficient verification of GHZ states}.---The last ingredient we use is the \textsf{Verification} protocol for GHZ states from the work of Pappa \emph{et al.} \cite{PAW:prl12} that was also implemented for 3- and 4-party GHZ states in McCutcheon \emph{et al.} \cite{MPB:natcomm16}. There, one of the agents, the Verifier, would like to verify how close the shared state is to the ideal state. Let $k$ be the number of honest agents. The verification protocol is then given in Protocol 4. \begin{algorithm}[H] \caption{\textsf{Verification} \cite{PAW:prl12,MPB:natcomm16}} \begin{flushleft} \textit{Input}: $n$ agents share state $\ket{\Psi}$. \\ \textit{Goal}: GHZ verification of $\ket{\Psi}$ for $k$ honest agents. \end{flushleft} \begin{algorithmic}[1] \STATE The Verifier generates random angles $\theta_j \in [0,\pi)$ for all agents including themselves ($j\in[n]$), such that $\sum_j \theta_j$ is a multiple of $\pi$. The angles are then sent out to all the agents in the network. \\ \ \STATE Agent $j$ measures in the basis $\{\ket{+_{\theta_j}},\ket{-_{\theta_j}}\}=\{\frac{1}{\sqrt{2}}(\ket{0}+e^{i\theta_j}\ket{1}),\frac{1}{\sqrt{2}}(\ket{0}-e^{i\theta_j}\ket{1})\}$, and sends the outcome $Y_j=\{0,1\}$ to the Verifier. \\ \ \STATE The state passes the verification test when the following condition is satisfied: if the sum of the randomly chosen angles is an even multiple of $\pi$, there must be an even number of $1$ outcomes for $Y_j$, and if the sum is an odd multiple of $\pi$, there must be an odd number of $1$ outcomes for $Y_j$. We can write this condition as $ \bigoplus_j Y_j=\frac{1}{\pi}\sum_j\theta_j\pmod 2. $ \end{algorithmic} \end{algorithm} From the proofs in Refs.\@ \cite{PAW:prl12} and \cite{MPB:natcomm16}, one can see that the ideal state always passes the verification test, and, more interestingly, a soundness statement can also be proven. 
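To illustrate the completeness of this test, the following short numerical sketch (our own illustrative Python code, not taken from Refs. \cite{PAW:prl12,MPB:natcomm16}; all function and variable names are ours) prepares an ideal $n$-qubit GHZ state, which is equivalent to the ideal state of the analysis below up to local unitaries, draws angles $\theta_j \in [0,\pi)$ whose sum is a multiple of $\pi$, measures each qubit in the basis $\{\ket{+_{\theta_j}},\ket{-_{\theta_j}}\}$ by direct state-vector manipulation, and checks the acceptance condition of step 3. For the ideal state the condition holds in every round, as expected from the completeness of the test.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def ghz_state(n):
    # (|0...0> + |1...1>)/sqrt(2) as a dense state vector of length 2**n
    psi = np.zeros(2**n, dtype=complex)
    psi[0] = psi[-1] = 1.0 / np.sqrt(2.0)
    return psi

def rotated_measurement(psi, thetas):
    # Measure qubit j in {|+_theta_j>, |-_theta_j>}; outcome 0 <-> |+>, 1 <-> |->.
    U = np.array([[1.0]], dtype=complex)
    for t in thetas:
        Mj = np.array([[1.0,  np.exp(-1j * t)],
                       [1.0, -np.exp(-1j * t)]], dtype=complex) / np.sqrt(2.0)
        U = np.kron(U, Mj)
    probs = np.abs(U @ psi) ** 2
    probs /= probs.sum()
    y = rng.choice(len(probs), p=probs)
    n = len(thetas)
    return [(y >> (n - 1 - j)) & 1 for j in range(n)]

def verification_round(n):
    # Step 1: angles in [0, pi) whose sum is a multiple of pi.
    thetas = rng.uniform(0.0, np.pi, size=n)
    thetas[-1] = (-np.sum(thetas[:-1])) % np.pi
    # Step 2: every agent measures in the rotated basis and reports the outcome.
    outcomes = rotated_measurement(ghz_state(n), thetas)
    # Step 3: the parity of the outcomes must match (sum of angles)/pi mod 2.
    target = int(round(np.sum(thetas) / np.pi)) % 2
    return sum(outcomes) % 2 == target

print(all(verification_round(4) for _ in range(200)))  # ideal GHZ state: always True
\end{verbatim}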
As in \cite{PAW:prl12}, we take the ideal $n$-party state to be $\ket{\Phi_0^n}$, given by: \begin{align} \ket{\Phi_0^n} = \frac{1}{\sqrt{2^{n-1}}} \Big[ \underset{\Delta(y) = 0 \text{ (mod 4)}}{\sum} \ket{y} - \underset{\Delta(y) = 2 \text{ (mod 4)}}{\sum} \ket{y} \Big],\nonumber \end{align} where $\Delta(y) = \sum_i y_i$ denotes the Hamming weight of the classical $n$-bit string $y$. This state is equivalent to the GHZ state up to local unitaries. Analogous to \cite{PAW:prl12, MPB:natcomm16}, to measure the quality of the state $\ket{\Psi}$ shared between the $n$ agents, we take a fidelity measure given by $F'(\ket{\Psi}) = \underset{U}{\max \ } F(U \ket{\Psi}, \ket{\Phi_0^n})$, where $U$ is any unitary operation on the space of the malicious agents. This reflects the fact that we are concerned with certifying the state up to operations on the malicious parts, since these are in any case out of the control of the honest agents. Then, even assuming the malicious agents apply their optimal cheating strategy, the probability of passing the test with the state $\ket{\Psi}$, denoted by $P(\ket{\Psi})$, satisfies $F'(\ket{\Psi}) \geq 4 P(\ket{\Psi}) - 3$ \cite{PAW:prl12, MPB:natcomm16}. Note that this holds even if the shared state is mixed; however, as we will see later, a clever malicious source will always create pure states. For our purposes, we will use below a version of this verification protocol that is similar to the Symmetric Verification protocol in Ref. \cite{PAW:prl12}. There, it was shown that with the use of a trusted common random string it was possible for all agents to take random turns verifying the validity of the GHZ state. This leads to the guarantee that if the state is accepted a large number of times before the agents decide to use it, then with high probability, when the state is used it should be very close to the correct one. \textbf{Anonymity for realistic quantum networks}.---All the quantum protocols we have seen that are used to achieve anonymity assume perfect operations and achieve perfect anonymity. In practice, of course, no operation can be perfect and hence perfect anonymity is unattainable. Nevertheless, it is still possible to define an appropriate notion of anonymity that is relevant for practical protocols. We define the notion of an {\em $\epsilon-$anonymous} protocol, where for any number $n-k$ of malicious agents out of $n$ agents in total, the malicious agents, even when they have in their possession the entire quantum state that corresponds to the protocol, can only guess who the Sender is (even when the Receiver is malicious) or who the Receiver is, with probability that is bounded by $\frac{1}{k}+\epsilon$. The perfect anonymity is defined when $\epsilon$ is equal to 0. \textbf{Efficient anonymous quantum message transmission}.---We will now show how to devise an efficient $\epsilon$-anonymous protocol for quantum message transmission. For simplicity we assume there is only one Sender. If not, the agents can run a simple classical protocol in the beginning of the protocol in order to deal with collisions (multiple Senders) and achieve the unique Sender property. See also Refs. \cite{BT:asiacrypt07} and \cite{CW:asiacrypt05} for details. Moreover, for simplicity we will describe a protocol where we distribute one EPR pair between the Sender and the Receiver. 
Then one can perform anonymous teleportation of the classical measurement results, using in particular the \textsf{Fixed Role Anonymous Message Transmission} functionality as was described in Ref. \cite{BT:asiacrypt07}. In case we want to increase the fidelity of the transmitted quantum message, we can further use the subroutines from Brassard \emph{et al.} \cite{BBF:asiacrypt07} which first create a number of non-perfect EPR pairs, then distill one pair and then perform the teleportation. Given that our main contribution is the efficient anonymous protocol for the GHZ verification, we do not provide here these details that are explained in Ref. \cite{BT:asiacrypt07}. Our scheme is outlined in Protocol 5. \begin{algorithm}[H] \caption{\textsf{$\epsilon$-Anonymous Entanglement Distribution}} \begin{flushleft} \textit{Input}: Security parameter $S$.\\ \textit{Goal}: EPR pair created between Sender and Receiver with $\epsilon$-anonymity. \end{flushleft} \begin{algorithmic}[1] \STATE {\bf The Sender notifies the Receiver:} The agents run the \textsf{Notification} protocol. \\ \ \STATE {\bf GHZ state generation:} The source generates a state $\ket{\Psi}$ and distributes it to the agents. \\ \ \STATE {\bf The Sender anonymously chooses Verification or Anonymous Entanglement:} \begin{enumerate} \item[(a)] The agents perform the \textsf{RandomBit} protocol, with the Sender choosing her input according to the following probability distribution: she flips $S$ fair classical coins, and if all coins are heads, she inputs $0$, else she inputs $1$. Let the outcome be $x$. \item[(b)] If $x=0$, the agents run \textsf{Anonymous Entanglement}, \;\;\; else if $x=1$: \begin{enumerate} \item[(i)] Run the \textsf{RandomAgent} protocol, where the Sender inputs a uniformly random $j \in [n]$, to get output $j$. \item[(ii)] Agent $j$ runs the \textsf{Verification} protocol as the Verifier, and if she accepts the outcome of the test they return to step 2, otherwise the protocol aborts. \end{enumerate} \end{enumerate} If at any point in the protocol, the Sender realises someone does not follow the protocol, she stops behaving like the Sender and behaves as any agent. \end{algorithmic} \end{algorithm} We are now ready to analyse the above protocol. First, note that if the state is a perfect GHZ state and the operations of the honest agents are perfect, then the anonymity of the protocol is perfect. In step 1, the agents run the \textsf{Notification} protocol which is perfectly anonymous. In the second step, the GHZ state is shared between the agents, which does not affect the anonymity. Note that the role of the source can be played by an agent, as long as the choice of the agent is independent of who the Sender is. In step 3(a), the agents run the \textsf{RandomBit} protocol which is also perfectly anonymous. The analysis of the step 3(b) follows from the analysis of the Symmetric Verification protocol in Ref. \cite{PAW:prl12}. The only difference here is that instead of using a common random string, it is the Sender who picks the randomness uniformly. Thus, since the input of the Sender completely determines the outcome of the protocol, the Sender can immediately see if her choice does not correspond to the outcome, and hence only continues if the randomness is perfectly uniform. 
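For concreteness, the following Python sketch (illustrative only; it follows the descriptions of \textsf{Parity}, \textsf{LogicalOR} and \textsf{RandomBit} given above and in Appendix A, and is not the implementation of Ref. \cite{BT:asiacrypt07}; names are ours) shows how the classical subroutines drive step 3(a): \textsf{Parity} is realised by XOR secret sharing, \textsf{LogicalOR} repeats a randomised \textsf{Parity}, and the Sender biases her \textsf{RandomBit} input with $S$ fair coins, so that a round is used for anonymous entanglement with probability $2^{-S}$ and for verification otherwise. The $n$ broadcast orderings of \textsf{LogicalOR}, which only matter for removing the simultaneous broadcast, are not modelled here.
\begin{verbatim}
import random

def parity(bits):
    # Parity via XOR secret sharing: agent i splits x_i into n shares whose XOR
    # is x_i and sends one share to every agent; each agent announces the XOR of
    # the shares it received, and the XOR of all announcements equals XOR_i x_i.
    n = len(bits)
    shares = [[random.randint(0, 1) for _ in range(n - 1)] for _ in bits]
    for x, s in zip(bits, shares):
        s.append(x ^ (sum(s) % 2))      # last share fixes the XOR to x_i
    announced = [sum(shares[i][j] for i in range(n)) % 2 for j in range(n)]
    return sum(announced) % 2

def logical_or(bits, S):
    # LogicalOR: repeat a randomised Parity; output 1 as soon as one run gives 1.
    for _ in range(S):
        p = [0 if b == 0 else random.randint(0, 1) for b in bits]
        if parity(p) == 1:
            return 1
    return 0                            # wrong with probability 2**-S if some x_i = 1

def random_bit(n, S, sender, sender_bit):
    # RandomBit (Protocol 1): only the Sender may input a nonzero bit.
    bits = [0] * n
    bits[sender] = sender_bit
    return logical_or(bits, S)

def sender_round_choice(S):
    # Step 3(a) of Protocol 5: input 0 (use the state) only if S fair coins are all heads.
    return 0 if all(random.randint(0, 1) == 0 for _ in range(S)) else 1

n, S, sender = 5, 7, 2
x = sender_round_choice(S)
outcome = random_bit(n, S, sender, x)
print("use the state" if outcome == 0 else "verify the state")
\end{verbatim}
In an honest run the output of \textsf{RandomBit} equals the Sender's input, so on the order of $2^{S}$ rounds are expected to be spent on verification before a state is actually used for the \textsf{Anonymous Entanglement} step.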
Let $C_\epsilon$ be the event that the above protocol does not abort and that the state used for the \textsf{Anonymous Entanglement} protocol is such that no matter what operation the malicious agents do to their part, the fidelity of the state with the GHZ state is at most $\sqrt{1 - \epsilon^2}$. Then, we prove the following Theorem for the honest agents: \begin{theorem} For all $\epsilon >0$, \begin{equation} \text{Pr}[C_\epsilon] \leq 2^{-S}\frac{4n}{1 - \sqrt{1 - \epsilon^2}}. \end{equation} \end{theorem} \begin{proof-sketch} As proved in \cite{PAW:prl12}, the optimal cheating strategy of a malicious source, which maximises the probability of $C_\epsilon$, is to create in each round of the protocol a pure state $\ket{\Psi}$ such that $F'(\ket{\Psi}) = \sqrt{1-\epsilon^2}$. The probability of event $C_\epsilon$ is then given by the probability of the state being used and all the tests being passed in the previous rounds. This in turn will depend on the success probability of \textsf{RandomBit}, and if the agent chosen to act as the Verifier is honest. Given that a state with $F'(\ket{\Psi})$ passes the verification protocol with probability $P(\ket{\Psi})$, we can then determine a bound on $\text{Pr}[C_\epsilon]$ by following the proof in Ref. \cite{PAW:prl12}. The full proof is given in Appendix B. \end{proof-sketch} By taking $S= \log_2 (\frac{4n}{(1- \sqrt{1 - \epsilon^2})\delta})$, we have $\text{Pr}[C_\epsilon] \leq \delta$. Let us assume for simplicity that when the event $C_\epsilon$ is true, which happens with probability at most $\delta$, the malicious agents can perfectly guess the Sender or the Receiver. We will now see that when the event $C_\epsilon$ is false, which happens with probability at least $1-\delta$, the malicious agents cannot guess the Sender or the Receiver with probability much higher than a random guess. In other words, there is no strategy for breaking the anonymity of the communication that works much better than simply guessing an honest agent at random. Note that $C_\epsilon$ being false means that the fidelity of the shared state with the GHZ state (up to a local operation on the malicious agents) is at least $\sqrt{1-\epsilon^2}$. By doing enough rounds, we can ensure that the probability of $C_\epsilon$ is negligible. Our statement of anonymity is given as follows: \begin{theorem} If the agents share a state $\ket{\Psi}$ such that $F'(\ket{\Psi}) \geq \sqrt{1-\epsilon^2}$, then the probability that the malicious agents can guess the identity of the Sender is given by: \begin{align} \text{Pr}[\text{guess}] & \leq \frac{1}{k} + \epsilon. \end{align} \label{th:anon} \end{theorem} \begin{proof-sketch} First, we show that when the shared state is close to the GHZ state (up to some operation $U$ on the malicious agents' part of the state), then the fidelity between the final state of the protocol when the Sender is agent $i$, $\ket{\Psi_i}$, and the final state of the protocol when the Sender is agent $j$, $\ket{\Psi_j}$, is high. Then, we show that when the fidelity between the states $\ket{\Psi_i}$ and $\ket{\Psi_j}$ is close to 1, the probability that the malicious agents can guess the identity of the Sender is close to a random guess. The full proof is given in Appendix C. \end{proof-sketch} Finally, we consider the entangled state created anonymously between the Sender and Receiver. 
Although we have not considered a particular noise model, our analysis incorporates a reduced fidelity of $\ket{\Psi}$, the state shared by all the agents at the beginning of the protocol. We can carry this forward to the resulting anonymously entangled state, if we assume all the agents are honest and have followed the protocol. We find that the fidelity of the final entangled state with the EPR pair will be at least the fidelity of $\ket{\Psi}$ with the GHZ state. After the entangled state has been constructed, the Sender and Receiver can perform anonymous teleportation of any quantum message $\ket{\phi}$ by anonymously sending a classical message with the teleportation results. Our final statement is then given in Corollary \ref{cor:anon}. \begin{corollary} Using Protocol 5, we can achieve an $\epsilon$-anonymous protocol for quantum message transmission. \label{cor:anon} \end{corollary} \textbf{Discussion}.---We have proposed a practical protocol for anonymous quantum communications in the presence of malicious parties and an untrusted source. The verification step is carried out using a protocol that has been experimentally demonstrated \cite{MPB:natcomm16}, and is tolerant to losses and noise by design. Our protocol achieves in this full adversarial scenario an approximate notion of anonymity that we call $\epsilon$-anonymity and which is relevant in the context of realistic quantum networks. While the scheme in Ref. \cite{BBF:asiacrypt07} results in an exponential scaling, their protocol is not easily implementable. Recent work in Ref. \cite{LMW:pra18} provides a protocol for anonymous transmission using the W state rather than the GHZ state. While this is beneficial in terms of robustness to noise, the protocol creates the anonymously entangled state only with a probability $2/n$. Furthermore, the security analysis considers only the semi-active adversarial scenario, which requires a trusted source. Our anonymous quantum communication protocol opens the way to the integration and implementation of this fundamental functionality into quantum networks currently under development. {\bf Acknowledgments}.--- We acknowledge support of the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 820445 (QIA), the ANR through the ANR-17-CE24-0035 VanQuTe and ANR-17-CE39-0005 quBIC projects, the BPI France project RISQ, the EPSRC (UK), and the MIT-France International Science and Technology Initiative. \begin{thebibliography}{12} \expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi \expandafter\ifx\csname bibnamefont\endcsname\relax \def\bibnamefont#1{#1}\fi \expandafter\ifx\csname bibfnamefont\endcsname\relax \def\bibfnamefont#1{#1}\fi \expandafter\ifx\csname citenamefont\endcsname\relax \def\citenamefont#1{#1}\fi \expandafter\ifx\csname url\endcsname\relax \def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\defURL {URL }\fi \providecommand{\bibinfo}[2]{#2} \providecommand{\eprint}[2][]{\url{#2}} \bibitem[{\citenamefont{Kimble}(2008)}]{Kim:nature08} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Kimble}}, \bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{453}}, \bibinfo{pages}{1023} (\bibinfo{year}{2008}). 
\bibitem[{\citenamefont{Scarani et~al.}(2009)\citenamefont{Scarani, Bechmann-Pasquinucci, Cerf, Dusek, L\"utkenhaus, and Peev}}]{SBC:rmp09} \bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Scarani}}, \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Bechmann-Pasquinucci}}, \bibinfo{author}{\bibfnamefont{N.~J.} \bibnamefont{Cerf}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Dusek}}, \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{L\"utkenhaus}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Peev}}, \bibinfo{journal}{Rev. Mod. Phys.} \textbf{\bibinfo{volume}{81}}, \bibinfo{pages}{1301} (\bibinfo{year}{2009}). \bibitem[{\citenamefont{Diamanti et~al.}(2016)\citenamefont{Diamanti, Lo, Qi, and Yuan}}]{DLQ:npjqi16} \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Diamanti}}, \bibinfo{author}{\bibfnamefont{H.-K.} \bibnamefont{Lo}}, \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Qi}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{Z.}~\bibnamefont{Yuan}}, \bibinfo{journal}{npj Quantum Info.} \textbf{\bibinfo{volume}{2}}, \bibinfo{pages}{16025} (\bibinfo{year}{2016}). \bibitem[{\citenamefont{Gheorghiu et~al.}(2018)\citenamefont{Gheorghiu, Kapourniotis, and Kashefi}}]{GKK:tcs18} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Gheorghiu}}, \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Kapourniotis}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Kashefi}}, \bibinfo{journal}{Theory Comput. Syst.} (\bibinfo{year}{2018}), \bibinfo{note}{https://doi.org/10.1007/s00224-018-9872-3}. \bibitem[{\citenamefont{Broadbent and Tapp}(2007)}]{BT:asiacrypt07} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Broadbent}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Tapp}}, in \emph{\bibinfo{booktitle}{Proc. ASIACRYPT}} (\bibinfo{year}{2007}), pp. \bibinfo{pages}{410--426}. \bibitem[{\citenamefont{Christandl and Wehner}(2005)}]{CW:asiacrypt05} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Christandl}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Wehner}}, in \emph{\bibinfo{booktitle}{Proc. ASIACRYPT}} (\bibinfo{year}{2005}), pp. \bibinfo{pages}{217--235}. \bibitem[{\citenamefont{Greenberger et~al.}(1989)\citenamefont{Greenberger, Horne, and Zeilinger}}]{GHZ} \bibinfo{author}{\bibfnamefont{D.~M.} \bibnamefont{Greenberger}}, \bibinfo{author}{\bibfnamefont{M.~A.} \bibnamefont{Horne}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Zeilinger}}, in \emph{\bibinfo{booktitle}{Bell's Theorem, Quantum Theory, and Conceptions of the Universe, M. Kafatos (Ed.), Kluwer, Dordrecht}} (\bibinfo{year}{1989}), pp. \bibinfo{pages}{69--72}. \bibitem[{\citenamefont{Bennett et~al.}(1993)\citenamefont{Bennett, Brassard, Cr\'epeau, Jozsa, Peres, and Wootters}}]{BBC:prl93} \bibinfo{author}{\bibfnamefont{C.~H.} \bibnamefont{Bennett}}, \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Brassard}}, \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Cr\'epeau}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Jozsa}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Peres}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{W.~K.} \bibnamefont{Wootters}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{70}}, \bibinfo{pages}{1895} (\bibinfo{year}{1993}). 
\bibitem[{\citenamefont{Lipinska et~al.}(2018)\citenamefont{Lipinska, Murta, and Wehner}}]{LMW:pra18} \bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Lipinska}}, \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Murta}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Wehner}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{98}}, \bibinfo{pages}{052320} (\bibinfo{year}{2018}). \bibitem[{\citenamefont{Brassard et~al.}(2007)\citenamefont{Brassard, Broadbent, Fitzsimons, Gambs, and Tapp}}]{BBF:asiacrypt07} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Brassard}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Broadbent}}, \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Fitzsimons}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Gambs}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Tapp}}, in \emph{\bibinfo{booktitle}{Proc. ASIACRYPT}} (\bibinfo{year}{2007}), pp. \bibinfo{pages}{460--473}. \bibitem[{\citenamefont{Pappa et~al.}(2012)\citenamefont{Pappa, Chailloux, Wehner, Diamanti, and Kerenidis}}]{PAW:prl12} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Pappa}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Chailloux}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Wehner}}, \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Diamanti}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Kerenidis}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{108}}, \bibinfo{pages}{260502} (\bibinfo{year}{2012}). \bibitem[{\citenamefont{McCutcheon et~al.}(2016)\citenamefont{McCutcheon, Pappa, Bell, McMillan, Chailloux, Lawson, Mafu, Markham, Diamanti, Kerenidis et~al.}}]{MPB:natcomm16} \bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{McCutcheon}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Pappa}}, \bibinfo{author}{\bibfnamefont{B.~A.} \bibnamefont{Bell}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{McMillan}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Chailloux}}, \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Lawson}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Mafu}}, \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Markham}}, \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Diamanti}}, \bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Kerenidis}}, \bibnamefont{et~al.}, \bibinfo{journal}{Nature Commun.} \textbf{\bibinfo{volume}{7}}, \bibinfo{pages}{13251} (\bibinfo{year}{2016}). \end{thebibliography} \section{Appendix A: anonymous classical protocols} We first give the \textsf{Parity} protocol from \cite{BT:asiacrypt07}, by which a set of $n$ agents can privately determine the parity of their input string (or equivalently, the XOR of their input bits), in Protocol \ref{alg:parity}. Note that although this uses a simultaneous broadcast channel, we only use the modified version of this protocol (as given in the \textsf{LogicalOR} protocol afterwards), which just requires a regular broadcast channel. \begin{algorithm}[h] \caption{\textsf{Parity} \cite{BT:asiacrypt07}} \label{alg:parity} \begin{flushleft} \textit{Input}: $\{ x_i \}_{i=1}^n$. \\ \textit{Goal}: Each agent gets $y_i = \bigoplus_{i=1}^n x_i$. \end{flushleft} \begin{algorithmic}[1] \STATE Each of the $n$ agents wants to input their bit $x_i$. Every agent $i$ chooses random bits $\{r_i^j \}_{j=1}^n$ such that $\bigoplus_{j=1}^n r_i^j = x_i$. \\ \ \STATE Every agent $i$ sends their $j$th bit $r_i^j$ to agent $j$ ($j$ can equal $i$). 
\\ \ \STATE Every agent $j$ computes $z_j=\bigoplus_{i=1}^n r_i^j$ and reports the value in the simultaneous broadcast channel. \\ \ \STATE The value $z=\bigoplus_{j=1}^n z_j$ is computed, which equals $y_i$. \end{algorithmic} \end{algorithm} This protocol is then used to construct the \textsf{LogicalOR} protocol \cite{BT:asiacrypt07}, by which a set of $n$ agents can privately determine the logical OR of their inputs. Due to repeating the protocol with different orderings of the agents each time, the simultaneous broadcast channel is no longer required. This is given in Protocol \ref{alg:logicalor}. \begin{algorithm}[h] \caption{\textsf{LogicalOR} \cite{BT:asiacrypt07}} \label{alg:logicalor} \begin{flushleft} \textit{Input}: $\{ x_i \}_{i=1}^n$, security parameter $S$. \\ \textit{Goal}: Each agent gets $y_i = \bigvee_{i=1}^n x_i$. \end{flushleft} \begin{algorithmic}[1] \STATE The agents agree on $n$ orderings, with each ordering having a different last participant. \\ \ \STATE For each ordering: \begin{enumerate} \item[(a)] Each agent $i$ picks the value of $p_i$ as follows: if $x_i=0$, then $p_i=0$; if $x_i=1$, then $p_i=1$ with probability $\frac{1}{2}$ and $p_i=0$ with probability $\frac{1}{2}$. \item[(b)] Run the \textsf{Parity} protocol with input $\{p_i\}_{i=1}^n$, with a regular broadcast channel rather than simultaneous broadcast, and with the agents broadcasting according to the current ordering. If the result is $1$, then $y_i = 1$. \item[(c)] Repeat steps 2(a) - 2(b) $S$ times in total. If the result of the \textsf{Parity} protocol is never $1$, then $y_i = 0$. \end{enumerate} \end{algorithmic} \end{algorithm} If all agents input $x_i = 0$, then the \textsf{LogicalOR} protocol is correct with probability 1, however if any agent inputs $x_i = 1$, then the correctness is $(1 - 2^{-S})$. In Protocol 5, the functionality \textsf{RandomBit} is used to pick between the verification and use of the state. This in turn calls \textsf{LogicalOR}. In an honest run, the outcome of \textsf{RandomBit} is the outcome of the Sender, and so if any agent behaves dishonestly the Sender will abort. The functionality \textsf{RandomAgent}, which also calls \textsf{LogicalOR}, is simply a repetition of \textsf{RandomBit}, and so the same argument holds here. \section{Appendix B: Proof of Theorem \ref{th:prob}} Here, we prove the soundness of the protocol. \begin{manualtheorem}{1} Let $C_\epsilon$ be the event that the protocol does not abort and the state used for the anonymous transmission is such that $F'(\ket{\Psi}) = \sqrt{1-\epsilon^2}$. Then for the honest agents, for all $\epsilon >0$, \begin{equation} \text{Pr}[C_\epsilon] \leq 2^{-S}\frac{4n}{1 - \sqrt{1 - \epsilon^2}}. \end{equation}\label{th:prob} \end{manualtheorem} \begin{proof} Our aim is to bound the probability that the protocol does not abort and the fidelity of the state $\ket{\Psi}$ used for anonymous transmission is given by $F'(\ket{\Psi}) = \underset{U}{\max \ } F(U \ket{\Psi}, \ket{\Phi_0^n}) = \sqrt{1 - \epsilon^2}$, where $U$ is a general operator on the space of the malicious agents. Although we allow the malicious source to create any state in any round and even entangle the states between rounds, the optimal cheating strategy, which maximises the probability of the event $C_\epsilon$, is to create in each round some pure state $\ket{\Psi}$ such that $F'(\ket{\Psi}) = \sqrt{1-\epsilon^2}$, as proved in \cite{PAW:prl12}. 
In high level, one can first see that an entangled strategy does not help, as it can be replaced by a strategy sending unentangled states as follows. Given some entangled state, for a given round, the probability of passing the test and the fidelity of the state depend only on the reduced state, conditioned on passing previous rounds. The exact same effect can be achieved by sending these mixed reduced states corresponding to each round, without any entanglement. Next, one sees that by providing a mixed state, the source does not gain any advantage, as a mixed state is a probabilistic mixture of pure states, and the overall cheating probability of this mixed strategy is just a weighted combination of the cheating probabilities of each of the pure states. Then, obviously this mixed strategy is worse than the strategy that always sends the pure state that has the maximum cheating probability of all states in the mixture. Hence, one can continue the proof by only considering strategies with pure states. Moreover, since the adversary is just trying to maximise the probability the state $\ket{\Psi}$ used for anonymous transmission has $F'(\ket{\Psi})= \sqrt{1 - \epsilon^2}$, it is clear that there is no need to send any state with even smaller $F'(\ket{\Psi})$, since then the probability of failing the test (and therefore the protocol aborting) would just increase. Last, if in any round the source created a state with higher $F'(\ket{\Psi})$, then this certainly does not contribute to the event $C_\epsilon$, and in fact it may also cause the protocol to abort. Thus, to upper-bound the probability of event $C_\epsilon$ with respect to the best attack a malicious source can perform, we only need to consider the case where in each round the malicious source creates some state $\ket{\Psi}$ such that $F'(\ket{\Psi}) = \sqrt{1-\epsilon^2}$. First, we consider the probability that the state is used in round $l$. For this to happen, the Sender must get the result of all $S$ coin flips to be heads ($x=0$), which happens with probability $2^{-S}$. The Sender then calls \textsf{RandomBit} with her input as 0. The output of \textsf{RandomBit} will then be 0 with probability 1, since all agents input 0 (note that the Sender can see if any agent behaves dishonestly and inputs 1, as the output of \textsf{RandomBit} can then be 1, and so the Sender will abort). Second, we consider the probability that the state is tested in all $(l-1)$ previous rounds. Here, the Sender must input 1 to \textsf{RandomBit}. First, the probability of the Sender not getting all $S$ coin flips to be 0 is given by $1-2^{-S}$. Then, the probability that \textsf{RandomBit} will give an output of 1 is given by $1 - 2^{-S}$ after $S$ rounds. Malicious agents will not affect the output, since even if they input 1 to \textsf{RandomBit}, the output will still be 1. Thus, the overall probability is given by $[(1-2^{-S})(1-2^{-S})]^{l-1}$. Finally, we consider the probability that all the $(l-1)$ tests have passed. In our protocol, a randomly chosen agent $j$ runs the \textsf{Verification} protocol as the Verifier. If the Verifier is honest (which happens with probability $\frac{k}{n}$), the probability that the test is passed with a state $\ket{\Psi}$ is given by $P(\ket{\Psi})$. If the Verifier is malicious (with probability $ \frac{n-k}{n}$), we take the probability to be 1 as the worst case scenario. Then, we can write the probability that all $(l-1)$ tests have passed as $ \big( \frac{n-k}{n} + \frac{k}{n} P(\ket{\Psi}) \big)^{l-1}$. 
Note that from \cite{PAW:prl12}, the probability that a state $\ket{\Psi}$ with fidelity $F'(\ket{\Psi})$ will pass the test is given by $P(\ket{\Psi}) \leq \frac{3}{4} + \frac{F'}{4}$. Thus, the total probability of event $C_{\epsilon}$ at the $l^{th}$ repetition of the protocol is: \begin{align} Pr [C_{\epsilon}^l] & \leq 2^{-S} \Big( 1 - 2^{1-S} + 2^{-2S} \Big)^{l-1} \Big( 1 - \big( \frac{k - F'k}{4n} \big) \Big)^{l-1}. \end{align} We then take the integral to upper bound this probability as follows: \begin{align} Pr[C_{\epsilon}] & \leq \int_{0}^{\infty} 2^{-S} (1 - 2^{1-S} + 2^{-2S} )^l \Big( 1 - \big( \frac{k - F'k}{4n} \big)\Big)^{l} dl \\ & \leq 2^{-S} \int_{0}^{\infty} \Big( 1 - \big( \frac{k - F'k}{4n} \big)\Big)^{l} dl \\ & = - \frac{2^{-S}}{\log{(1 - \big( \frac{k - F'k}{4n} \big) )}} \\ & \leq 2^{-S} \frac{4n}{k (1 - F')} \\ & \leq 2^{-S} \frac{4n}{k (1 - \sqrt{1 - \epsilon^2})}. \end{align} Since each honest agent does not know which other agents are honest or malicious, we can further upper-bound this in terms of a security statement for the honest agents: \begin{align} \text{Pr}[C_\epsilon] \leq 2^{-S} \frac{4n}{1-\sqrt{1-\epsilon^2}}. \end{align} If the agents take $S = \log_2 (\frac{4n}{(1-\sqrt{1-\epsilon^2}) \delta})$, they get $\text{Pr}[C_\epsilon] \leq \delta$. The expected number of runs of the protocol is given by $2^S = \frac{4n}{(1-\sqrt{1-\epsilon^2}) \delta}$. Thus, they can make this probability of failure negligible by doing a large number of runs. \end{proof} \section{Appendix C: Proof of Theorem \ref{th:anon}} Next, we prove the anonymity of the protocol. For simplicity of the proof, recall that we denote the ideal state by $\ket{\Phi_0^n}$, which can be obtained from the GHZ state by applying a Hadamard and a phase shift $\sqrt{Z}$ to each qubit. The Sender's transformation now becomes $\sigma_x \sigma_z$. Further, we also define the state: \begin{align} \ket{\Phi_1^n} = \frac{1}{\sqrt{2^{n-1}}} \Big[ \underset{\Delta(y) = 1 \text{ (mod 4)}}{\sum} \ket{y} - \underset{\Delta(y) = 3 \text{ (mod 4)}}{\sum} \ket{y} \Big], \end{align} and note that $\sigma_x \sigma_z \ket{\Phi_0^n} = \ket{\Phi_1^n}, \sigma_x \sigma_z \ket{\Phi_1^n} = - \ket{\Phi_0^n}$. We consider two cases here: first, when all the agents are honest (Lemma \ref{l:honest}), and secondly, when we have malicious agents who could apply some operation on their part of the state (Lemma \ref{l:dishonest}). \renewcommand{2B}{2A} \begin{lemma} If all the agents are honest, and they share a state $\ket{\Psi}$ such that $F(\ket{\Psi}, \ket{\Phi_0^n}) = \sqrt{1 - \epsilon^2}$, then for every honest agent $i, j$ who could be the Sender, we have that $F(\ket{\Psi_i}, \ket{\Psi_j}) \geq 1- \epsilon^2$, where $\ket{\Psi_i}$ is the state after agent $i$ has applied the Sender's transformation. \label{l:honest} \end{lemma} \begin{proof} If we have $F(\ket{\Psi}, \ket{\Phi_0^n}) = \abs{\bra{\Psi}\ket{\Phi_0^n}}^2 = \sqrt{1 - \epsilon^2}$, then similarly to \cite{PAW:prl12} we can write the state shared by all the agents as: \begin{align} \ket{\Psi} = (1 - \epsilon^2)^{1/4} \ket{\Phi_0^n} + \epsilon_1 \ket{\Phi_1^n} + \sum_{i=2}^{2^n-1} \epsilon_i \ket{\Phi_i^n}, \end{align} where $\sum_{i=1}^{2^n-1} \epsilon_i^2 = 1 - \sqrt{1-\epsilon^2}$. If agent $i$ is the Sender, then she applies $\sigma_x \sigma_z$, and the state becomes: \begin{align} \ket{\Psi_i} = (1 - \epsilon^2)^{1/4} \ket{\Phi_1^n} - \epsilon_1 \ket{\Phi_0^n} + \sum_{i=2}^{2^n-1} \epsilon_i' \ket{\Phi_i^n}. 
\end{align} Instead, if agent $j$ is the Sender and she applies $\sigma_x \sigma_z$, the state becomes: \begin{align} \ket{\Psi_j} = (1 - \epsilon^2)^{1/4} \ket{\Phi_1^n} - \epsilon_1 \ket{\Phi_0^n} + \sum_{i=2}^{2^n-1} \epsilon_i'' \ket{\Phi_i^n}. \end{align} The fidelity is then given by: \begin{align} F(\ket{\Psi_i}, \ket{\Psi_j}) & = \abs{\bra{\Psi_i}\ket{\Psi_j}}^2 \\ & = \abs{\sqrt{1 - \epsilon^2} + \epsilon_1^2 + \sum_{i=2}^{2^n-1} \epsilon_i' \epsilon_i''}^2 \\ & \geq 1 - \epsilon^2. \end{align} \end{proof} \renewcommand{2B}{2B} \begin{lemma} If some of the agents are malicious, and they share a state $\ket{\Psi}$ such that $F'(\ket{\Psi}) \geq \sqrt{1-\epsilon^2}$, then for every honest agent $i, j$ who could be the Sender, we have that $F(\ket{\Psi_i}, \ket{\Psi_j}) \geq 1 - \epsilon^2 $, where $\ket{\Psi_i}$ is the state after agent $i$ has applied the Sender's transformation. \label{l:dishonest} \end{lemma} \begin{proof} Recall that our fidelity measure is given by $F'(\ket{\Psi}) = \underset{U}{\max \ } F(U\ket{\Psi}, \ket{\Phi_0^n})$. Let us now denote by $\ket{\Psi'}=U \ket{\Psi}$ the state after the operation $U$ which maximises this fidelity has been applied. As in \cite{PAW:prl12}, we can write this state in the most general form as: \begin{align} \ket{\Psi'} = \ket{\Phi_0^k} \ket{\psi_0} + \ket{\Phi_1^k} \ket{\psi_1} + \ket{\chi}, \end{align} where note that $\ket{\chi}$ contains both honest and malicious parts, of which the honest part is orthogonal to both $\ket{\Phi_0^k}$ and $\ket{\Phi_1^k}$. We want to find the closeness of the states $\ket{\Psi_i}, \ket{\Psi_j}$, which are the states after the $\sigma_x \sigma_z$ operation is applied to $\ket{\Psi'}$ by either agent $i$ or $j$ who is the Sender. These states are given by: \begin{align} \ket{\Psi_i} & = \ket{\Phi_1^k} \ket{\psi_0} - \ket{\Phi_0^k} \ket{\psi_1} + \ket{\chi'}, \\ \ket{\Psi_j} & = \ket{\Phi_1^k} \ket{\psi_0} - \ket{\Phi_0^k} \ket{\psi_1} + \ket{\chi''}. \end{align} The fidelity is then given by: \begin{align} F(\ket{\Psi_i}, \ket{\Psi_j}) & = \abs{\bra{\Psi_i}\ket{\Psi_j}}^2 \\ & = \abs{\bra{\psi_0}\ket{\psi_0} + \bra{\psi_1}\ket{\psi_1} + \bra{\chi'}\ket{\chi''}}^2. \end{align} However, although the overall state $\ket{\Psi'}$ is normalised, the malicious agents' part of the state is not. Thus, we need to determine a bound on $\bra{\psi_0}\ket{\psi_0}$ and $\bra{\psi_1}\ket{\psi_1}$. We have: \begin{align} F( \ket{\Psi'}, \ket{\Phi_0^n}) = \abs{\bra{\Phi_0^n}\ket{\Psi'}}^2 \geq \sqrt{1 - \epsilon^2}. \end{align} It was shown in \cite{PAW:prl12} that we can write for any $k, n$: \begin{align} \ket{\Phi_0^n} = \frac{1}{\sqrt{2}} \Big[ \ket{\Phi_0^k} \ket{\Phi_0^{n-k}} - \ket{\Phi_1^k} \ket{\Phi_1^{n-k}} \Big], \end{align} and using this, we get: \begin{align} \frac{1}{2} | & (\bra{\Phi_0^{n-k}}\ket{\psi_0})^2 + (\bra{\Phi_1^{n-k}}\ket{\psi_1})^2 \nonumber \\ & - 2 \bra{\Phi_0^{n-k}}\ket{\psi_0} \bra{\Phi_1^{n-k}}\ket{\psi_1} | \geq \sqrt{1 - \epsilon^2}. \end{align} Using the triangle inequality, we have: \begin{align} \frac{1}{2} \Big[ \abs{\bra{\Phi_0^{n-k}}\ket{\psi_0}}^2 & + \abs{\bra{\Phi_1^{n-k}}\ket{\psi_1}}^2 \Big] \geq \sqrt{1 - \epsilon^2 }. \end{align} Using the Cauchy-Schwarz inequality, we have: \begin{align} \bra{\psi_0}\ket{\psi_0} + \bra{\psi_1}\ket{\psi_1} & \geq \abs{\bra{\Phi_0^{n-k}}\ket{\psi_0}}^2 + \abs{\bra{\Phi_1^{n-k}}\ket{\psi_1}}^2 \\ & \geq \sqrt{1 - \epsilon^2}. 
\end{align} Since the overall state $\ket{\Psi'}$ is normalised, we have $\bra{\chi'}\ket{\chi''} \leq 1 - \sqrt{1 - \epsilon^2}$. Thus, we get our expression for fidelity as: \begin{align} F(\ket{\Psi_i}, \ket{\Psi_j}) & = \abs{\bra{\psi_0}\ket{\psi_0} + \bra{\psi_1}\ket{\psi_1} + \bra{\chi'}\ket{\chi''}}^2 \\ & \geq 1 - \epsilon^2. \end{align} \end{proof} We are now ready to prove Theorem \ref{th:anon}. \begin{manualtheorem}{2} If the agents share a state $\ket{\Psi}$ such that $F'(\ket{\Psi}) \geq \sqrt{1-\epsilon^2}$, then the probability that the malicious agents can guess the identity of the Sender is given by: \begin{align} \text{Pr}[\text{guess}] & \leq \frac{1}{k} + \epsilon. \end{align} \label{th:anon} \end{manualtheorem} \begin{proof} We will now show that if the agents share close to the GHZ state, then the Sender remains anonymous. From Theorem \ref{th:prob}, we saw that the probability that the state used for anonymous transmission satisfies $F'(\ket{\Psi}) \leq \sqrt{1 - \epsilon^2}$ is given by $ \text{Pr}[C_\epsilon] \leq \delta$ for the honest agents, where $\delta$ depends on the number of runs of the verification protocol. Thus, by doing enough runs, we can make this very small, and so we have that the state used for anonymous transmission will be close to the GHZ state, as given by $F'(\ket{\Psi}) \geq \sqrt{1 - \epsilon^2}$. From the previous proof, we see that if $F'(\ket{\Psi}) \geq \sqrt{1-\epsilon^2}$, the distance between the states if agent $i$ or $j$ was the Sender is $D(\ket{\Psi_i}, \ket{\Psi_j}) \leq \epsilon$. A malicious agent who wishes to guess the identity of the Sender would make some sort of measurement to do so. Thus, we wish to find the maximum success probability of a measurement that could distinguish between the $k$ states that are the result of the Sender (who can only be an honest agent) applying the $\sigma_x \sigma_z$ transformation. The success probability of discriminating between $k$ states is given by $ \sum_{i=1}^k p_i \text{Tr} (\Pi_i \rho_i)$. From Lemma \ref{l:dishonest}, we know that the distance between any two states after the Sender's transformation is upper-bounded by $\epsilon$. Thus, if we take $\ket{\alpha} = \ket{\Psi_j}$, then we know that any of these $k$ states is of distance $\epsilon$ away from this same state $\ket{\alpha}$. For any POVM element $P$, we can write the trace distance between two states $\rho, \sigma$ as $ \text{Tr} \big[ P (\rho - \sigma) \big] \leq D(\rho, \sigma)$. Thus, we have for a POVM element $\Pi_i$ and for states $\ket{\Psi_i}, \ket{\alpha}$: \begin{align} \text{Tr} (\Pi_i \ket{\Psi_i}\bra{\Psi_i}) - \text{Tr} (\Pi_i \ket{\alpha}\bra{\alpha}) \leq \epsilon. \end{align} Assuming that each honest agent has an equiprobable chance of becoming the Sender, the probability that the malicious agents can guess the identity of the Sender is bounded by: \begin{align} \text{Pr}[\text{guess}] & = \sum_{i=1}^k \frac{1}{k} \text{Tr} (\Pi_i \ket{\Psi_i}\bra{\Psi_i}) \\ & \leq \frac{1}{k} \sum_{i=1}^k \Big[ \text{Tr} (\Pi_i \ket{\alpha}\bra{\alpha}) + \epsilon \Big] \\ & = \frac{1}{k} \text{Tr}\Big[\sum_{i=1}^k \Pi_i \ket{\alpha}\bra{\alpha}\Big] + \frac{1}{k} k \epsilon \\ & = \frac{1}{k} \text{Tr} (\ket{\alpha}\bra{\alpha}) + \epsilon \\ & = \frac{1}{k} + \epsilon. \end{align} \end{proof} \end{document}
arXiv
High-Order Spatial Simulation Using Legendre-Like Orthogonal Splines
Ilnur Minniakhmetov, Roussos Dimitrakopoulos & Marcelo Godoy
Mathematical Geosciences, volume 50, pages 753–780 (2018)
Abstract
High-order sequential simulation techniques for complex non-Gaussian spatially distributed variables have been developed over the last few years. The high-order simulation approach does not require any transformation of initial data and makes no assumptions about any probability distribution function, while it introduces complex spatial relations to the simulated realizations via high-order spatial statistics. This paper presents a new extension where a conditional probability density function (cpdf) is approximated using Legendre-like orthogonal splines. The coefficients of spline approximation are estimated using high-order spatial statistics inferred from the available sample data, additionally complemented by a training image. The advantages of using orthogonal splines with respect to the previously used Legendre polynomials include their ability to better approximate a multidimensional probability density function, reproduce the high-order spatial statistics, and provide a generalization of high-order simulations using Legendre polynomials. The performance of the new method is first tested with a completely known image and compared to both the high-order simulation approach using Legendre polynomials and the conventional sequential Gaussian simulation method. Then, an application in a gold deposit demonstrates the advantages of the proposed method in terms of the reproduction of histograms, variograms, and high-order spatial statistics, including connectivity measures. The C++ source code of the high-order simulation implementation presented herein, along with an example demonstrating its utilization, is provided online as supplementary material.
Introduction
Geostatistical simulations are used to quantify the uncertainty of spatially distributed attributes of interest describing mineral deposits, petroleum reservoirs, hydrogeological horizons, environmental contaminants, and other spatially variant natural phenomena. Since the 1990s, multiple-point spatial simulation (MPS) methods and variations (Guardiano and Srivastava 1993; Strebelle 2002; Journel 2005, 2018; Zhang et al. 2006; Arpat and Caers 2007; Chugunova and Hu 2008; de Vries et al. 2009; Mariethoz and Renard 2010; Mariethoz et al. 2010; Straubhaar et al. 2011; De Iaco and Maggio 2011; Honarkhah 2011; Strebelle and Cavelius 2014; Chatterjee et al. 2012; Lochbühler et al. 2014; Mustapha et al. 2014; Rezaee et al. 2013; Toftaker and Tjelmeland 2013; Zhang et al. 2017; others) have been developed to advance the simulation methods beyond the past generation of second-order spatial statistics, which were typically based on Gaussian processes (e.g., Journel and Huijbregts 1978; David 1988; Goovaerts 1998; Chilès and Delfiner 1999). A limitation of MPS approaches is that they are largely algorithmic and do not consistently account for the high-order spatial relations in the available sample data. Patterns and complex spatial relations are derived from the so-termed training images (TIs), or geological analogues, rather than from sample data; this is a critical topic for relatively data-rich applications, where data statistics have been shown not to be reproduced by simulated realizations based on MPS methods (Osterholt and Dimitrakopoulos 2018; Goodfellow et al. 2012).
To address some of these limits, high-order simulation techniques for complex and non-Gaussian spatially distributed variables have also been developed (Mustapha and Dimitrakopoulos 2010, 2011; Mustapha et al. 2011; Tamayo-Mas et al. 2016; Minniakhmetov and Dimitrakopoulos 2017a), based on generating conditional distributions through Legendre polynomials (Lebedev 1965) and high-order spatial cumulants. Yao et al. (2018) developed a new computational model to significantly reduce the computation cost of the method. The high-order simulation approach does not require any transformation of initial data and makes no assumptions about the related probability distribution function. The approach reproduces high-order spatial statistics of sample data and a training image. The high-order spatial statistics are shown to capture directional multiple-point periodicity, connectivity of extreme values, and complex spatial architecture (Dimitrakopoulos et al. 2010). However, polynomial approximations do not always converge to analytic functions (Runge 1901; Boyd and Ong 2009; Fornberg and Zuev 2007). In addition, high-order polynomials are very sensitive to rounding errors for values near the endpoints of an approximation domain; therefore, even if the interpolants converge in theory, they will diverge rapidly when computed (Platte et al. 2011). This is critical for the simulation of extreme values, as they are located at the endpoints of an approximation domain. In an effort to improve upon the limitations of polynomial approximation, a spline approximation of complex multidimensional functions is considered here (Piegl 1989; Hughes et al. 2005; Ruiu et al. 2016). Splines are piecewise-defined polynomial functions in which the pieces are connected by some condition of smoothness. The places where these pieces meet are called knots, and two adjacent knots form a knot interval, hereafter referenced simply as an interval. Knot locations have a significant impact on the quality and flexibility of approximation, particularly in the approximation of functions with discontinuities (López de Silanes et al. 2001; Sinha and Schunck 1992) and functions with locally high gradients (Malagù et al. 2014). Furthermore, through a proper choice of the knot sequence, splines can accurately approximate very complex functions, such as the shapes of three-dimensional objects in computer-aided geometric design (Hoschek and Lasser 1993; Park and Lee 2007). Therefore, splines are chosen herein to approximate complex multidimensional joint distributions. The most commonly used mathematical formulation of splines across different applications is the B-spline (short for basis spline). The construction of B-splines is straightforward and simple to implement; however, the high-order simulation framework proposed by Mustapha and Dimitrakopoulos (2010) assumes the orthogonality of the basis functions, and, therefore, splines in the form of B-splines are not suitable for a high-order spatial simulation approach. In this paper, Legendre-like splines (Wei et al. 2013) are used, which are shown to be orthogonal and can be easily integrated into the high-order simulation framework. Two user-defined parameters control the construction of Legendre-like splines: the order of the splines and the maximum number of knots. In practice, cubic splines (order 3) are commonly used (Hughes et al. 2005; Piegl 1989), as they provide an efficient and smooth approximation.
For cubic splines, the first four Legendre-like splines are defined on the knot sequence consisting only of the two endpoints of the approximation domain and coincide with the Legendre polynomials up to order 3. Next, Legendre-like splines are constructed by adding an additional knot per Legendre-like spline until the user-defined maximum number of knots is reached. Increasing the number of knots improves the approximation and describes more complex relations in the available data, in the same way that high-order polynomials capture the complex behavior of the function to be approximated. Thus, the maximum number of knots reflects the maximum order of high-order spatial relations that can be calculated from the available data. This spline approach aims to improve the estimation of the conditional probability density function (cpdf) and overcome the limitations of polynomial approximations. In addition, the proposed approach provides a general framework for high-order simulation techniques. For example, by using only one interval for spline construction, the technique becomes the one proposed by Mustapha and Dimitrakopoulos (2010, 2011). The paper is organized as follows. First, the high-order simulation framework is outlined. Then, two systems of basis functions are described: Legendre polynomials (Lebedev 1965) and Legendre-like orthogonal splines. In the following section, the capabilities of both systems are compared using a fully known dataset to demonstrate the advantages of orthogonal splines in simulating connected high values. Next, the proposed approach is applied to a gold deposit and compared with the sequential Gaussian simulation approach in terms of the reproduction of histograms, variograms, high-order spatial statistics, and the connectivity of high values. Discussion and conclusions follow. Supplementary material available online provides the C++ source code of the high-order sequential simulation implementation detailed in Sect. 2.
Sequential High-Order Simulation
Let \( Z({\mathbf{u}}_{i} ) \) be a stationary ergodic random field indexed in \( R^{n} \), where \( {\mathbf{u}}_{i} \in D \subseteq R^{n} (n = 1,2,3),i = 1 \ldots N \) and where N is the number of points in a discrete grid \( D \subseteq R^{n} \). Random variables indexed on the grid \( D \subseteq R^{n} \) are denoted by \( Z_{i} \equiv Z({\mathbf{u}}_{i} ) \), whereas their outcomes are denoted by \( z_{i} = z({\mathbf{u}}_{i} ) \). The focus of high-order simulation techniques is to simulate the realization of the random field \( Z({\mathbf{u}}_{i} ) \) for all nodes of a grid D with a given set of conditioning data \( {\mathbf{d}}_{n} = \{ z({\mathbf{u}}_{\alpha } ),\alpha = 1 \ldots n\} \).
The joint probability density function \( f({\mathbf{u}}_{0} ,{\mathbf{u}}_{1} , \ldots {\mathbf{u}}_{N} ;z_{0} ,z_{1} , \ldots z_{N} |{\mathbf{d}}_{n} ) \) of the random field \( Z({\mathbf{u}}_{i} ) \) can be decomposed into the product of conditional univariate distributions using the basic concept of sequential simulation (Journel and Alabert 1989; Journel 1994; Dimitrakopoulos and Luo 2004) $$ \begin{aligned} & f({\mathbf{u}}_{1} , \ldots {\mathbf{u}}_{N} ;z_{1} , \ldots z_{N} |{\mathbf{d}}_{{\mathbf{n}}} ) \\ &\quad = f({\mathbf{u}}_{2} , \ldots {\mathbf{u}}_{N} ;z_{2} , \ldots z_{N} |z_{1} ,{\mathbf{d}}_{{\mathbf{n}}} )f({\mathbf{u}}_{1} ;z_{1} |{\mathbf{d}}_{{\mathbf{n}}} ) \\ &\quad = f({\mathbf{u}}_{3} , \ldots {\mathbf{u}}_{N} ;z_{3} , \ldots z_{N} |z_{1} ,z_{2} ,{\mathbf{d}}_{{\mathbf{n}}} )f({\mathbf{u}}_{2} ;z_{2} |z_{1} ,{\mathbf{d}}_{{\mathbf{n}}} )f({\mathbf{u}}_{1} ;z_{1} |{\mathbf{d}}_{{\mathbf{n}}} ) \\ &\quad = \prod\limits_{i = 2}^{N} {f({\mathbf{u}}_{i} ;z_{i} |z_{1} , \ldots ,z_{i - 1} ,{\mathbf{d}}_{{\mathbf{n}}} )} f({\mathbf{u}}_{1} ;z_{1} |{\mathbf{d}}_{{\mathbf{n}}} ). \\ \end{aligned} $$ Accordingly, the random path of visiting all grid nodes is defined first. Then, starting from the first node in the random path, the value zi is simulated based on the estimated cpdf \( f({\mathbf{u}}_{i} ;z_{i} |z_{1} , \ldots ,z_{i - 1} ,{\mathbf{d}}_{{\mathbf{n}}} ) \). Finally, the simulated value is added to the set of conditional data, and the process is repeated until all grid nodes in the random path are visited. Eventually, any resulting simulation represents a realization of the complex joint distribution \( f({\mathbf{u}}_{0} ,{\mathbf{u}}_{1} , \ldots {\mathbf{u}}_{N} ;z_{0} ,z_{1} , \ldots z_{N} |{\mathbf{d}}_{{\mathbf{n}}} ) \). Without loss of generality, let u0 be the first node in the random path. According to Bayes' rule (Lee 2012) $$ f({\mathbf{u}}_{0} ;z_{0} |{\mathbf{d}}_{{\mathbf{n}}} ) = \frac{{f({\mathbf{u}}_{0} ,{\mathbf{u}}_{1} , \ldots ,{\mathbf{u}}_{n} ;z_{0} ,{\mathbf{d}}_{{\mathbf{n}}} )}}{{f({\mathbf{u}}_{1} , \ldots ,{\mathbf{u}}_{n} ;{\mathbf{d}}_{{\mathbf{n}}} )}}, $$ where \( f({\mathbf{u}}_{0} ,{\mathbf{u}}_{1} , \ldots ,{\mathbf{u}}_{n} ;z_{0} ,{\mathbf{d}}_{{\mathbf{n}}} ) \) is a joint probability density function and \( f({\mathbf{u}}_{1} , \ldots ,{\mathbf{u}}_{n} ;{\mathbf{d}}_{{\mathbf{n}}} ) \) can be calculated as $$ f({\mathbf{u}}_{1} , \ldots ,{\mathbf{u}}_{n} ;{\mathbf{d}}_{{\mathbf{n}}} ) = \int {f({\mathbf{u}}_{0} ,{\mathbf{u}}_{1} , \ldots ,{\mathbf{u}}_{n} ;\xi_{0} ,{\mathbf{d}}_{{\mathbf{n}}} )} d\xi_{0}. $$ In this paper, the joint probability density function \( f({\mathbf{u}}_{0} ,{\mathbf{u}}_{1} , \ldots ,{\mathbf{u}}_{n} ;z_{0} ,{\mathbf{d}}_{{\mathbf{n}}} ) \) is approximated using Legendre polynomials and Legendre-like orthogonal splines. Approximation of a Joint Probability Density Using Orthogonal Functions Let f(z) be a probability density function of a random variable Z defined on [a, b] and let \( \varphi_{1} (z),\varphi_{2} (z), \ldots \) be a complete system of orthogonal functions in [a, b], then f(z) can be approximated by the finite number ω of functions \( \varphi_{1} (z),\varphi_{2} (z), \ldots \varphi_{\omega } (z) \) $$ f(z) \approx \sum\limits_{m = 0}^{\omega } {L_{m} \varphi_{m} (z)}, $$ where Lm are coefficients of approximation. 
The system of functions \( \varphi_{1} (z),\varphi_{2} (z), \ldots \varphi_{\omega } (z) \) is orthogonal $$ \int\limits_{a}^{b} {\varphi_{k} \varphi_{m} (z){\text{d}}z} = \delta_{km}, $$ where \( \delta_{mk} = \left\{ {\begin{array}{*{20}l} {1,} \hfill & {m = k} \hfill \\ {0,} \hfill & {m \ne k} \hfill \\ \end{array} } \right. \) is the Kronecker delta, and, therefore, \( \forall k = 0 \ldots \omega \) $$ \int\limits_{a}^{b} {\varphi_{k} (z)f(z){\text{d}}z} \approx \int\limits_{a}^{b} {\varphi_{k} \sum\limits_{m = 0}^{\omega } {L_{m} \varphi_{m} (z)} {\text{d}}z} = \sum\limits_{m = 0}^{\omega } {L_{m} } \int\limits_{a}^{b} {\varphi_{k} \varphi_{m} (z){\text{d}}z} = \sum\limits_{m = 0}^{\omega } {L_{m} } \delta_{mk} = L_{k}. $$ By definition $$ E[\varphi_{k} (z)] = \int\limits_{a}^{b} {\varphi_{k} (z)f(z){\text{d}}z}, $$ where E stands for mathematical expectation. The coefficients Lm can be estimated from the available data. Similarly, a joint probability density function \( f(z_{0} ,z_{1} , \ldots z_{n} ) \) of a set of random variables \( Z_{0} ,Z_{1} , \ldots Z_{n} \) defined on \( [a,b] \times [a,b] \times \ldots [a,b] \) can be approximated as $$ f(z_{0} ,z_{1} , \ldots z_{n} ) \approx \sum\limits_{{m_{0} = 0}}^{{\omega_{0} }} {\sum\limits_{{m_{1} = 0}}^{{\omega_{1} }} { \cdots \sum\limits_{{m_{n} = 0}}^{{\omega_{n} }} {L_{{m_{0} ,m_{1} , \ldots ,m_{n} }} \varphi_{{m_{0} }} (z_{0} )\varphi_{{m_{2} }} (z_{1} ) \cdots \varphi_{{m_{n} }} (z_{n} )} } }. $$ Coefficients \( L_{{m_{0} ,m_{1} , \ldots ,m_{n} }} \) are obtained from the orthogonality property $$ \begin{aligned} & \int\limits_{a}^{b} {\int\limits_{a}^{b} { \cdots \int\limits_{a}^{b} {\varphi_{{k_{0} }} (z_{0} )\varphi_{{k_{1} }} (z_{1} ) \cdots \varphi_{{k_{n} }} (z_{n} )f(z_{0} ,z_{1} , \ldots z_{n} )|d{\mathbf{z}}|} } } \approx \\ & \quad \;\sum\limits_{{m_{0} = 0}}^{{\omega_{0} }} {\sum\limits_{{m_{1} = 0}}^{{\omega_{1} }} { \cdots \sum\limits_{{m_{n} = 0}}^{{\omega_{n} }} {L_{{m_{0} ,m_{1} , \ldots ,m_{n} }} } } } \int\limits_{a}^{b} {\int\limits_{a}^{b} { \cdots \int\limits_{a}^{b} {\varphi_{{k_{0} }} (z_{0} )\varphi_{{m_{0} }} (z_{0} ) \cdots \varphi_{{k_{n} }} (z_{n} )\varphi_{{m_{n} }} (z_{n} )|d{\mathbf{z}}|}}}\\&\quad = \sum\limits_{{m_{0} = 0}}^{{\omega_{0} }} {\sum\limits_{{m_{1} = 0}}^{{\omega_{1} }} { \cdots \sum\limits_{{m_{n} = 0}}^{{\omega_{n} }} {L_{{m_{0} ,m_{1} , \ldots ,m_{n} }} } } } \delta_{{m_{0} k_{0} }} \delta_{{m_{1} k_{1} }} \cdots \delta_{{m_{n} k_{n} }} = L_{{k_{0} ,k_{1} , \ldots ,k_{n} }} ,\forall k_{0} ,k_{1} , \ldots ,k_{n} = 0 \ldots \omega, \\ \end{aligned} $$ where \( |{\text{d}}{\mathbf{z}}| = {\text{d}}z_{0} {\text{d}}z_{1} \cdots {\text{d}}z_{n} \). By definition $$ E[\varphi_{{k_{0} }} (z_{0} )\varphi_{{k_{1} }} (z_{1} ) \cdots \varphi_{{k_{n} }} (z_{n} )] = \int\limits_{a}^{b} {\int\limits_{a}^{b} { \cdots \int\limits_{a}^{b} {\varphi_{{k_{0} }} (z_{0} )\varphi_{{k_{1} }} (z_{1} ) \cdots \varphi_{{k_{n} }} (z_{n} )f(z_{0} ,z_{1} , \ldots z_{n} )|{\text{d}}{\mathbf{z}}|} } }. $$ When considering the spatial locations \( {\mathbf{u}} = \{ {\mathbf{u}}_{0} ,{\mathbf{u}}_{1} , \ldots ,{\mathbf{u}}_{n} \} \) of random variables \( Z_{0} ,Z_{1} , \ldots Z_{n} \), the coefficients \( L_{{k_{0} ,k_{1} , \ldots ,k_{n} }} \) can be estimated from available data using Eqs. 
(9) and (10) by calculating $$ L_{{k_{0} ,k_{1} , \ldots k_{n} }} \approx E[\varphi_{{k_{0} }} (z_{0} )\varphi_{{k_{1} }} (z_{1} ) \cdots \varphi_{{k_{n} }} (z_{n} )] \approx \frac{1}{{N_{{h_{1} ,h_{2} , \ldots h_{n} }} }}\sum\limits_{k = 1}^{{N_{{h_{1} ,h_{2} , \ldots h_{n} }} }} {\varphi_{{k_{0} }} (z_{0}^{k} )\varphi_{{k_{1} }} (z_{1}^{k} ) \cdots \varphi_{{k_{n} }} (z_{n}^{k} )}, $$ where values \( z_{i}^{k} ,i = 0 \ldots n \) are taken from the available data \( z_{i}^{k} \in {\mathbf{d}}_{n} \) and the given training image, and separated by lags \( {\mathbf{h}}_{i} = {\mathbf{u}}_{i} - {\mathbf{u}}_{0} ,i = 1 \ldots n \). Finally, high-order sequential simulations are generated using the following algorithm: Algorithm A.1 Define a random path for visiting all unsampled nodes on the simulation grid. For each node u0 in the path: Find the closest sampled grid nodes \( {\mathbf{u}}_{1} ,{\mathbf{u}}_{2} , \ldots {\mathbf{u}}_{n} \). Calculate lags \( {\mathbf{h}}_{i} = {\mathbf{u}}_{i} - {\mathbf{u}}_{0} ,i = 1 \ldots n \) for unsampled location u0. Scan the initial data and find values \( z_{k}^{i} ,i = 0 \ldots n \) separated by lags \( {\mathbf{h}}_{i} = {\mathbf{u}}_{i} - {\mathbf{u}}_{0} ,i = 1 \ldots n \). Calculate the coefficients \( L_{{k_{0} ,k_{1} , \ldots ,k_{n} }} \) using Eq. (11). Build the cpdf \( f({\mathbf{u}}_{0} ;z_{0} |z_{1} , \ldots z_{n} ) \) for the random variable Z0 at the unsampled location u0 given the conditioning data \( z_{1} , \ldots z_{n} \) at the corresponding neighbors \( {\mathbf{u}}_{1} ,{\mathbf{u}}_{2} , \ldots {\mathbf{u}}_{n} \) using Eqs. (2) and (8). Draw a uniform random value in [0, 1] to generate a simulated value z0 from the conditional distribution \( f({\mathbf{u}}_{0} ;z_{0} |z_{1} , \ldots z_{n} ) \). Add z0 to the set of sample data and the previously simulated values. Repeat Steps 2a–g for the next points in the random path defined in Step 1. Legendre Polynomials Mustapha and Dimitrakopoulos (2010) proposed using a Legendre series as a set of basis functions \( \varphi_{1} (z),\varphi_{2} (z), \ldots \). The Legendre polynomial Pk of order k is defined as in Lebedev (1965) $$ P_{k} = \frac{1}{{2^{k} k!}}\left( {\frac{{{\text{d}}^{k} }}{{{\text{d}}z^{k} }}} \right)\left[ {(z^{2} - 1)^{k} } \right],\quad - 1 \le z \le 1. $$ The set of Legendre polynomials \( \{ P_{k} (z)\}_{k} \) forms a complete basis set on the interval [− 1, 1], and, accordingly, the function \( f({\mathbf{u}}_{0} ;z_{0} |z_{1} , \ldots z_{n} ) \) can be approximated using Eqs. (2) and (8). By their construction, the order of Legendre polynomials corresponds to the order of high-order spatial statistics of the probability function \( f({\mathbf{u}}_{0} ;z_{0} |z_{1} , \ldots z_{n} ) \). However, there are practical limitations when using Legendre polynomials for the approximation of functions in multidimensional space. This is discussed in Sect. 3. Legendre-Like Orthogonal Splines Approximation In the present work, Legendre-like splines (Wei et al. 2013) are used as a set of basis functions. These splines are constructed using Legendre polynomials and the linear combination of B-splines. B-splines of order r in a variable \( t \in [a,b] \) are piecewise polynomials defined over the domain $$ T = \{ \underbrace {{a,a, \ldots ,t_{0} = a}}_{r + 1} < t_{1} \le t_{2} \le \ldots \le t_{{m_{\hbox{max} } }} < \underbrace {{t_{{m_{\hbox{max} } + 1}} = b,b, \ldots ,b}}_{r + 1}\} . $$ The points \( t_{i} ,i = 0 \ldots m_{\hbox{max} } \) are called knots. 
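Before turning to the B-spline construction below, steps (f) and (g) of Algorithm A.1 above can be illustrated with the following Python sketch. It assumes that the coefficient array L[k0, k1, ..., kn] has already been estimated (Eq. 11) and that phi(z, k) evaluates the k-th orthonormal basis function; the conditional density of Z0 is obtained by fixing the conditioning values in the series expansion, normalizing numerically on a grid, and drawing a value by inverting the discretized CDF with a uniform random number. All names and the clipping and normalization details are illustrative assumptions rather than the authors' implementation.

import numpy as np
import itertools

def conditional_density(zgrid, z_cond, L, phi):
    # L: array of shape (omega0 + 1, ..., omegan + 1) holding the coefficients L_{k0,...,kn};
    # z_cond: conditioning values z1, ..., zn at the neighbouring nodes.
    f = np.zeros_like(zgrid)
    for idx in itertools.product(*(range(s) for s in L.shape)):
        w = L[idx] * np.prod([phi(zc, k) for zc, k in zip(z_cond, idx[1:])])
        f += w * phi(zgrid, idx[0])
    f = np.clip(f, 0.0, None)                       # guard against negative lobes
    area = f.sum() * (zgrid[1] - zgrid[0])
    return f / area if area > 0 else np.full_like(zgrid, 1.0 / (zgrid[-1] - zgrid[0]))

def draw_value(zgrid, f, u):
    # Step (g): inverse-CDF sampling with a uniform random number u in [0, 1].
    cdf = np.cumsum(f)
    cdf /= cdf[-1]
    return np.interp(u, cdf, zgrid)

# Usage inside the sequential loop (hypothetical names):
# f0 = conditional_density(zgrid, neighbour_values, L, phi)
# z0 = draw_value(zgrid, f0, rng.uniform())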
Each piece, a B-spline of order r, can be derived using de Boor's formula (de Boor 1978) $$ B_{i,0} = \left\{ {\begin{array}{*{20}l} {1,} \hfill & {t_{i} \le t \le t_{i + 1} } \hfill \\ {0,} \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right. $$ $$ B_{i,r} (t) = \frac{{t - t_{i} }}{{t_{i + r - 1} - t_{i} }}B_{i,r - 1} (t) + \frac{{t_{i + r} - t}}{{t_{i + r} - t_{i + 1} }}B_{i + 1,r - 1} (t). $$ B-splines do not form an orthogonal basis; however, Wei et al. (2013) introduced orthogonal splines based on the combination of B-splines and a set of knot sequences. The first r + 1 splines are defined as the Legendre polynomials up to order r $$ S_{k} (t) = P_{k} (t),k = 0 \ldots r. $$ The subsequent splines are constructed on subsets \( T_{m} = \{ t_{i,m} \}_{i = - r}^{r + m + 1} \), \( m = 1 \ldots m_{\hbox{max} } - 1 \) of the knot sequence T, where the ti,m are defined as follows $$ t_{i,m} = \left\{ {\begin{array}{*{20}l} {a,} \hfill & { - r \le i \le 0} \hfill \\ {t_{i} ,} \hfill & {1 \le i \le m} \hfill \\ {b,} \hfill & {m + 1 \le i \le m + r + 1}. \hfill \\ \end{array} } \right. $$ For example, the first and second subsets are \( T_{1} = \{ \underbrace {a,a, \ldots ,a}_{r + 1} < t_{1} < \underbrace {b,b, \ldots ,b}_{r + 1}\} \) and \( T_{2} = \{ \underbrace {a,a, \ldots ,a}_{r + 1} < t_{1} \le t_{2} < \underbrace {b,b, \ldots ,b}_{r + 1}\} \), respectively. Let \( B_{i,r,m} (t) \) be a B-spline of order r on the knot sequence Tm $$ B_{i,0,m} = \left\{ {\begin{array}{*{20}l} {1,} \hfill & {t_{i,m} \le t \le t_{i + 1,m} } \hfill \\ {0,} \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right. $$ $$ B_{i,r,m} (t) = \frac{{t - t_{i,m} }}{{t_{i + r - 1,m} - t_{i,m} }}B_{i,r - 1,m} (t) + \frac{{t_{i + r,m} - t}}{{t_{i + r,m} - t_{i + 1,m} }}B_{i + 1,r - 1,m} (t), $$ then, the remaining Legendre-like splines \( S_{k} (t),k = r + 2 \ldots r + m_{\hbox{max} } \) are determined by $$ S_{r + m} (t) = \frac{{d^{r + 1} }}{{dt^{r + 1} }}f_{m} (t),m = 1 \ldots m_{\hbox{max} }, $$ where fm(t) is the determinant of the matrix: $$ f_{m} (t) = \det \left( {\begin{array}{*{20}c} {B_{ - r,2r + 1,m} (t)} & {B_{ - r + 1,2r + 1,m} (t)} & \cdots & {B_{ - r + m - 1,2r + 1,m} (t)} \\ {B_{ - r,2r + 1,m} (t_{1} )} & {B_{ - r + 1,2r + 1,m} (t_{1} )} & \vdots & {B_{ - r + m - 1,2r + 1,m} (t_{1} )} \\ \vdots & \vdots & \ddots & \vdots \\ {B_{ - r,2r + 1,m} (t_{m - 1} )} & {B_{ - r + 1,2r + 1,m} (t_{m - 1} )} & \cdots & {B_{ - r + m - 1,2r + 1,m} (t_{m - 1} )} \\ \end{array} } \right). $$ The examples of orthogonal splines of order r = 3 and the knot sequence \( T = [ - 1, - 1, - 1, - 1, - 0.6, - 0.2,0.2,0.6,1,1,1,1] \) are presented in Figs. 1 and 2. The first r + 1 splines are defined on the knot sequence with only one interval [− 1, 1] and are, thus, simply Legendre polynomials up to the order r (Fig. 1). For each subsequent spline, the knot sequence is updated by adding a knot from the initial knot sequence T, e.g., the fifth spline (Fig. 2a) is defined by Eq. (20) on two intervals [− 1, − 0.6] and [− 0.6, − 1], or the knot sequence \( T_{1} = [ - 1, - 1, - 1, - 1, - 0.6,1,1,1,1] \)). It should be noted that Eq. (20) is obtained from the condition of orthogonality in respect to all previous splines. The following three splines (Fig. 2b–d) are defined on the knot sequence \( T_{2} = [ - 1, - 1, - 1, - 1, - 0.6, - 0.2,1,1,1,1] \), \( T_{3} = [ - 1, - 1, - 1, - 1, - 0.6, - 0.2,0.2,1,1,1,1] \), and T4 = T. 
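The B-spline recursion can be implemented in a few lines. The sketch below uses the standard Cox-de Boor form (indexing conventions differ slightly between references) on the clamped cubic knot sequence of the example above, with the usual 0/0 := 0 rule for repeated knots. The Legendre-like orthogonal splines themselves are then obtained from these B-splines through the determinant construction given in the text, which is not reproduced here; recent SciPy releases also provide ready-made B-spline helpers (e.g. scipy.interpolate.BSpline.basis_element) that could be used instead.

import numpy as np

def bspline(i, r, t, knots):
    # i-th B-spline of degree r on the knot vector 'knots' (Cox-de Boor recursion).
    if r == 0:
        return np.where((knots[i] <= t) & (t < knots[i + 1]), 1.0, 0.0)
    den1 = knots[i + r] - knots[i]
    den2 = knots[i + r + 1] - knots[i + 1]
    term1 = 0.0 if den1 == 0 else (t - knots[i]) / den1 * bspline(i, r - 1, t, knots)
    term2 = 0.0 if den2 == 0 else (knots[i + r + 1] - t) / den2 * bspline(i + 1, r - 1, t, knots)
    return term1 + term2

# Clamped cubic (r = 3) knot sequence from the example in the text
T = np.array([-1, -1, -1, -1, -0.6, -0.2, 0.2, 0.6, 1, 1, 1, 1], dtype=float)
t = np.linspace(-1.0, 0.999, 400)          # right endpoint handled crudely (half-open intervals)
basis = [bspline(i, 3, t, T) for i in range(len(T) - 3 - 1)]   # eight cubic B-splines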
The first four Legendre-like splines over the knot sequence T The last four Legendre-like splines over the knot sequence T In this work, the initial values are linearly transformed into the [− 1, 1] values range and divided into mmax intervals. There are two parameters that have a significant impact on the quality of approximation: the maximum number of intervals mmax and the order of splines r. These parameters reflect the maximum order of high-order spatial statistics that can be captured from the available data. The first parameter is the order of splines r. High values of the order r lead to a non-stable approximation (Runge 1901; Platte et al. 2011) and high computational costs, whereas low values affect the continuity and smoothness of approximation. For example, zero-order splines are good for the approximation of the stepwise function because each spline is a constant function. Splines with r = 1 are used for the approximation of continuous, but not smooth, functions, i.e., a polygonal line. In practice, cubic splines, i.e., r = 3, are commonly used (Hughes et al. 2005; Piegl 1989). The second parameter is the maximum number of intervals mmax. Low values of mmax lead to an approximation that is close to a polynomial case, for which limitations are discussed in Sect. 4. For example, an approximation with mmax = 1 corresponds to the Legendre polynomial approximation presented by Mustapha and Dimitrakopoulos (2010). High values of mmax result in overfitting or poor predictive performance, as it overreacts to minor fluctuations in the data. In addition, an approximation with a high value of mmax affects the variability of the simulations because it directly samples values from the initial available data and pastes them into simulations. To choose values of r and mmax, different measures are tested. The widely known Kolmogorov–Smirnov statistics test (Stephens 1974) that indicates whether two data samples come from the same distribution is not utilized herein because the related quantile–quantile plot is reproduced well for a very wide range of r and mmax values; thus, such a statistical test does not provide guidance on selecting suitable parameters. Other approaches, including comparing high-order spatial statistics maps or connectivity properties, are hard to quantify by a single number and complex to implement. In this work, a simple and fast measure of the quality of approximation is used. This quality of approximation is expressed in terms of the number of grid nodes where splines fail to approximate the conditional distribution. At these nodes, the high-order simulation method produces numerical artefacts, such as outliers or noise values. Outliers can be easily detected by comparing the value at nodes with their local neighborhood average value. The average number of outliers is calculated for different values of r and mmax (Fig. 3). According to Fig. 3, as the number of intervals mmax is increased, the quality of approximation improves. At the same time, increasing the order of splines r decreases the quality of approximation. For cubic splines (order r = 3), the reasonable number of intervals is 30, as it provides the same quality of approximation as 50, demonstrates better predictive performance, and is computationally less expensive. The corresponding order of high-order spatial statistics is mmax + r = 33. The number of outliers depending on the order of splines r and the maximum number of intervals mmax. 
Low number of outliers corresponds to good quality of approximation Testing the Simulation Approach with a Fully Known Dataset The high-order simulation method presented in the previous section is tested with a fully known dataset obtained from an image of a fracture network downloaded from a texture synthesis website (http://br.depositphotos.com/5211338/stock-photo-dry-terrain.html) and displayed in Fig. 4. Gray-scale values of the image are transformed to the [0, 1] domain. The reference image (Fig. 5a) and the TI (Fig. 5b) are taken from different parts of the image and have sizes 150 × 150 and 200 × 400 grid nodes, respectively. The dataset (Fig. 5c) is generated from the reference image with a uniform random sampling of 225 values (1% of the image points). Image of a fracture network (public dataset) Fracture network: a reference image; b training image; c dataset from reference image Three different systems of functions for the high-order simulation approach (hosim) and the sequential Gaussian simulation (sgsim) method are compared next: (a) Legendre-like splines (r = 3, mmax = 30, the corresponding order of high-order spatial statistics is mmax + r = 33), (b) sgsim method, (c) Legendre polynomials of order 10 (the corresponding order of high-order spatial statistics is 10), and (d) Legendre polynomials of order 20 (the corresponding order of high-order spatial statistics is 20). Hereafter, the system of functions based on splines and Legendre polynomials are correspondingly called hosim-splines and hosim-polynomials. The simulation using hosim-splines (Fig. 6a) shows a stable reproduction of spatially connected structures. Simulations using hosim-polynomials (Fig. 6c, d) have less connected features than the simulation using splines. sgsim (Fig. 6b) fails to reproduce the spatial continuity of high values. Table 1 shows the average value, median, and variance for the sample data, reference image, TI, and simulated realizations; note that only hosim-polynomials of order 20 are included in the comparisons that follow. All methods reproduce well the low-order statistics of the sample data and the TI. The simulation results are as follows: a the simulation using hosim-splines, b the simulation using sgsim, and c, d the simulations using hosim-polynomials of orders 10 and 20, respectively Table 1 The basic statistics of the sample data, reference image, training image, and simulations Figure 7 shows the quantile–quantile (QQ) plots of ten simulated realizations of each simulation approach with the sample data. The QQ plots for the simulations using hosim-splines are represented by red lines. The 45° black line represent QQ plots of the sample data with the sample data. The blue line represents the QQ plot of the reference image with the sample data. The green line represents the QQ plot of the training image with the sample data. The QQ plots for the simulations using sgsim are depicted by gray dashed lines and the QQ plots for the simulations using hosim-polynomials are shown by gray solid lines. Overall, the QQ plots of simulations using hosim-splines are consistent with the QQ plot of the sample data and the reference image, whereas QQ plots of simulations using hosim-polynomials and sgsim slightly deviate from the QQ plot of the sample data. Quantile–quantile (QQ) plots of simulations with the sample data. 
The blue line is the reference image; the green line is the training image; the red lines are the simulations using splines; the gray solid lines are the simulations using Legendre polynomials; and the gray dashed lines are the simulations using sgsim Figure 8 shows variograms along the north–east and north–west directions; that is, directions of the main continuity of high values, calculated for the simulations using hosim-splines (the red lines), the sample data (dots), the reference image (the blue line), the TI (the green line), the simulations using hosim-polynomials (the gray solid lines), and the simulations using sgsim (the gray dashed lines). All techniques demonstrate reasonable reproduction of the second-order statistics of the sample data. Variograms along the north–east direction (top subfigures a, b) and north–west direction (bottom subfigures c, d) of the sample data (dots), reference image (blue line), training image (green line), simulations using hosim-splines (red lines), simulations using hosim-polynomials (gray lines), and simulations using sgsim (gray dashed lines) For the calculation of the third-order and fourth-order spatial statistics, the estimations of the high-order moment are used (Dimitrakopoulos et al. 2010) $$ m_{3} ({\mathbf{h}}_{{\mathbf{1}}} ,{\mathbf{h}}_{{\mathbf{2}}} ) = \frac{1}{{N_{{h_{1} h_{2} }} }}\sum\limits_{i = 0}^{{N_{{h_{1} h_{2} }} }} {Z({\mathbf{u}})Z({\mathbf{u}} + {\mathbf{h}}_{1} )} Z({\mathbf{u}} + {\mathbf{h}}_{2} ), $$ $$ m_{4} ({\mathbf{h}}_{1} ,{\mathbf{h}}_{2} ,{\mathbf{h}}_{3} ) = \frac{1}{{N_{{h_{1} h_{2} h_{3} }} }}\sum\limits_{i = 0}^{{N_{{h_{1} h_{2} h_{3} }} }} {Z({\mathbf{u}})Z({\mathbf{u}} + {\mathbf{h}}_{1} )} Z({\mathbf{u}} + {\mathbf{h}}_{2} )Z({\mathbf{u}} + {\mathbf{h}}_{3} ), $$ where Nh1h2 is the number of elements of replicates found on lags h1 and h2, and Nh1h2h3 is the number of elements of replicates found on distances h1, h2, and h3. To highlight the connectivity property along the north–east (NE) and north–west (NW) directions, the third-order moments are calculated for binary images with a cut-off value of 0.82 (95th percentile). An example of a binary image is shown in Fig. 9a. The third-order spatial statistics are estimated based on a template with directional vectors along the NE and NW directions (Fig. 9b), i.e., \( {\mathbf{h}}_{1} = (i{\text{d}}x,i{\text{d}}y) \) and \( {\mathbf{h}}_{2} = ( - j{\text{d}}x,j{\text{d}}y) \), respectively, where \( i,j = 1 \ldots 30 \) and the lag discretization along x and y is dx = dy = 1 pixel. The physical meaning of the third-order moment of the binary image is straightforward—it is the probability of having high values at the three points separated by lags h1 and h2 (Minniakhmetov and Dimitrakopoulos 2017b). The red–orange values represent the average sizes of connected high values along the NE and NW directions. In the third-order indicator moment map of the reference image, the average sizes of the interconnected high values are 10 and 20 pixels along the NE and NW directions, respectively. The third-order indicator moment calculation for the reference image: a the binary values of the reference image for a cut-off value of 0.82 (95th percentile), the black lines represent L-type template for calculation of the third-order spatial statistics; b the third-order spatial moments of the binary image The third-order moments are calculated for simulations and averaged to account for differences between the realizations. 
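The moment estimators above can be coded directly. The following Python sketch computes a third-order indicator moment map over an L-shaped template of the kind used in the text (one lag along each grid diagonal), after thresholding the image at the chosen cut-off. The array-axis orientation (which diagonal corresponds to NE or NW) and the helper names are assumptions of this illustration; the fourth-order analogue simply adds one further shifted indicator to the product.

import numpy as np

def m3(ind, h1, h2):
    # m3(h1, h2) = E[ I(u) * I(u + h1) * I(u + h2) ], averaged over all valid origins u.
    (a1, b1), (a2, b2) = h1, h2
    ny, nx = ind.shape
    r0, r1 = max(0, -a1, -a2), ny - max(0, a1, a2)
    c0, c1 = max(0, -b1, -b2), nx - max(0, b1, b2)
    if r1 <= r0 or c1 <= c0:
        return np.nan
    base = ind[r0:r1, c0:c1]
    s1 = ind[r0 + a1:r1 + a1, c0 + b1:c1 + b1]
    s2 = ind[r0 + a2:r1 + a2, c0 + b2:c1 + b2]
    return float(np.mean(base * s1 * s2))

def third_order_map(img, cutoff, max_lag=30):
    ind = (img > cutoff).astype(float)              # indicator (binary) transform
    out = np.empty((max_lag, max_lag))
    for i in range(1, max_lag + 1):
        for j in range(1, max_lag + 1):
            out[i - 1, j - 1] = m3(ind, (i, i), (j, -j))   # lags along the two diagonals
    return out

# e.g. cut-off at the 95th percentile, as in the text:
# moment_map = third_order_map(image, np.quantile(image, 0.95))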
The high-order simulation technique using hosim-splines and hosim-polynomials (Fig. 10b, e) reproduce the third-order moment map of the sample data (Fig. 10a), the reference image (Fig. 10c), and the TI (Fig. 10d), as can be seen from the similar size of the red–orange value areas in the corresponding figures. The moment map of the simulation using sgsim (Fig. 10f) does not reproduce connectivity along the NE and NW directions; the size of the red-shaded area is 8 × 10 pixels, compared to a 10 × 20-pixel area in the reference image's moment map (Fig. 10c). The third-order indicator moments: a sample data samples, b the simulation using hosim-splines, c the reference image, d the training image, e a simulation using hosim-polynomials, and f a simulation using sgsim The fourth-order spatial statistics are estimated for binary images with a cut-off value of 0.82 (95th percentile) based on a template with directional vectors NE \( {\mathbf{h}}_{1} = (i{\text{d}}x,i{\text{d}}y) \), NW \( {\mathbf{h}}_{2} = ( - j{\text{d}}x,j{\text{d}}y) \), and south-west (SW) \( {\mathbf{h}}_{3} = ( - k{\text{d}}x, - k{\text{d}}y) \), where \( i,j,k = 1 \ldots 30 \) and lag discretization along x and y is dx = dy = 1 pixels. Similarly to the third order, the fourth-order moments are calculated for simulations and averaged to account for differences in the various realizations. The red–orange areas along the axes of the fourth-order spatial statistics (Fig. 11) represent the high values along the NE, NW, and SW directions. According to Fig. 11, the fourth-order moment map for the simulation using hosim-splines (Fig. 11b) reproduce the sizes of fractures along the NE, NW, and SW directions in the fourth-order moment of the sample data (Fig. 11a), the reference image (Fig. 11c), and the TI (Fig. 11d). The fourth-order moment map of the simulation using hosim-polynomials (Fig. 11e) shows a smaller connectivity of fractures along the NE and SW directions. The spatial statistics map of the simulation using sgsim (Fig. 11f) does not reproduce the connectivity of fractures along the NE and SW directions. The fourth-order indicator moments: a sample data samples, b the simulation using hosim-splines, c the reference image, d the training image, e a simulation using hosim-polynomials, and f a simulation using sgsim The connectivity of high values is measured using the function presented by Journel and Alabert (1989). As in the above examples, the cut-off value is equal to 0.82 (95th percentile). Figure 12 shows P50 statistics of the connectivity measure along the NE (top subfigures) and NW directions (bottom subfigures). The P50 statistics of connectivity are calculated for the simulations using the proposed techniques (red solid line), hosim-polynomials (gray solid line), and sgsim method (gray dashed line). The connectivity measures of the reference image (blue line) and the TI (green line) falls within the P10 and P90 statistics of the connectivity measure in the simulations using hosim-splines (red dash-dot lines), whereas the connectivity of the simulations using hosim-polynomials and sgsim is lower, on average, than the connectivity of the TI and the reference image. 
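The connectivity measure referred to above can be read, for an indicator-coded image, as the probability that every node along a run of a given length in a given direction stays above the cut-off. The sketch below implements that run-probability reading; it should be taken as an approximation of, not a substitute for, the exact estimator of Journel and Alabert (1989), and the direction convention is an assumption of this illustration.

import numpy as np

def connectivity(ind, direction, max_lag):
    # ind: binary indicator image; direction: (row, col) step; returns tau(1..max_lag),
    # where tau(h) is the fraction of origins whose whole run of h steps stays above cut-off.
    dy, dx = direction
    ny, nx = ind.shape
    tau = []
    for h in range(1, max_lag + 1):
        r0, r1 = max(0, -h * dy), ny - max(0, h * dy)
        c0, c1 = max(0, -h * dx), nx - max(0, h * dx)
        if r1 <= r0 or c1 <= c0:
            break
        prod = np.ones((r1 - r0, c1 - c0))
        for k in range(h + 1):
            prod *= ind[r0 + k * dy:r1 + k * dy, c0 + k * dx:c1 + k * dx]
        tau.append(float(prod.mean()))
    return np.array(tau)

# e.g. along the two diagonals of the grid, at the 95th percentile cut-off:
# ind = (image > np.quantile(image, 0.95)).astype(float)
# tau_a = connectivity(ind, (1, 1), 50)
# tau_b = connectivity(ind, (1, -1), 50)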
The connectivity functions along the north–east (top subfigures) and north–west directions (bottom subfigures) 95th percentile: reference image (blue line), training image (green line), P50 statistics for simulations using hosim-splines (red solid line), P10 and P90 statistics for simulations using hosim-splines (red dash-dot line), P50 statistics for simulations using hosim-polynomials (gray solid line), and P50 statistics for simulations using sgsim (gray dashed line) Application at a Gold Deposit Data from a gold deposit are used as a case study to demonstrate the intricacies and advantages of the high-order spatial simulation method described above. In addition, the method is compared with the sgsim approach for the reproduction of histograms, variograms, high-order spatial statistics, and the connectivity of high and extreme values. The deposit is 2 km by 2 km wide and extends to a depth of 500 m. Sample data are available from 288 exploration drillholes. Blast-hole data are also available for the deposit and used in the construction of a training image. The three-dimensional TI is defined on 405 × 445 × 86 grid blocks of size 5 × 5×5 m3. The simulation grid coincides with the grid of the training image. The simulation of grades is performed using the proposed method with cubic splines r = 3 and a maximum number of intervals mmax = 30. Examples of horizontal sections and a vertical profile for the orebody area are shown in Figs. 13 and 14. High grades are located in the south-eastern sector of the deposit, predominantly in the bottom part. Sections of the training image from part of the deposit: a horizontal section at Z = 50 m, b horizontal section at Z = 100 m, and c the vertical profile. The colors indicate grades in g/t Sections of the simulation using hosim-splines from part of the deposit: a horizontal section at Z = 50 m, b horizontal section at Z = 100 m, and c the vertical profile. The colors indicate grades in g/t The two-dimensional sections in Fig. 14 show that the simulation using the proposed method reproduces the spatial distribution of grades and the continuity of high grades. The areas with high values in Fig. 14 are in good agreement with the drillhole data (Fig. 15) and the TI (Fig. 13). The simulation using sgsim (Fig. 16) exhibits a greater number of disconnected structures with high values and sparsely distributed outliers. Sections of the exploration drillhole data from part of the deposit: a horizontal section at Z = 50 m, b horizontal section at Z= 100 m, and c the vertical profile. The colors indicate grades in g/t Sections of the simulation using the sgsim method from part of the deposit: a horizontal section at Z= 50 m, b horizontal section at Z = 100 m, and c vertical profile. The colors indicate grades in g/t These observations are confirmed by a quantitative analysis in a subsequent validation by (1) mean and variance comparison, (2) QQ plots between drillhole data and simulated values, (3) variogram validation, (4) high-order spatial cumulant validation, and (5) connectivity measure. Table 2 shows the average value, median, and variance for the drillhole data, the TI, and the simulations. Both methods reproduce well the low-order statistics of the drillhole data and the TI. Table 2 The basic statistics of the drillhole data, training image, and simulations Figure 17 shows the QQ plots of the simulated realizations and the drillhole data. Quantiles of simulations using hosim-splines are shown by red lines. 
Quantiles of simulations using the sgsim method are shown by gray lines. In addition, the QQ plots of the training image and the drillhole data are depicted by the blue line. The closer these curves are to the 45° black line in the graph, the better they reproduce the distribution of the drillhole data. Both methods provide simulations consistent with the drillhole data in terms of distributions. Figure 18 presents variograms for the north–south and east–west directions. Simulations using hosim-splines (red lines) share the second-order statistics of drillhole data and the TI (blue lines). The simulations using the sgsim method (gray lines) preserve the second-order statistics of the drillhole data (black dots). Quantile–quantile (QQ) plots of sample data (black line), training image (TI, blue line), simulations using hosim-splines (red lines), and simulations using sgsim (gray lines) Variograms of the hard data (dots), training image (blue lines), simulations using hosim-splines (red lines), and simulations using sgsim (gray lines) along the north–south (a) and east–west (b) directions Applying a zero-mean transformation, the third-order cumulants can be calculated using Eq. (22) with lags \( {\mathbf{h}}_{1} = (i{\text{d}}x,0)\,{\mathbf{h}}_{2} = (0,j{\text{d}}y) \) indexed by \( i = 1 \ldots 7,j = 1 \ldots 7 \), where dx and dy are distances between drillholes, that is, 100 m × 100 m. Figure 19 shows the comparison of cumulant maps for sample data, the TI, and the simulations. The values along axes reflect variograms along their corresponding directions because the third-order moment \( E(Z^{2} ({\mathbf{x}})Z({\mathbf{x}} + {\mathbf{h}})) \) has similar spatial relations as the second-order moment \( E(Z({\mathbf{x}})Z({\mathbf{x}} + {\mathbf{h}})) \). However, the square term Z2(x) in \( E(Z^{2} ({\mathbf{x}})Z({\mathbf{x}} + {\mathbf{h}})) \) affects the absolute value of the statistics and, moreover, combines both negative and positive correlations of Z(x) due to the square operation. Thus, in addition to analyzing the values along the axes, it is important to compare the area of [200; 400] × [200; 400] on the third-order cumulant maps. The simulations using the proposed hosim-splines method (Fig. 19c) reproduce red areas along the x–y axes and yellow–green areas in the cumulant map of the drillhole data (Fig. 19a) and the TI (Fig. 19b). These areas reflect the size of connected high grades and are equal to approximately 400 m along the x-axis and 300 m along the y-axis. The cumulant map for the simulation using sgsim (Fig. 19d) neither reproduce the magnitude of the red area along the x and y axes nor the values at the area of [200; 400] × [200; 400]. 
Third-order spatial cumulant maps of the a drillhole data, b training image (TI), c a simulation using hosim-splines, and d a simulation using sgsim The fourth-order cumulants are calculated using the following equation $$ \begin{aligned} c_{4} ({\mathbf{h}}_{1} ,{\mathbf{h}}_{2} ,{\mathbf{h}}_{3} ) = \frac{1}{{N_{{h_{1} h_{2} h_{3} }} }}\sum\limits_{i = 0}^{{N_{{h_{1} h_{2} h_{3} }} }} {Z({\mathbf{u}})Z({\mathbf{u}} + {\mathbf{h}}_{1} )} Z({\mathbf{u}} + {\mathbf{h}}_{2} )Z({\mathbf{u}} + {\mathbf{h}}_{3} ) \\ - m_{2} ({\mathbf{h}}_{{\mathbf{1}}} )m_{2} ({\mathbf{h}}_{{\mathbf{2}}} ) - m_{2} ({\mathbf{h}}_{{\mathbf{1}}} )m_{2} ({\mathbf{h}}_{{\mathbf{3}}} ) - m_{2} ({\mathbf{h}}_{{\mathbf{2}}} )m_{2} ({\mathbf{h}}_{{\mathbf{3}}} ), \\ \end{aligned} $$ where Nh1h2h3 is the number of elements of replicates found on distances h1 and h2, and m2(h) is the second-order moment along direction h, which is equal to the covariance for a zero-mean random field. The lags \( {\mathbf{h}}_{1} = (i{\text{d}}x,0),{\mathbf{h}}_{2} = (0,j{\text{d}}y) \), and \( {\mathbf{h}}_{3} = (0,k{\text{d}}z) \) are indexed by i = 1…7, j = 1…7, and k = 1…7, where, dx, dy, and dz are distances between data samples, that is, 100 m × 100 m × 5 m. The high-order cumulants calculated reflect the complex structures of orebodies (Dimitrakopoulos et al. 2010). According to Figs. 19 and 20, the size of connected structures is reproduced in simulations using the proposed method (Figs. 19c, 20c). This can also be traced in the vertical profiles (Figs. 13c, 14c, 15c). The fourth-order cumulant map of the simulation using sgsim (Fig. 20d) has a rather small red area in comparison with structures in the cumulant maps of the drillhole data (Fig. 20a) and the TI (Fig. 20b). The fourth-order spatial cumulant maps of the a drillhole data, b training image (TI), c a simulation using hosim-splines, and d a simulation using sgsim The connectivity along the x and y axes is analyzed using the connectivity measure presented by Journel and Alabert (1989). The cut-off value is equal to 5 ppm (99th percentile). The P10, P50, and P90 statistics of connectivity measures are calculated for simulations using the hosim-splines method and depicted by red lines in Fig. 21. Solid lines represent the P50 of connectivity measures, whereas dashed lines show the P10 and P90 statistics. The connectivity of the simulations using the proposed method (red lines) remains close to the connectivity measure of the TI (blue lines). The P50 statistics of connectivity measure calculated using sgsim simulations (gray lines) is quite far from the connectivity of the TI. Thus, despite reproducing the histograms and variograms, Gaussian simulation methods fail to reproduce an important property of the connectivity of high values. The connectivity along x (left subfigure) and y (right subfigure) for the 99th percentile: the training image (TI, blue lines), P50 statistics for hosim simulations (red solid lines), P10 and P90 statistics for hosim simulations (red dashed lines), and P50 statistics for simulations using sgsim (gray lines) This paper presents a novel approach for the high-order simulation of continuous variables based on Legendre-like orthogonal splines. Splines are flexible tools for the approximation of complex probability density functions. Using different knot sequences, orders of splines, and smoothness of piecewise polynomials, it is possible to obtain a stable approximation that reproduces the spatial connectivity of the extreme values. 
The simulations are consistent with the spatial statistics of the sample data and share the high-order spatial statistics of the available data and the training image. The proposed approach is also compared with the conventional second-order approach sequential Gaussian simulation and the high-order simulation method using Legendre polynomials. The approach using splines exhibits a more stable approximation of the conditional probability density function (cpdf) and a better representation of the spatial connectivity of extreme values. The applied connectivity measure confirms the results obtained by analyzing the high-order statistics and demonstrates the limitations of Gaussian simulation methods in the characterization of a mineral deposit. In addition, the proposed approach provides a general framework for high-order simulation techniques. For example, by using just one interval for spline construction, the technique reproduces the method proposed by Mustapha and Dimitrakopoulos (2010, 2011). Further research will address the simulation of categorical variables using splines of order zero and the simulation of multiple correlated continuous and discrete variables within a general framework. In addition, the adaptive knot sequence will be investigated for a better approximation of the cpdf. Arpat GB, Caers J (2007) Conditional simulation with patterns. Math Geosci 39(2):177–203 Boyd JP, Ong JR (2009) Exponentially-convergent strategies for defeating the Runge phenomenon for the approximation of non-periodic functions, Part I: single-interval schemes. Commun Comput Phys 5:484–497 Chatterjee S, Dimitrakopoulos R, Mustapha H (2012) Dimensional reduction of pattern-based simulation using wavelet analysis. Math Geosci 44(3):343–374 Chilès J-P, Delfiner P (1999) Geostatistics: modeling spatial uncertainty. Wiley, New York Chugunova TL, Hu LY (2008) Multiple-point simulations constrained by continuous auxiliary data. Math Geosci 40(2):133–146 David M (1988) Handbook of applied advanced geostatistical ore reserve estimation. Elsevier, Amsterdam de Boor C (1978) A practical guide to splines. Springer, Berlin De Iaco S, Maggio S (2011) Validation techniques for geological patterns simulations based on variogram and multiple-point statistics. Math Geosci 43(4):483–500 de Vries LM, Carrera J, Falivene O, Gratacós O, Slooten LJ (2009) Application of multiple point geostatistics to non-stationary images. Math Geosci 41(1):29–42 Dimitrakopoulos R, Luo X (2004) Generalized sequential Gaussian simulation on group size ν and screen-effect approximations for large field simulations. Math Geol 36(5):567–591 Dimitrakopoulos R, Mustapha H, Gloaguen E (2010) High-order statistics of spatial random fields: exploring spatial cumulants for modeling complex non-Gaussian and non-linear phenomena. Math Geosci 42(1):65–99 Fornberg B, Zuev J (2007) The Runge phenomenon and spatially variable shape parameters in RBF interpolation. Comput Math Appl 54(3):379–398 Goodfellow R, Consuegra FA, Dimitrakopoulos R, Lloyd T (2012) Quantifying multi-element and volumetric uncertainty, Coleman McCreedy deposit, Ontario, Canada. Comput Geosci 42:71–78 Goovaerts P (1998) Geostatistics for natural resources evaluation. Oxford, New York Guardiano FB, Srivastava RM (1993) Multivariate geostatistics: beyond bivariate moments. In: Soares A (ed) Geostatistics Tróia '92. Quantitative Geology and Geostatistics, vol 5. Springer, Dordrecht, pp 133–144 Honarkhah M (2011) Stochastic simulation of patterns using distance-based pattern modeling. 
Ph.D. dissertation, Stanford University, Stanford Hoschek J, Lasser D (1993) Fundamentals of computer aided geometric design. AK Peters, London Hughes TJR, Cottrell JA, Bazilevs Y (2005) Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement. Comput Methods Appl Mech Eng 194(39–41):4135–4195 Journel AG (1994) Modelling uncertainty: some conceptual thoughts. In: Dimitrakopoulos R (ed) Geostatistics for the next century. Kluwer, Dordrecht, pp 30–43 Journel AG (2005) Beyond covariance: the advent of multiple-point geostatistics. In: Leuanthong O, Deutsch CV (eds) Geostatistics Banff 2004. Springer, Dordrecht, pp 225–233 Journel AG (2018) Roadblocks to the evaluation of ore reserves—the simulation overpass and putting more geology into numerical models of deposits. In: Dimitrakopoulos R (ed) Advances in applied strategic mine planning. Springer, Cham, pp 47–55. https://doi.org/10.1007/978-3-319-69320-0_5 Journel AG, Alabert F (1989) Non-Gaussian data expansion in the earth sciences. Terra Nova 1:123–134 Journel AG, Huijbregts CJ (1978) Mining geostatistics. Academic Press, London Lebedev NN (1965) Special functions and their applications. Prentice Hall, New Jersey Lee PM (2012) Bayesian statistics: an introduction. Wiley, New York Lochbühler T, Pirot G, Straubhaar J, Linde N (2014) Conditioning of multiple-point statistics facies simulations to tomographic images. Math Geosci 46(5):625–645 López de Silanes MC, Parra MC, Pasadas M, Torrens JJ (2001) Spline approximation of discontinuous multivariate functions from scattered data. J Comput Appl Math 131(1–2):281–298 Malagù M, Benvenuti E, Duarte CA, Simone A (2014) One-dimensional nonlocal and gradient elasticity: assessment of high order approximation schemes. Comput Methods Appl Mech Eng 275(15):138–158 Mariethoz G, Renard P (2010) Reconstruction of incomplete data sets or images using direct sampling. Math Geosci 42(3):245–268 Mariethoz G, Renard P, Straubhaar J (2010) The direct sampling method to perform multiple-point geostatistical simulations. Water Resour Res. https://doi.org/10.1029/2008wr007621 Minniakhmetov I, Dimitrakopoulos R (2017a) Joint high-order simulation of spatially correlated variables using high-order spatial statistics. Math Geosci 49(1):39–66 Minniakhmetov I, Dimitrakopoulos R (2017b) A high-order, data-driven framework for joint simulation of categorical variables. In: Gómez-Hernández JJ, Rodrigo-Ilarri J, Rodrigo-Clavero ME, Cassiraga E, Vargas-Guzmán JA (eds) Geostatistics Valencia 2016. Springer, Cham, pp 287–301 Mustapha H, Dimitrakopoulos R (2010) High-order stochastic simulations for complex non-Gaussian and non-linear geological patterns. Math Geosci 42(5):457–485 Mustapha H, Dimitrakopoulos R (2011) HOSIM: a high-order stochastic simulation algorithm for generating three-dimensional complex geological patterns. Comput Geosci 37(9):1242–1253 Mustapha H, Dimitrakopoulos R, Chatterjee S (2011) Geologic heterogeneity representation using high-order spatial cumulants for subsurface flow and transport simulations. Water Resour Res. https://doi.org/10.1029/2010wr009515 Mustapha H, Chatterjee S, Dimitrakopoulos R (2014) CDFSIM: efficient stochastic simulation through decomposition of cumulative distribution functions of transformed spatial patterns. Math Geosci 46(1):95–123 Osterholt V, Dimitrakopoulos R (2018) Simulation of orebody geology with multiple-point geostatistics—application at Yandi channel iron ore deposit, WA, and implications for resource uncertainty. 
In: Dimitrakopoulos R (ed) Advances in applied strategic mine planning. Springer, Cham, pp 335–352. https://doi.org/10.1007/978-3-319-69320-0_22 Park H, Lee JH (2007) B-spline curve fitting based on adaptive curve refinement using dominant points. Comput Aided Des 39(6):439–451 Piegl L (1989) Modifying the shape of rational B-splines. Part 1: curves. Comput Aided Des 21(8):509–518 Platte RB, Trefethen LN, Kuijlaars AB (2011) Impossibility of fast stable approximation of analytic functions from equispaced samples. SIAM Rev 53:308–318 Rezaee H, Mariethoz G, Koneshloo M, Asghari O (2013) Multiple-point geostatistical simulation using the bunch-pasting direct sampling method. Comput Geosci 54:293–308 Ruiu J, Caumon G, Viseur S (2016) Modeling channel forms and related sedimentary objects using a boundary representation based on non-uniform rational B-splines. Math Geosci 48(3):259–284 Runge C (1901) Über empirische Funktionen und die Interpolation zwischen äquidistanten Ordinaten. Zeitschrift für Mathematik und Physik 46(224–243):20 Sinha SS, Schunck BG (1992) A two-stage algorithm for discontinuity-preserving surface reconstruction. IEEE Trans Pattern Anal Mach Intell 14(1):36–55 Stephens MA (1974) EDF statistics for goodness of fit and some comparisons. J Am Stat Assoc 69(347):730–737 Straubhaar J, Renard P, Mariethoz G, Froidevaux R, Besson O (2011) An improved parallel multiple-point algorithm using a list approach. Math Geosci 43(3):305–328 Strebelle S (2002) Conditional simulation of complex geological structures using multiple-point statistics. Math Geol 34(1):1–21 Strebelle S, Cavelius C (2014) Solving speed and memory issues in multiple-point statistics simulation program SNESIM. Math Geosci 46(2):171–186 Tamayo-Mas E, Mustapha H, Dimitrakopoulos R (2016) Testing geological heterogeneity representations for enhanced oil recovery techniques. J Petrol Sci Eng 146:222–240 Toftaker H, Tjelmeland H (2013) Construction of binary multi-grid Markov random field prior models from training images. Math Geosci 45(4):383–409 Wei Y, Wang G, Yang P (2013) Legendre-like orthogonal basis for spline space. Comput Aided Des 45(2):85–92 Yao L, Dimitrakopoulos R, Gamache M (2018) A new computational model of high-order spatial simulation based on spatial Legendre moments. Math Geosci (submitted) Zhang T, Switzer P, Journel A (2006) Filter-based classification of training image patterns for spatial simulation. Math Geol 38(1):63–80 Zhang T, Gelman A, Laronga R (2017) Structure- and texture-based fullbore image reconstruction. Math Geosci 49(2):195–215 This work was funded from NSERC Collaborative Research and Development Grant CRDPJ 411270, entitled "Developing new global stochastic optimization and high-order stochastic models for optimizing mining complexes with uncertainty", NSERC Discovery Grant 239019, and mining industry partners of the COSMO Stochastic Mine Planning Laboratory (AngloGold Ashanti, Barrick Gold, BHP, De Beers Canada, Kinross Gold, Newmont Mining, and Vale). The authors would also like to thank Newmont Mining for providing and allowing the use of their dataset for this publication. COSMO—Stochastic Mine Planning Laboratory, McGill University, Montreal, QC, H3A 0E8, Canada Ilnur Minniakhmetov & Roussos Dimitrakopoulos Newmont Mining Corporation, Denver, CO, USA Marcelo Godoy Ilnur Minniakhmetov Roussos Dimitrakopoulos Correspondence to Ilnur Minniakhmetov. Below is the link to the electronic supplementary material. 
Supplementary material 1 (ZIP 1854 kb)
Minniakhmetov, I., Dimitrakopoulos, R. & Godoy, M. High-Order Spatial Simulation Using Legendre-Like Orthogonal Splines. Math Geosci 50, 753–780 (2018). https://doi.org/10.1007/s11004-018-9741-2. Issue Date: October 2018.
Keywords: Stochastic simulation; Orthogonal splines; High-order spatial statistics; Non-Gaussian distribution; Spatial complexity
Out-of-pocket expenditure by private households for dental services – empirical evidence from Austria
Alice Sanwald and Engelbert Theurl
© Sanwald and Theurl 2016. Published: 5 March 2016
Dental services differ from other health services in several dimensions. One important difference is that a substantial share of the costs of dental services – especially costs beyond routine dental treatment – is paid directly by the patient out of pocket.
Settings and design: This study analyses the socio-economic determinants of out-of-pocket expenditure for dental services (OOPE) in Austria at the household level.
Methods and material: Cross-sectional information on OOPE and household characteristics provided by the Austrian household budget survey 2009/10 was analysed.
Statistical analysis used: A two-part model (Logit/GLM) and a one-part GLM were applied.
Results: The probability of OOPE is strongly affected by the life cycle (structure) of the household. It is higher for higher age classes, higher income, and partially for higher levels of education. The type of public insurance has an influence on expenditure probability, while the existence of private health insurance has no significant effect. In contrast to the highly statistically significant coefficients in the first stage, the covariates of the second stage remain predominantly insignificant. According to the results, the level of expenditure is driven mainly by the level of education and income. The results of the one-part GLM confirm the results of the two-part model.
Conclusions: The results allow new insights into the determinants of OOPE for dental care. The household level turns out to be an adequate basis to study the determinants of OOPE, although caution should be applied before jumping to conclusions for the individual level.
Keywords: Out-of-pocket expenditure; Two-part model
Dental care services differ from other medical services, which might influence the mechanisms of service provision and financing. Dental diseases are normally not life threatening, and the need for dental services is to some extent predictable and/or preventable. Patients are, at least partially, able to learn from experience about provider quality. As a consequence, expenditure smoothing by public and/or private insurance arrangements offers lower opportunities for welfare improvement, and higher rates of copayment seem to be optimal. In fact, empirically, out-of-pocket expenditure for dental services (OOPE) is higher compared to other medical services. According to an unweighted OECD average in 2011, OOPE accounted for 53 % of total dental service expenditure, which is roughly three times the level of overall OOPE for healthcare services [1]. In Austria, the situation is similar. OOPE accounts for 50 % of dental service expenditure, leaving 2 % for the general government, 46 % for social health insurance, and 2 % for private health insurance financing in 2011 [2]. This high level of OOPE raises several equity- and efficiency-related questions. From an equity point of view, one could ask to what extent OOPE lead to changes in the income distribution at the household level. From an efficiency perspective, it is interesting whether OOPE are an adequate tool to reduce moral hazard. However, before drawing any policy conclusions on this equity–efficiency trade-off, it seems useful to identify the determinants influencing the level of OOPE for dental services.
Such an analysis allows deeper insights into the different distributional effects of OOPE beyond the well analysed income dimension (f. e. age structure, household structure, education level, insurance level). This paper focuses on this question at the private household level and analyses cross-sectional information of OOPE and several household characteristics in the latest Austrian household budget survey. The study benefits from several strands of previous research. It builds on research work on out-of-pocket healthcare expenditure based on micro data in general [3–9], and on the bounded literature on the demand for dental services, in particular on OOPE [10–16]. Finally, the study benefits from research work which focuses on the link between the institutional background of healthcare service consumption and preferred empirical strategies [17–24]. Previous research on the determinants of dental care utilization focuses on different issues. Within the framework of a Becker-type consumer's choice model Holtman/Olsen [12] study the demand for dental care. They specifically analyse waiting time and travel time as well as the money price as covariates of demand for dental services. Generally, the results confirm the theoretical expectations of the role of the mentioned covariates, but the elasticities are low. Manning/Phelps [13] find a strong effect of insurance coverage on demand for dental care. Groenewegen/Postma [11] stress the role of regional differences in the supply of dental capacities for the utilization of dental services. Their results do not unequivocally support the prediction that the capacity density increases utilization. Nguyen/Häkkinnen [14] investigate the determinants of the utilization of dentists' services among the Finnish population entitled to subsidized dental care on the basis of age. In particular they focus on the impact of a two-channel financed health care system. They find that the choice between the private and public sector is influenced by the knowledge of the level of dental services provided by each sector. Our study contributes to the empirical research on OOPE for dental services. It adds evidence from the household perspective, completes and adjusts findings available at the individual level, and studies OOPE in a highly particularized healthcare system, which is based on Bismarckian principles and a specific two-tiered institutional architecture. Reliable information on OOPE on a micro basis (individual or household level) is rare in many countries. We use a data basis which in principle offers a high data quality. So as a side effect our paper also evaluates the validity of this data source to study the determinants of health expenditures in general and expenditure for dental services in particular. The remainder of this paper is organized as follows. The next section briefly describes the institutional setting of consuming dental services in Austria. Subsequently, a brief description of the data, elaboration of the econometric framework, presentation and discussion of the empirical findings, and summary are provided. Institutional setting of dental care in Austria With minor modifications, the general institutional design of demand and supply of outpatient healthcare services in Austria is also relevant for dental services. The social health insurance system represents the first tier of coverage against the risks of illness. 
Membership in this system is obligatory for wage earners in both the public and private sectors, for self-employed people, and farmers. Individuals with family ties to people with mandatory insurance and without their own coverage obtain free health coverage. Overall, the social health insurance system covers around 99.3 % of the population, excluding only marginal groups. Social health insurance is financed mainly by income-related contributions. Private health insurance and out-of-pocket payments constitute the second tier of the Austrian healthcare system. Dental services in Austria are offered by (i) private dentists, (ii) public dentists, (iii) dental services offered by the social health insurance system directly (so-called dental laboratories), and (iv) dental ambulances of public and private hospitals. As a workable definition, public dentists are those that have a contract with the social health insurance system. Private and public dentists are self-employed and mainly work in single practices. Patients with social health insurance coverage are free to consult providers of categories (i), (ii), (iii), and, with minor restrictions, also (iv). However, the associated costs of utilization are considerably different. The consumption of public dental services is based on a benefit-in-kind scheme. Basic dental services (e.g. fillings and teeth extraction) are offered with negligible cost-sharing elements. This is especially true for workers in the private sector (76 % of the population, who are covered by the insurance label GKK). Public workers (8.6 % of the population, who are covered by the insurance label BVA) and employers (8.4 % of the population, who are covered by the insurance label SVA) face a proportional cost-sharing scheme of 20 % for these services, while farmers (4 % of the population, who are covered by the insurance label SVB) have to pay a quarterly lump-sum fee when using dental services, Patients are confronted with substantial amounts of cost sharing (approximately 50 % of the costs) when they undergo specialized treatments, such as endodontic services, crowns and bridges, and prosthodontic and orthodontic services. A closer inspection of the arrangements reveals quite a heterogeneous mix of copayment methods for these dental services (proportional and absolute cost sharing, as well as public subsidies). Cost-sharing designs differ between the different public medical insurance funds, in all of which fixed prosthodontics are cofinanced only by the social health insurance system in exceptional cases. Similar regulation of service prices and copayments exists for dental services offered by the public health insurance system directly. Dental costs for private dental services are paid out of pocket, and/or by the social health insurance system. The latter reimburses only a portion of a private dentist's invoice. For basic services, the maximum refundable amount is fixed at 80 % of the amount a public dentist is allowed to charge for the same service. For specialized private treatments, the remuneration schedules of contracted dentists are applied. Since the prices of private dentists for basic and specialized treatments are higher than those for contracted dentists, the financial burden for the utilization of private dentists is substantial. Private health insurance, which in general completes social health insurance coverage in Austria, plays only a very limited role in the coverage of dental expenditure risks. 
In 2011, 2 % of the total costs for dental services were paid by the private health insurance system [2]. To analyse the determinants of OOPE empirically, data from the latest household budget survey 2009/10 conducted by Statistics Austria was used. This periodically repeated survey (at the moment, with a 5-year interval) is used to study the level and structure of private consumption of households within the system of national accounts. The observation unit is the private household without institutionalized households. There is no overlapping in the sample of households, which take part in the different waves of the survey. The total sample offered by Statistics Austria consists of 6534 households with 15,540 members. Owing to unclear household and/or social health insurance status, 747 households were excluded, which results in a final sample size of 5787 households. Information on the consumer behaviour is gathered in two ways: (i) the diary approach and (ii) the recall approach. Households participating in the survey are asked to fill in a diary over 14 days in which they record every single expense. These expenditures are converted into monthly expenditure presented in euros. The dataset results in 52 overlapping weeks of bookkeeping. The recall approach is used for consumer durables and irregular/seasonal expenditures within the last 12 months. In addition, in general, households are asked for expenses greater than 300 € in the last year using the recall method. As far as out-of-pocket expenditures are concerned only information on therapeutic aids in ophthalmology and dentistry is collected by the recall method. Selected socio-economic characteristics of each household are gathered by face-to-face interviews. The household budget survey includes two forms of expenditures for dental care. The first form includes expenditures for dental services in the private and the public sector. These expenditures are mainly for "routine dental services". The second form of expenditures are expenditures for "specialized treatments". Specialized treatment mainly includes different forms of dental prostheses (crowns, bridges). In our basic specification we analyse total dental expenditures including routine dental services and specialized treatments. For robustness checks we also estimate the coefficients of the covariates of the two expenditure forms separately. As far as the mode of data gathering is concerned we assume that the information on the expenditures for routine dental services is mainly gathered by the diary approach while information on the expenditures for specialized treatments is collected by the recall method. For econometric and economic reasons, hurdle models, specifically, two-part models, serve as methodological cornerstones to explain healthcare utilization/expenditure [21]. The first part is a binary model that focuses on the separation between users and nonusers. The second part explains the level/frequency of medical-care use conditional on some use. Statistically, the split in the estimation procedure is motivated by the specific characteristics of healthcare expenditure: (i) skewness, (ii) excess zeros, and (iii) heavy right tails. From an economic perspective, the split in the estimation procedure is motivated by the fact that the two decision stages are characterized by differences in the involved actors and decision covariates. 
The empirical strategy in the first step normally is based on explicit or reduced versions of the Grossman model of demand for health services [25, 26]. The patient seeking care decides autonomously whether to seek professional diagnostic and curative medical help at all. The modelling of the second step is influenced by principle–agent considerations leading to joint decisions of patients and their service suppliers. In summary, the ideal starting point of two-part models is that the entire episode of medical services is defined as a set of medical services received by a patient in response to particular requests caused by a specific illness (for an extended discussion, see Stoddart and Barer) [24]. The data should portray individual behaviour and should allow separation between the initial spell and additional visits. The description of the data collection for OOPE in Austria makes clear that the dataset does not perfectly fulfil these preconditions for using a two-part model. Therefore, different econometric approaches were used. First, a two-part model was applied. The first stage of the model predicts the likelihood of any OOPE and was specified as logit leading to the formula: $$ \Pr ob\left({y}_1>0\right)=\frac{ \exp \left(x\alpha \right)}{1+ \exp \left(x\alpha \right)} $$ The second part predicts the level of spending, conditional on having non-zero OOPE. For the latter part, a generalized linear model (GLM) was used. As an alternative modelling strategy, a one-part GLM and joint estimation of both decision stages was applied [8]. The GLM accommodates skewness and related problems via variance-weighting. In both GLM specifications, the link function and relationship between the mean and variance was determined as suggested by, for example, Manning and Mullahy [27] and Matsaganis et al. [8]. Thereby the mean function is given by: $$ \mathrm{E}\left(y\Big|x\right)=\mu \left(x\beta \right) $$ If the link function is the log, as it is normally the case in health expenditure applications, then μ is the exponential function. The variance function is normally presented as: $$ v(x)=\kappa {\left(\mu \left(x\beta \right)\right)}^{\lambda } $$ When λ = 0, the variance is constant, when λ = 1, the variance is proportional to the mean, and when λ = 2 the variance is proportional to the mean squared [8]. In a modified Park test, the squared residuals of a provisional log-transformed ordinary least squares (OLS) model or a provisional GLM model are regressed on the predictions from the same model. The estimated coefficient λ indicates which variance function is appropriate, suggesting either a constant variance model (λ = 0), a proportional to the mean model (λ = 1) or a standard deviation proportional to the mean model (λ = 2). The last two models are sometimes also called 'Poisson-like' models or 'Gamma-like' models, respectively [17]. As suggested, the goodness of fit of competing model specifications was evaluated by comparing the mean absolute error, mean squared error, and R2 scores [8]. Tests concerning model fit encompass Pregibon's link test, Ramsey's regression equation specification error test, a modified Hosmer–Lemeshow test, Cook's distance, and a goodness of fit test for the combined model. 
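A hedged sketch of how such a two-part specification and the modified Park test could be implemented with the Python statsmodels package is given below; the paper does not state which software was actually used, and the variable names (y for monthly household OOPE, X for the covariate design matrix with a constant column) are assumptions. The last helper shows how the two parts combine into an unconditional prediction, E[y|x] = Pr(y > 0|x) * E[y|y > 0, x]. The Park-test helper uses one common log-log variant; other implementations run the auxiliary regression as a GLM with a log link.

import numpy as np
import statsmodels.api as sm

def two_part_model(y, X):
    # Part 1: probability of any out-of-pocket expenditure (logit).
    logit_res = sm.Logit((y > 0).astype(int), X).fit(disp=0)
    # Part 2: expenditure level conditional on y > 0 (GLM with log link and gamma variance,
    # i.e. standard deviation proportional to the mean, lambda = 2).
    pos = y > 0
    glm_res = sm.GLM(y[pos], X[pos],
                     family=sm.families.Gamma(link=sm.families.links.Log())).fit()
    return logit_res, glm_res

def park_lambda(glm_res, y_pos, X_pos):
    # Regress the log of the squared residuals on the log of the fitted values of the
    # provisional model; the slope estimates lambda in Var(y|x) = kappa * mu(x)^lambda.
    mu = glm_res.predict(X_pos)
    ols = sm.OLS(np.log((y_pos - mu) ** 2), sm.add_constant(np.log(mu))).fit()
    return ols.params[1]

def predict_unconditional(logit_res, glm_res, Xnew):
    # Combined two-part prediction: E[y|x] = Pr(y > 0|x) * E[y|y > 0, x].
    return logit_res.predict(Xnew) * glm_res.predict(Xnew)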
The dependent variable was defined as monthly OOPE per household and several socio-economic characteristics of the household were used as covariates: household structure or household life cycle, adults' age structure, adults' education level, public and private insurance characteristics, sex of the householder, income level, and degree of urbanization. In Table 4 (Appendix), detailed information on the specification of these variables and the percentages of observations with a specific characteristic are given. Table 1 presents the summary statistics of the dependent variable for the explanatory variables and distinguishes between the expenditure means and standard deviation (SD) for the total sample and those households with expenditure higher than zero (1384 households). The average OOPE for the total household size is 35.57 euros (SD = 133.28). The mean for households with non-0 OOPE is 148.74 (SD = 239.72). The data show substantial differences in the OOPE level between households with different characteristics. Descriptive statistics according to households' characteristics and structures Total households Dental Care Expenditures Average exp. Expenditures >0 Single person I Single person II Unmarried couple Full nest I Full nest II Married couple w/o childs Degree of urbanization High urbanization Average urbanization Low urbanization Age <25 Insurance characteristics (public) Private health insurance (1) N (households) Robustness checks Specialized treatments Routine dental services Notes: (1) corresponds to one adult of a household that has additional private health insurance. (2) corresponds to both adults of the household having additional private health insurance. This also includes households consisting of one individual (single person I and single person II). Dummy variables for female householders and income are not reported As a robustness check, OOPE is separated into two components, (i) routine dental services and (ii) specialized treatments (e.g. endodontic services, crowns, and bridges). Overall, 1291 households with positive OOPE are in this expenditure category (the mean OOPE for the total sample is 27.41; the mean OOPE for the sample whose OOPE is more than 0 is 122.88) while only 157 households have positive OOPE for routine dental services (the mean OOPE for the total sample is 8.15; the mean OOPE for the sample whose OOPE is more than 0 is 300.70), see Table 1. The econometric results of the two-part-model and the one-part GLM are summarized in Table 2. The probability of having OOPE is influenced strongly by the life cycle of households. In particular, larger observation units, like full nest I (for the specification of the household structure see Appendix Table 4 ), married couples without children, and full nest II, have higher probabilities of spending OOPE. Furthermore, these three household types represent the largest observation units with on average 3.3–4 household members. As only one household member with non-zero OOPE is sufficient to classify the total observation unit as a household that consumes OOPE, the higher probability of the mentioned household types might be explained, at least partially. In addition, there is strong evidence of the relationship between adults' age and the probability of consuming OOPE. Old age is an important driver of healthcare needs in general, and this is also true for dental healthcare. In the case of the used dataset, the age class of 65–85 years shows the highest probability of OOPE. 
The type of public insurance influences the probability of OOPE (the reference group is GKK). Households insured by BVA, SVB, and SVA show a higher probability of OOPE, but the results for SVA members remain insignificant. This might reflect the higher proportion of cost sharing in these medical insurance funds. The existence of private health insurance is without any statistically significant effect. Households with a higher level of education and income show a significantly higher probability of having OOPE. Econometric results of the two-part model and one-part GLM Dental care expenditure Probability (Logit) Conditional (GLM) a GLM a Rob. SD −0.287* −0.005 0.478** 0.775*** Married couple w/o children −0.287** Age 25–45 years Other characteristics Female householder Income (log) −4.795*** Observations (households) Notes: aGLM with log-link and gamma distribution. (1) corresponds to one adult of the household with additional private health insurance. (2) corresponds to both adults of the household having additional private health insurance. This also includes households consisting of one individual (single person I and single person II). Reference groups: single person I, high urbanization, age class 18–25 years, primary education, GKK, no additional private health insurance, and male householder. Significance levels: *** p < 0.01, ** p < 0.05, and * p < 0.1 In the second stage, the tested kurtosis verifies a log-link function and the estimated λ clearly suggests a SD proportional to the mean model. The estimated λ of the provisional OLS model with a log-transformed dependent variable has a score of 2.04 and a score of 2.004 in the provisional GLM model. In contrast to the highly statistically significant coefficients in the first stage, the covariates of the second stage remain predominantly insignificant. According to the results, the level of expenditure is driven mainly by the levels of education and income. One explanation is the well-known attitude of both these groups to contact private dentists with higher service fees. Columns 5 and 6 show the results of the one-stage GLM. The tested kurtosis takes the score 3.3, and therefore, justifies a log-link function. The applied Park test shows an estimated λ of 1.84 for the provisional OLS model with a log-transformed dependent variable and an estimated λ of 1.60 for the provisional GLM model. In the evaluation process, the SD proportional to the mean model clearly outperforms the proportional to the mean model, which is used in the subsequent analysis. The considered household types, income, adults' age, and education level show strong impacts on expenditure level. A negative effect of a lower degree of urbanization is revealed, which might reflect limited access to dental-care facilities. In summary, the findings of the one-stage GLM widely confirm the results the two-part-model. The econometric results for households consuming specialized treatments are presented in Table 3. The results of the two-part model and the one-part GLM are very similar to the results for total OOPE. The results for routine dental services are widely insignificant and therefore not presented in detail. Econometric results of the two-part model and one-part GLM (specialized treatments) Expenditures for specialized treatments Rob. S.D. Notes: aGLM with log-link and gamma distribution. Significance level *** p < 0.01, ** p < 0.05, * p < 0.1 Comparing the results with the findings of previous research [10–16] is only partially useful. This study analyses OOPE. 
The vast majority of previous studies analyse utilization, measured by visits or total dental expenditure. In addition, the focus of this study is on the household while previous research is based on individual data. Finally, the focus of this study is on socio-economic household characteristics as explanatory variables, and controlling for dental health status and supply-related characteristics in detail is impossible. However, the 'degree of urbanization' was used as a proxy for access to dental service. Therefore, this study abstains from drawing any conclusions related to supply side from the results (see Nguyen and Häkkinen [14]). Previous evidence sometimes points to a U-shaped relationship between age and dental utilization/expenditure. The shown effect of age on OOPE is higher in older age classes, which does not contradict a U-shaped relationship. The reference group consists of adults who are on average below 25 years of age. Children are included only in the household structure. Compared to Choi [10], the study presents new and dissenting findings on the role of public and private insurance characteristics on OOPE. The type of public insurance influences OOPE. Copayment mechanisms for routine dental services and, in particular, for special treatment differ between the public medical insurance funds. This is an essential feature of the Austrian healthcare system in general, although movements to harmonize the remuneration system of public dentists and the copayment schemes for specialized treatment are progressing. Of course, differences in the OOPE levels of members of the different public medical insurance funds might also be caused by unobserved heterogeneity between the members of the different insurance groups. Additional private health insurance is without any effect on OOPE. This could be interpreted as an indication that there is a sufficient public level of coverage against dental expenditure risks. We are cautious in drawing straight conclusions for dental health policy from our empirical findings. In fact we will briefly discuss the question whether data from household budget surveys are an adequate data basis to study the determinants of health care expenditures in general and expenditures for dental care in particular. Reliable data on OOPE are rare, their acquisition is very costly. Health related data sources (ATHIS, SHARE) normally use contacts with the health care system as an indicator for utilization. So the periodically repeated household budget surveys are an alternative information source. Overall the design of household budget surveys offers a high data quality. This is ensured via the level of instructions for the participants and the combination of the diary system and the recall approach. Potential underreporting of the levels of OOPE is reduced by the use of a disaggregated approach that asks for several OOPE categories. On the other hand household budget surveys also have clear limitations. They only include rudimental information on socio-economic characteristics of the household and its members which are important for explaining the utilization of dental services. The same is true for covariates which picture the supply side of dental services (f. e. density of private and public dentists). Additionally, the two modes of data gathering have consequences for the reliability of the empirical methods. 
In principle, the combination of the diary system for routine expenditures and the recall system for specialized treatments seems to be an adequate data-collection strategy. However, we should be aware that the length of the observation period (2 weeks in the diary system, 1 year in the recall system) has direct consequences for the empirical results in the two steps of the two-part model. Finally, the information from the household budget survey is period-based and does not allow a separation into the different steps of the utilization process. Such a separation is necessary to substantiate the two steps of the two-part model from an economic point of view. This paper analyses the socio-economic determinants of OOPE of private households in Austria using data from the household budget survey 2009/10. The main conclusions are as follows. The characteristics of the data (household-level data, period-based data, and short observation period) pose several challenges for the choice of empirical estimation procedure. We therefore decided to compare different econometric procedures (two-part model, GLM). Our estimation reveals highly significant results for several household characteristics (household life cycle, adults' age, adults' education, and income) in explaining the probability of OOPE (first stage). However, in the second stage (expenditure level), only income and education have significant coefficients. The one-part GLM estimation confirms the results of the two-part model. The existence of private health insurance has no influence on the expenditure probability/level, while the type of public insurance influences the expenditure probability. The household structure seems to have a strong effect on the expenditures for specialized treatments. The majority of the covariates used to explain expenditures for routine dental services are widely insignificant. The household turns out to be a promising basis to study the determinants of dental expenditure and supplements the previous research, which focuses on the individual level. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Overview of variable specification and corresponding share of observations Percentage of observations Single person 1 Household consists of 1 adult, single. Household consists of 1 adult, either married, divorced or widowed. Household consists of 2 adults, unmarried. Household consists of 2 adults, married, members are below 60 years. Household consists of 2 adults, married, members are above 60 years. Full nest 1 Household consists of 2 adults, members are below 40 years, at least one child. Household consists of 2 adults, members are above 40 years, at least one child. Household consists of more than 3 adults, married, no children. Household consists of one adult, at least one child. Areas with a population of at least 50,000 and more than 500 inhabitants per square kilometer. Areas with a population of at least 50,000 and 100–500 inhabitants per square kilometer. All other areas. Average age of both adults. Refers to householder, if household consists of one adult. Both adults have a primary education level. This also includes households consisting of one adult. 
Both adults have a mixed or secondary education level. This also includes households consisting of one adult. Both adults have a secondary education level. This also includes households cons/sting of one adult. Workers in the private sector. Refers to householder's insurance type: Public servants. Refers to householder's insurance type. Employers. Refers to householders insurance type. Farmers. Refers to householder's insurance type Additional private health insurance (1) One adult of the household has an additional health insurance. All adults have an additional health insurance. This includes households consisting of one adult. Householder is female. Monthly household Income in Euros. Both authors have no financial and non-financial competing interests Overall both authors contribute in an fair share to the different tasks of the manuscript. AS has the lead in the statistical analysis. ET contributes more to the drafting of the manuscript. Both authors read and approve the final manuscript. Department of Economics and Statistics, University of Innsbruck, Universitätsstrasse 15, A-6020 Innsbruck, Austria OECD. Health at glance. Paris. 2013.Google Scholar Austria S. Gesundheitsausgaben in Österreich. 2014. http://www.statistik.at/web_de/statistiken/menschen_und_gesellschaft/gesundheit/gesundheitsausgaben/index.html. Accessed date: 3 Dec 2014.Google Scholar Bilger M, Chaze J-P. What drives individual health expenditure in Switzerland? Swiss J Econ Stat. 2008;144:337–58.Google Scholar Chaze J-P. Assessing household health expenditure with Box-Cox censoring models. Health Econ. 2005;14:893–907.View ArticlePubMedGoogle Scholar Norton EC, Wang H, Stearns SC. Out-of-pocket health care expenditures. Swiss J Econ Stat. 2006;142:3–11.Google Scholar Jones G, Savage E, Van Gool K. The distribution of household health expenditures in Australia. Econ Rec. 2008;84:S99–114.View ArticleGoogle Scholar Jowett M, Contoyannis P, Vinh ND. The impact of public voluntary health insurance on private health expenditures in Vietnam. Soc Sci Med. 2003;56:333–42.View ArticlePubMedGoogle Scholar Matsaganis M, Mitrakos T, Tsakloglou P. Modelling health expenditure at the household level in Greece. Eur J Health Econ. 2009;10:329–36.View ArticlePubMedGoogle Scholar Leibowitz A, Manning WG, Newhouse JP. The demand for prescription drugs as a function of cost-sharing. Soc Sci Med. 1985;21:1063–9.View ArticlePubMedGoogle Scholar Choi MK. The impact of Medicaid insurance coverage on dental service use. J Health Econ. 2011;30:1020–31.View ArticlePubMedGoogle Scholar Groenewegen P, Postma J. The supply and utilization of dental services. Soc Sci Med. 1984;19:451–9.View ArticlePubMedGoogle Scholar Holtmann AG, Olsen Jr EO. The demand for dental care: a study of consumption and household production. J Hum Resour. 1976;11:546–60.View ArticlePubMedGoogle Scholar Manning Jr WG, Phelps CE. The demand for dental care. Bell J of Econ. 1979;10:503–25.View ArticleGoogle Scholar Nguyen L, Häkkinen U. Choices and utilization in dental care. Eur J of Health Econ. 2006;7:99–106.View ArticleGoogle Scholar Vargas CM, Manski RJ. Dental expenditures and source of payment by race/ethnicity and other sociodemographic characteristics. J Public Health Dent. 1999;59:33–8.View ArticlePubMedGoogle Scholar White BA. Factors influencing demand for dental services: population, demographics, disease, insurance. J Dent Educ. 2012;76:996–1007.PubMedGoogle Scholar Buntin MB, Zaslavsky AM. Too much ado about two-part models and transformation? 
Comparing methods of modeling Medicare expenditures. J Health Econ. 2004;23:525–42.View ArticlePubMedGoogle Scholar Deb P, Holmes AM. Estimates of use and costs of behavioural health care: a comparison of standard and finite mixture models. Health Econ. 2000;9:475–89.View ArticlePubMedGoogle Scholar Deb P, Trivedi PK. The structure of demand for health care: latent class versus two-part models. J Health Econ. 2002;21:601–25.View ArticlePubMedGoogle Scholar Gerdtham UG. Equity in health care utilization: further tests based on hurdle models and Swedish micro data. Health Econ. 1997;6:303–19.View ArticlePubMedGoogle Scholar Jones AM. Health econometrics. In: Culyer AJ, Newhouse JP, editors. Handbook of health economics. 1st ed. Amsterdam: Elsevier; 2000. p. 265–344.Google Scholar Pohlmeier W, Ulrich V. An econometric model of the two-part decision making process in the demand for health care. J Human Resour. 1995;30(2):339–61.View ArticleGoogle Scholar Santos Silva J, Windmeijer F. Two-part multiple spell models for health care demand. J Econometrics. 2001;104:67–89.View ArticleGoogle Scholar Stoddart GL, Barer ML. Analysis of demand and utilization through episodes of medical service. In: van der Gaag J, Perlman M, editors. Health economics. Amsterdam: North-Holland; 1981. p. 149–70.Google Scholar Grossman M. On the concept of health capital and the demand for health. J Polit Econ. 1972;80:223–55.View ArticleGoogle Scholar Wagstaff A. The demand for health: an empirical reformulation of the Grossman model. Health Econ. 1993;2:189–98.View ArticlePubMedGoogle Scholar Manning WG, Mullahy J. Estimating log models: to transform or not to transform? J Health Econ. 2001;20:461–94.View ArticlePubMedGoogle Scholar
CommonCrawl
The Annals of Statistics Ann. Statist. Volume 4, Number 3 (1976), 629-638. An Improved Estimator of the Generalized Variance R. W. Shorrock and J. V. Zidek A multivariate extension is made of Stein's result (1964) on the estimation of the normal variance. Here the generalized variance $|\Sigma|$ is being estimated from a Wishart random matrix $S: p \times p \sim W(n, \Sigma)$ and an independent normal random matrix $X: p \times k \sim N(\xi, \Sigma \otimes 1_k)$ with $\xi$ unknown. The main result is that the minimax, best affine equivariant estimator $((n + 2 - p)!/(n + 2)!)|S|$ is dominated by $\min\{((n + 2 - p)!/(n + 2)!)|S|, ((n + k + 2 - p)!/(n + k + 2)!)|S + XX'|\}$. It is obtained by a variant of Stein's method which exploits zonal polynomials. Primary: 62F10: Point estimation. Secondary: 62H99: None of the above, but in this section. Keywords: equivariant, multivariate normal matrix, noncentral Wishart matrix, zonal polynomials. Shorrock, R. W.; Zidek, J. V. An Improved Estimator of the Generalized Variance. Ann. Statist. 4 (1976), no. 3, 629--638. doi:10.1214/aos/1176343470. https://projecteuclid.org/euclid.aos/1176343470
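The following numerical sketch (not from the paper; the dimensions, scale matrix and mean matrix are arbitrary assumptions) evaluates the best affine equivariant estimator and the improved Stein-type estimator quoted in the abstract on one simulated draw.

```python
# Sketch of the two estimators of |Sigma| from the abstract, on simulated data.
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(1)
p, n, k = 3, 20, 4
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])

S = wishart(df=n, scale=Sigma).rvs(random_state=rng)    # S ~ W_p(n, Sigma)
xi = np.ones((p, k))                                     # unknown mean matrix
X = xi + np.linalg.cholesky(Sigma) @ rng.standard_normal((p, k))

def coeff(m, p):
    """(m - p)! / m! = 1 / (m (m - 1) ... (m - p + 1))"""
    return 1.0 / np.prod([m - j for j in range(p)])

est_equivariant = coeff(n + 2, p) * np.linalg.det(S)
est_improved = min(est_equivariant,
                   coeff(n + k + 2, p) * np.linalg.det(S + X @ X.T))

print("true |Sigma|        :", np.linalg.det(Sigma))
print("equivariant estimate:", est_equivariant)
print("improved estimate   :", est_improved)
```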
CommonCrawl
Fokker–Planck equation In statistical mechanics and information theory, the Fokker–Planck equation is a partial differential equation that describes the time evolution of the probability density function of the velocity of a particle under the influence of drag forces and random forces, as in Brownian motion. The equation can be generalized to other observables as well.[1] The Fokker-Planck equation has multiple applications in information theory, graph theory, data science, finance, economics etc. It is named after Adriaan Fokker and Max Planck, who described it in 1914 and 1917.[2][3] It is also known as the Kolmogorov forward equation, after Andrey Kolmogorov, who independently discovered it in 1931.[4] When applied to particle position distributions, it is better known as the Smoluchowski equation (after Marian Smoluchowski),[5] and in this context it is equivalent to the convection–diffusion equation. When applied to particle position and momentum distributions, it is known as the Klein–Kramers equation. The case with zero diffusion is the continuity equation. The Fokker–Planck equation is obtained from the master equation through Kramers–Moyal expansion.[6] The first consistent microscopic derivation of the Fokker–Planck equation in the single scheme of classical and quantum mechanics was performed by Nikolay Bogoliubov and Nikolay Krylov.[7][8] One dimension In one spatial dimension x, for an Itô process driven by the standard Wiener process $W_{t}$ and described by the stochastic differential equation (SDE) $dX_{t}=\mu (X_{t},t)\,dt+\sigma (X_{t},t)\,dW_{t}$ with drift $\mu (X_{t},t)$ and diffusion coefficient $D(X_{t},t)=\sigma ^{2}(X_{t},t)/2$, the Fokker–Planck equation for the probability density $p(x,t)$ of the random variable $X_{t}$ is [9] ${\frac {\partial }{\partial t}}p(x,t)=-{\frac {\partial }{\partial x}}\left[\mu (x,t)p(x,t)\right]+{\frac {\partial ^{2}}{\partial x^{2}}}\left[D(x,t)p(x,t)\right].$ Link between the Itô SDE and the Fokker–Planck equation In the following, use $\sigma ={\sqrt {2D}}$. Define the infinitesimal generator ${\mathcal {L}}$ (the following can be found in Ref.[10]): ${\mathcal {L}}p(X_{t})=\lim _{\Delta t\to 0}{\frac {1}{\Delta t}}\left(\mathbb {E} {\big [}p(X_{t+\Delta t})\mid X_{t}=x{\big ]}-p(x)\right).$ The transition probability $\mathbb {P} _{t,t'}(x\mid x')$, the probability of going from $(t',x')$ to $(t,x)$, is introduced here; the expectation can be written as $\mathbb {E} (p(X_{t+\Delta t})\mid X_{t}=x)=\int p(y)\,\mathbb {P} _{t+\Delta t,t}(y\mid x)\,dy.$ Now we replace in the definition of ${\mathcal {L}}$, multiply by $\mathbb {P} _{t,t'}(x\mid x')$ and integrate over $dx$. The limit is taken on $\int p(y)\int \mathbb {P} _{t+\Delta t,t}(y\mid x)\,\mathbb {P} _{t,t'}(x\mid x')\,dx\,dy-\int p(x)\,\mathbb {P} _{t,t'}(x\mid x')\,dx.$ Note now that $\int \mathbb {P} _{t+\Delta t,t}(y\mid x)\,\mathbb {P} _{t,t'}(x\mid x')\,dx=\mathbb {P} _{t+\Delta t,t'}(y\mid x'),$ which is the Chapman–Kolmogorov theorem. Changing the dummy variable $y$ to $x$, one gets ${\begin{aligned}\int p(x)\lim _{\Delta t\to 0}{\frac {1}{\Delta t}}\left(\mathbb {P} _{t+\Delta t,t'}(x\mid x')-\mathbb {P} _{t,t'}(x\mid x')\right)\,dx,\end{aligned}}$ which is a time derivative. 
Finally we arrive to $\int [{\mathcal {L}}p(x)]\mathbb {P} _{t,t'}(x\mid x')\,dx=\int p(x)\,\partial _{t}\mathbb {P} _{t,t'}(x\mid x')\,dx.$ From here, the Kolmogorov backward equation can be deduced. If we instead use the adjoint operator of ${\mathcal {L}}$, ${\mathcal {L}}^{\dagger }$, defined such that $\int [{\mathcal {L}}p(x)]\mathbb {P} _{t,t'}(x\mid x')\,dx=\int p(x)[{\mathcal {L}}^{\dagger }\mathbb {P} _{t,t'}(x\mid x')]\,dx,$ then we arrive to the Kolmogorov forward equation, or Fokker–Planck equation, which, simplifying the notation $p(x,t)=\mathbb {P} _{t,t'}(x\mid x')$, in its differential form reads ${\mathcal {L}}^{\dagger }p(x,t)=\partial _{t}p(x,t).$ Remains the issue of defining explicitly ${\mathcal {L}}$. This can be done taking the expectation from the integral form of the Itô's lemma: $\mathbb {E} {\big (}p(X_{t}){\big )}=p(X_{0})+\mathbb {E} \left(\int _{0}^{t}\left(\partial _{t}+\mu \partial _{x}+{\frac {\sigma ^{2}}{2}}\partial _{x}^{2}\right)p(X_{t'})\,dt'\right).$ The part that depends on $dW_{t}$ vanished because of the martingale property. Then, for a particle subject to an Itô equation, using ${\mathcal {L}}=\mu \partial _{x}+{\frac {\sigma ^{2}}{2}}\partial _{x}^{2},$ it can be easily calculated, using integration by parts, that ${\mathcal {L}}^{\dagger }=-\partial _{x}(\mu \cdot )+{\frac {1}{2}}\partial _{x}^{2}(\sigma ^{2}\cdot ),$ which bring us to the Fokker–Planck equation: $\partial _{t}p(x,t)=-\partial _{x}{\big (}\mu (x,t)\cdot p(x,t){\big )}+\partial _{x}^{2}\left({\frac {\sigma (x,t)^{2}}{2}}\,p(x,t)\right).$ While the Fokker–Planck equation is used with problems where the initial distribution is known, if the problem is to know the distribution at previous times, the Feynman–Kac formula can be used, which is a consequence of the Kolmogorov backward equation. The stochastic process defined above in the Itô sense can be rewritten within the Stratonovich convention as a Stratonovich SDE: $dX_{t}=\left[\mu (X_{t},t)-{\frac {1}{2}}{\frac {\partial }{\partial X_{t}}}D(X_{t},t)\right]\,dt+{\sqrt {2D(X_{t},t)}}\circ dW_{t}.$ It includes an added noise-induced drift term due to diffusion gradient effects if the noise is state-dependent. This convention is more often used in physical applications. Indeed, it is well known that any solution to the Stratonovich SDE is a solution to the Itô SDE. The zero-drift equation with constant diffusion can be considered as a model of classical Brownian motion: ${\frac {\partial }{\partial t}}p(x,t)=D_{0}{\frac {\partial ^{2}}{\partial x^{2}}}\left[p(x,t)\right].$ This model has discrete spectrum of solutions if the condition of fixed boundaries is added for $\{0\leq x\leq L\}$: $p(0,t)=p(L,t)=0,$ $p(x,0)=p_{0}(x).$ It has been shown[11] that in this case an analytical spectrum of solutions allows deriving a local uncertainty relation for the coordinate-velocity phase volume: $\Delta x\,\Delta v\geq D_{0}.$ Here $D_{0}$ is a minimal value of a corresponding diffusion spectrum $D_{j}$, while $\Delta x$ and $\Delta v$ represent the uncertainty of coordinate–velocity definition. 
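As a quick numerical cross-check of the one-dimensional equation above, the sketch below integrates the Itô SDE with constant drift and diffusion by the Euler–Maruyama scheme and compares the empirical density of the end points with the Gaussian solution of the corresponding Fokker–Planck equation; the parameter values are arbitrary and the script is illustrative only.

```python
# Monte Carlo check of the 1-D Fokker-Planck equation for constant coefficients:
# dX = mu dt + sqrt(2 D) dW has density p(x,t) = N(x0 + mu t, 2 D t).
import numpy as np

rng = np.random.default_rng(42)
mu, D, x0 = 0.5, 0.25, 0.0           # constant drift and diffusion coefficient
T, n_steps, n_paths = 2.0, 400, 100_000
dt = T / n_steps

x = np.full(n_paths, x0)
for _ in range(n_steps):              # Euler-Maruyama integration of the SDE
    x += mu * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_paths)

# Analytic Fokker-Planck solution at time T
grid = np.linspace(x.min(), x.max(), 60)
p_exact = np.exp(-(grid - (x0 + mu * T))**2 / (4.0 * D * T)) / np.sqrt(4.0 * np.pi * D * T)

# Compare with the empirical histogram of the simulated end points
hist, edges = np.histogram(x, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("max |empirical - exact| on the histogram grid:",
      np.max(np.abs(hist - np.interp(centers, grid, p_exact))))
```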
Higher dimensions More generally, if $d\mathbf {X} _{t}={\boldsymbol {\mu }}(\mathbf {X} _{t},t)\,dt+{\boldsymbol {\sigma }}(\mathbf {X} _{t},t)\,d\mathbf {W} _{t},$ where $\mathbf {X} _{t}$ and ${\boldsymbol {\mu }}(\mathbf {X} _{t},t)$ are N-dimensional random vectors, ${\boldsymbol {\sigma }}(\mathbf {X} _{t},t)$ is an $N\times M$ matrix and $\mathbf {W} _{t}$ is an M-dimensional standard Wiener process, the probability density $p(\mathbf {x} ,t)$ for $\mathbf {X} _{t}$ satisfies the Fokker–Planck equation ${\frac {\partial p(\mathbf {x} ,t)}{\partial t}}=-\sum _{i=1}^{N}{\frac {\partial }{\partial x_{i}}}\left[\mu _{i}(\mathbf {x} ,t)p(\mathbf {x} ,t)\right]+\sum _{i=1}^{N}\sum _{j=1}^{N}{\frac {\partial ^{2}}{\partial x_{i}\,\partial x_{j}}}\left[D_{ij}(\mathbf {x} ,t)p(\mathbf {x} ,t)\right],$ with drift vector ${\boldsymbol {\mu }}=(\mu _{1},\ldots ,\mu _{N})$ and diffusion tensor $ \mathbf {D} ={\frac {1}{2}}{\boldsymbol {\sigma \sigma }}^{\mathsf {T}}$, i.e. $D_{ij}(\mathbf {x} ,t)={\frac {1}{2}}\sum _{k=1}^{M}\sigma _{ik}(\mathbf {x} ,t)\sigma _{jk}(\mathbf {x} ,t).$ If instead of an Itô SDE, a Stratonovich SDE is considered, $d\mathbf {X} _{t}={\boldsymbol {\mu }}(\mathbf {X} _{t},t)\,dt+{\boldsymbol {\sigma }}(\mathbf {X} _{t},t)\circ d\mathbf {W} _{t},$ the Fokker–Planck equation will read:[10]: 129  ${\frac {\partial p(\mathbf {x} ,t)}{\partial t}}=-\sum _{i=1}^{N}{\frac {\partial }{\partial x_{i}}}\left[\mu _{i}(\mathbf {x} ,t)\,p(\mathbf {x} ,t)\right]+{\frac {1}{2}}\sum _{k=1}^{M}\sum _{i=1}^{N}{\frac {\partial }{\partial x_{i}}}\left\{\sigma _{ik}(\mathbf {x} ,t)\sum _{j=1}^{N}{\frac {\partial }{\partial x_{j}}}\left[\sigma _{jk}(\mathbf {x} ,t)\,p(\mathbf {x} ,t)\right]\right\}$ Generalization In general, the Fokker–Planck equations are a special case to the general Kolmogorov forward equation $\partial _{t}\rho ={\mathcal {A}}^{*}\rho $ where the linear operator ${\mathcal {A}}^{*}$ is the Hermitian adjoint to the infinitesimal generator for the Markov process.[12] Examples Wiener process A standard scalar Wiener process is generated by the stochastic differential equation $dX_{t}=dW_{t}.$ Here the drift term is zero and the diffusion coefficient is 1/2. Thus the corresponding Fokker–Planck equation is ${\frac {\partial p(x,t)}{\partial t}}={\frac {1}{2}}{\frac {\partial ^{2}p(x,t)}{\partial x^{2}}},$ which is the simplest form of a diffusion equation. If the initial condition is $p(x,0)=\delta (x)$, the solution is $p(x,t)={\frac {1}{\sqrt {2\pi t}}}e^{-{x^{2}}/({2t})}.$ Ornstein–Uhlenbeck process The Ornstein–Uhlenbeck process is a process defined as $dX_{t}=-aX_{t}dt+\sigma dW_{t}.$ with $a>0$. Physically, this equation can be motivated as follows: a particle of mass $m$ with velocity $V_{t}$ moving in a medium, e.g., a fluid, will experience a friction force which resists motion whose magnitude can be approximated as being proportional to particle's velocity $-aV_{t}$ with $a=\mathrm {constant} $. Other particles in the medium will randomly kick the particle as they collide with it and this effect can be approximated by a white noise term; $\sigma (dW_{t}/dt)$. Newton's second law is written as $m{\frac {dV_{t}}{dt}}=-aV_{t}+\sigma {\frac {dW_{t}}{dt}}.$ Taking $m=1$ for simplicity and changing the notation as $V_{t}\rightarrow X_{t}$ leads to the familiar form $dX_{t}=-aX_{t}dt+\sigma dW_{t}$. 
The corresponding Fokker–Planck equation is ${\frac {\partial p(x,t)}{\partial t}}=a{\frac {\partial }{\partial x}}\left(x\,p(x,t)\right)+{\frac {\sigma ^{2}}{2}}{\frac {\partial ^{2}p(x,t)}{\partial x^{2}}},$ The stationary solution ($\partial _{t}p=0$) is $p_{ss}(x)={\sqrt {\frac {a}{\pi \sigma ^{2}}}}e^{-{\frac {ax^{2}}{\sigma ^{2}}}}.$ Plasma physics In plasma physics, the distribution function for a particle species $s$, $p_{s}(\mathbf {x} ,\mathbf {v} ,t)$, takes the place of the probability density function. The corresponding Boltzmann equation is given by ${\frac {\partial p_{s}}{\partial t}}+\mathbf {v} \cdot {\boldsymbol {\nabla }}p_{s}+{\frac {Z_{s}e}{m_{s}}}\left(\mathbf {E} +\mathbf {v} \times \mathbf {B} \right)\cdot {\boldsymbol {\nabla }}_{v}p_{s}=-{\frac {\partial }{\partial v_{i}}}\left(p_{s}\langle \Delta v_{i}\rangle \right)+{\frac {1}{2}}{\frac {\partial ^{2}}{\partial v_{i}\,\partial v_{j}}}\left(p_{s}\langle \Delta v_{i}\,\Delta v_{j}\rangle \right),$ where the third term includes the particle acceleration due to the Lorentz force and the Fokker–Planck term at the right-hand side represents the effects of particle collisions. The quantities $\langle \Delta v_{i}\rangle $ and $\langle \Delta v_{i}\,\Delta v_{j}\rangle $ are the average change in velocity a particle of type $s$ experiences due to collisions with all other particle species in unit time. Expressions for these quantities are given elsewhere.[13] If collisions are ignored, the Boltzmann equation reduces to the Vlasov equation. Smoluchowski diffusion equation Consider an overdamped Brownian particle under external force $F(r)$:[14] $m{\ddot {r}}=-\gamma {\dot {r}}+F(r)+\sigma \xi (t)$ where the $m{\ddot {r}}$ term is negligible (the meaning of "overdamped"). Thus, it is just $\gamma dr=F(r)dt+\sigma dW_{t}$. The Fokker–Planck equation for this particle is the Smoluchowski diffusion equation: $\partial _{t}P(r,t|r_{0},t_{0})=\nabla \cdot [D(\nabla -\beta F(r))P(r,t|r_{0},t_{0})]$ Where $D$ is the diffusion constant and $\beta ={\frac {1}{k_{\text{B}}T}}$. The importance of this equation is it allows for both the inclusion of the effect of temperature on the system of particles and a spatially dependent diffusion constant. Derivation of the Smoluchowski Equation from the Fokker–Planck Equation Starting with the Langevin Equation of a Brownian particle in external field $F(r)$, where $\gamma $ is the friction term, $\xi $ is a fluctuating force on the particle, and $\sigma $ is the amplitude of the fluctuation. $m{\ddot {r}}=-\gamma {\dot {r}}+F(r)+\sigma \xi (t)$ At equilibrium the frictional force is much greater than the inertial force, $\left\vert \gamma {\dot {r}}\right\vert \gg \left\vert m{\ddot {r}}\right\vert $. Therefore, the Langevin equation becomes, $\gamma {\dot {r}}=F(r)+\sigma \xi (t)$ Which generates the following Fokker–Planck equation, $\partial _{t}P(r,t|r_{0},t_{0})=\left(\nabla ^{2}{\frac {\sigma ^{2}}{2\gamma ^{2}}}-\nabla \cdot {\frac {F(r)}{\gamma }}\right)P(r,t|r_{0},t_{0})$ Rearranging the Fokker–Planck equation, $\partial _{t}P(r,t|r_{0},t_{0})=\nabla \cdot \left(\nabla D-{\frac {F(r)}{\gamma }}\right)P(r,t|r_{0},t_{0})$ Where $D={\frac {\sigma ^{2}}{2\gamma ^{2}}}$. Note, the diffusion coefficient may not necessarily be spatially independent if $\sigma $ or $\gamma $ are spatially dependent. 
Next, the total number of particles in any particular volume is given by, $N_{V}(t|r_{0},t_{0})=\int _{V}drP(r,t|r_{0},t_{0})$ Therefore, the flux of particles can be determined by taking the time derivative of the number of particles in a given volume, plugging in the Fokker–Planck equation, and then applying Gauss's Theorem. $\partial _{t}N_{V}(t|r_{0},t_{0})=\int _{V}dV\nabla \cdot \left(\nabla D-{\frac {F(r)}{\gamma }}\right)P(r,t|r_{0},t_{0})=\int _{\partial V}d\mathbf {a} \cdot j(r,t|r_{0},t_{0})$ $j(r,t|r_{0},t_{0})=\left(\nabla D-{\frac {F(r)}{\gamma }}\right)P(r,t|r_{0},t_{0})$ In equilibrium, it is assumed that the flux goes to zero. Therefore, Boltzmann statistics can be applied for the probability of a particles location at equilibrium, where $F(r)=-\nabla U(r)$ is a conservative force and the probability of a particle being in a state $r$ is given as $P(r,t|r_{0},t_{0})={\frac {e^{-\beta U(r)}}{Z}}$. $j(r,t|r_{0},t_{0})=\left(\nabla D-{\frac {F(r)}{\gamma }}\right){\frac {e^{-\beta U(r)}}{Z}}=0$ $\Rightarrow \nabla D=F(r)\left({\frac {1}{\gamma }}-D\beta \right)$ This relation is a realization of the fluctuation–dissipation theorem. Now applying $\nabla \cdot \nabla $ to $DP(r,t|r_{0},t_{0})$ and using the Fluctuation-dissipation theorem, ${\begin{aligned}\nabla \cdot \nabla DP(r,t|r_{0},t_{0})&=\nabla \cdot D\nabla P(r,t|r_{0},t_{0})+\nabla \cdot P(r,t|r_{0},t_{0})\nabla D\\&=\nabla \cdot D\nabla P(r,t|r_{0},t_{0})+\nabla \cdot P(r,t|r_{0},t_{0}){\frac {F(r)}{\gamma }}-\nabla \cdot P(r,t|r_{0},t_{0})D\beta F(r)\end{aligned}}$ Rearranging, $\Rightarrow \nabla \cdot \left(\nabla D-{\frac {F(r)}{\gamma }}\right)P(r,t|r_{0},t_{0})=\nabla \cdot D(\nabla -\beta F(r))P(r,t|r_{0},t_{0})$ Therefore, the Fokker–Planck equation becomes the Smoluchowski equation, $\partial _{t}P(r,t|r_{0},t_{0})=\nabla \cdot D(\nabla -\beta F(r))P(r,t|r_{0},t_{0})$ for an arbitrary force $F(r)$. Computational considerations Brownian motion follows the Langevin equation, which can be solved for many different stochastic forcings with results being averaged (canonical ensemble in molecular dynamics). However, instead of this computationally intensive approach, one can use the Fokker–Planck equation and consider the probability $p(\mathbf {v} ,t)\,d\mathbf {v} $ of the particle having a velocity in the interval $(\mathbf {v} ,\mathbf {v} +d\mathbf {v} )$ when it starts its motion with $\mathbf {v} _{0}$ at time 0. 1-D linear potential example Brownian dynamics in one dimension is simple.[14][15] Theory Starting with a linear potential of the form $U(x)=cx$ the corresponding Smoluchowski equation becomes, $\partial _{t}P(x,t|x_{0},t_{0})=\partial _{x}D(\partial _{x}+\beta c)P(x,t|x_{0},t_{0})$ Where the diffusion constant, $D$, is constant over space and time. The boundary conditions are such that the probability vanishes at $x\rightarrow \pm \infty $ with an initial condition of the ensemble of particles starting in the same place, $P(x,t|x_{0},t_{0})=\delta (x-x_{0})$. 
Defining $\tau =Dt$ and $b=\beta c$ and applying the coordinate transformation, $y=x+\tau b,\ \ \ y_{0}=x_{0}+\tau _{0}b$ With $P(x,t,|x_{0},t_{0})=q(y,\tau |y_{0},\tau _{0})$ the Smoluchowki equation becomes, $\partial _{\tau }q(y,\tau |y_{0},\tau _{0})=\partial _{y}^{2}q(y,\tau |y_{0},\tau _{0})$ Which is the free diffusion equation with solution, $q(y,\tau |y_{0},\tau _{0})={\frac {1}{\sqrt {4\pi (\tau -\tau _{0})}}}e^{-{\frac {(y-y_{0})^{2}}{4(\tau -\tau _{0})}}}$ And after transforming back to the original coordinates, $P(x,t|x_{0},t_{0})={\frac {1}{\sqrt {4\pi D(t-t_{0})}}}\exp {\left[{-{\frac {(x-x_{0}+D\beta c(t-t_{0}))^{2}}{4D(t-t_{0})}}}\right]}$ Simulation The simulation on the right was completed using a Brownian dynamics simulation.[16][17] Starting with a Langevin equation for the system, $m{\ddot {x}}=-\gamma {\dot {x}}-c+\sigma \xi (t)$ where $\gamma $ is the friction term, $\xi $ is a fluctuating force on the particle, and $\sigma $ is the amplitude of the fluctuation. At equilibrium the frictional force is much greater than the inertial force, $\left|\gamma {\dot {x}}\right|\gg \left|m{\ddot {x}}\right|$. Therefore, the Langevin equation becomes, $\gamma {\dot {x}}=-c+\sigma \xi (t)$ For the Brownian dynamic simulation the fluctuation force $\xi (t)$ is assumed to be Gaussian with the amplitude being dependent of the temperature of the system $ \sigma ={\sqrt {2\gamma k_{\text{B}}T}}$. Rewriting the Langevin equation, ${\frac {dx}{dt}}=-D\beta c+{\sqrt {2D}}\xi (t)$ where $ D={\frac {k_{\text{B}}T}{\gamma }}$ is the Einstein relation. The integration of this equation was done using the Euler–Maruyama method to numerically approximate the path of this Brownian particle. Solution Being a partial differential equation, the Fokker–Planck equation can be solved analytically only in special cases. A formal analogy of the Fokker–Planck equation with the Schrödinger equation allows the use of advanced operator techniques known from quantum mechanics for its solution in a number of cases. Furthermore, in the case of overdamped dynamics when the Fokker–Planck equation contains second partial derivatives with respect to all spatial variables, the equation can be written in the form of a master equation that can easily be solved numerically.[18] In many applications, one is only interested in the steady-state probability distribution $p_{0}(x)$, which can be found from $ {\frac {\partial p(x,t)}{\partial t}}=0$. The computation of mean first passage times and splitting probabilities can be reduced to the solution of an ordinary differential equation which is intimately related to the Fokker–Planck equation. Particular cases with known solution and inversion In mathematical finance for volatility smile modeling of options via local volatility, one has the problem of deriving a diffusion coefficient ${\sigma }(\mathbf {X} _{t},t)$ consistent with a probability density obtained from market option quotes. The problem is therefore an inversion of the Fokker–Planck equation: Given the density f(x,t) of the option underlying X deduced from the option market, one aims at finding the local volatility ${\sigma }(\mathbf {X} _{t},t)$ consistent with f. 
This is an inverse problem that has been solved in general by Dupire (1994, 1997) with a non-parametric solution.[19][20] Brigo and Mercurio (2002, 2003) propose a solution in parametric form via a particular local volatility ${\sigma }(\mathbf {X} _{t},t)$ consistent with a solution of the Fokker–Planck equation given by a mixture model.[21][22] More information is available also in Fengler (2008),[23] Gatheral (2008),[24] and Musiela and Rutkowski (2008).[25] Fokker–Planck equation and path integral Every Fokker–Planck equation is equivalent to a path integral. The path integral formulation is an excellent starting point for the application of field theory methods.[26] This is used, for instance, in critical dynamics. A derivation of the path integral is possible in a similar way as in quantum mechanics. The derivation for a Fokker–Planck equation with one variable $x$ is as follows. Start by inserting a delta function and then integrating by parts: ${\begin{aligned}{\frac {\partial }{\partial t}}p{\left(x',t\right)}&=-{\frac {\partial }{\partial x'}}\left[D_{1}(x',t)p(x',t)\right]+{\frac {\partial ^{2}}{\partial {x'}^{2}}}\left[D_{2}(x',t)p(x',t)\right]\\[5pt]&=\int _{-\infty }^{\infty }\mathrm {d} x\left(\left[D_{1}\left(x,t\right){\frac {\partial }{\partial x}}+D_{2}\left(x,t\right){\frac {\partial ^{2}}{\partial x^{2}}}\right]\delta \left(x'-x\right)\right)p\!\left(x,t\right).\end{aligned}}$ The $x$-derivatives here only act on the $\delta $-function, not on $p(x,t)$. Integrate over a time interval $\varepsilon $, $p(x',t+\varepsilon )=\int _{-\infty }^{\infty }\,\mathrm {d} x\left(\left(1+\varepsilon \left[D_{1}(x,t){\frac {\partial }{\partial x}}+D_{2}(x,t){\frac {\partial ^{2}}{\partial x^{2}}}\right]\right)\delta (x'-x)\right)p(x,t)+O(\varepsilon ^{2}).$ Insert the Fourier integral $\delta {\left(x'-x\right)}=\int _{-i\infty }^{i\infty }{\frac {\mathrm {d} {\tilde {x}}}{2\pi i}}e^{{\tilde {x}}{\left(x-x'\right)}}$ for the $\delta $-function, ${\begin{aligned}p(x',t+\varepsilon )&=\int _{-\infty }^{\infty }\mathrm {d} x\int _{-i\infty }^{i\infty }{\frac {\mathrm {d} {\tilde {x}}}{2\pi i}}\left(1+\varepsilon \left[{\tilde {x}}D_{1}(x,t)+{\tilde {x}}^{2}D_{2}(x,t)\right]\right)e^{{\tilde {x}}(x-x')}p(x,t)+O(\varepsilon ^{2})\\[5pt]&=\int _{-\infty }^{\infty }\mathrm {d} x\int _{-i\infty }^{i\infty }{\frac {\mathrm {d} {\tilde {x}}}{2\pi i}}\exp \left(\varepsilon \left[-{\tilde {x}}{\frac {(x'-x)}{\varepsilon }}+{\tilde {x}}D_{1}(x,t)+{\tilde {x}}^{2}D_{2}(x,t)\right]\right)p(x,t)+O(\varepsilon ^{2}).\end{aligned}}$ This equation expresses $p(x',t+\varepsilon )$ as functional of $p(x,t)$. Iterating $(t'-t)/\varepsilon $ times and performing the limit $\varepsilon \rightarrow 0$ gives a path integral with action $S=\int \mathrm {d} t\left[{\tilde {x}}D_{1}(x,t)+{\tilde {x}}^{2}D_{2}(x,t)-{\tilde {x}}{\frac {\partial x}{\partial t}}\right].$ The variables ${\tilde {x}}$ conjugate to $x$ are called "response variables".[27] Although formally equivalent, different problems may be solved more easily in the Fokker–Planck equation or the path integral formulation. The equilibrium distribution for instance may be obtained more directly from the Fokker–Planck equation. See also • Kolmogorov backward equation • Boltzmann equation • Vlasov equation • Master equation • Mean-field game theory • Bogoliubov–Born–Green–Kirkwood–Yvon hierarchy of equations • Ornstein–Uhlenbeck process • Convection–diffusion equation • Klein–Kramers equation Notes and references 1. Leo P. Kadanoff (2000). 
Statistical Physics: statics, dynamics and renormalization. World Scientific. ISBN 978-981-02-3764-6. 2. Fokker, A. D. (1914). "Die mittlere Energie rotierender elektrischer Dipole im Strahlungsfeld". Ann. Phys. 348 (4. Folge 43): 810–820. Bibcode:1914AnP...348..810F. doi:10.1002/andp.19143480507. 3. Planck, M. (1917). "Über einen Satz der statistischen Dynamik und seine Erweiterung in der Quantentheorie". Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin. 24: 324–341. 4. Kolmogorov, Andrei (1931). "Über die analytischen Methoden in der Wahrscheinlichkeitstheorie" [On Analytical Methods in the Theory of Probability]. Mathematische Annalen (in German). 104 (1): 415–458 [pp. 448–451]. doi:10.1007/BF01457949. S2CID 119439925. 5. Dhont, J. K. G. (1996). An Introduction to Dynamics of Colloids. Elsevier. p. 183. ISBN 978-0-08-053507-4. 6. Paul, Wolfgang; Baschnagel, Jörg (2013). "A Brief Survey of the Mathematics of Probability Theory". Stochastic Processes. Springer. pp. 17–61 [esp. 33–35]. doi:10.1007/978-3-319-00327-6_2. ISBN 978-3-319-00326-9. 7. N. N. Bogolyubov Jr. and D. P. Sankovich (1994). "N. N. Bogolyubov and statistical mechanics". Russian Math. Surveys 49(5): 19—49. doi:10.1070/RM1994v049n05ABEH002419 8. N. N. Bogoliubov and N. M. Krylov (1939). Fokker–Planck equations generated in perturbation theory by a method based on the spectral properties of a perturbed Hamiltonian. Zapiski Kafedry Fiziki Akademii Nauk Ukrainian SSR 4: 81–157 (in Ukrainian). 9. Risken, H. (1996), The Fokker–Planck Equation: Methods of Solution and Applications, vol. Second Edition, Third Printing, p. 72 10. Öttinger, Hans Christian (1996). Stochastic Processes in Polymeric Fluids. Berlin-Heidelberg: Springer-Verlag. p. 75. ISBN 978-3-540-58353-0. 11. Kamenshchikov, S. (2014). "Clustering and Uncertainty in Perfect Chaos Systems". Journal of Chaos. 2014: 1–6. arXiv:1301.4481. doi:10.1155/2014/292096. S2CID 17719673. 12. Pavliotis, Grigorios A. (2014). Stochastic Processes and Applications : Diffusion Processes, the Fokker-Planck and Langevin Equations. Springer. pp. 38–40. doi:10.1007/978-1-4939-1323-7_2. ISBN 978-1-4939-1322-0. 13. Rosenbluth, M. N. (1957). "Fokker–Planck Equation for an Inverse-Square Force". Physical Review. 107 (1): 1–6. Bibcode:1957PhRv..107....1R. doi:10.1103/physrev.107.1. 14. Ioan, Kosztin (Spring 2000). "Smoluchowski Diffusion Equation". Non-Equilibrium Statistical Mechanics: Course Notes. 15. Kosztin, Ioan (Spring 2000). "The Brownian Dynamics Method Applied". Non-Equilibrium Statistical Mechanics: Course Notes. 16. Koztin, Ioan. "Brownian Dynamics". Non-Equilibrium Statistical Mechanics: Course Notes. Archived from the original on 2020-01-15. Retrieved 2020-05-18. 17. Kosztin, Ioan. "The Brownian Dynamics Method Applied". Non-Equilibrium Statistical Mechanics: Course Notes. Archived from the original on 2020-01-15. Retrieved 2020-05-18. 18. Holubec Viktor, Kroy Klaus, and Steffenoni Stefano (2019). "Physically consistent numerical solver for time-dependent Fokker–Planck equations". Phys. Rev. E. 99 (4): 032117. arXiv:1804.01285. Bibcode:2019PhRvE..99c2117H. doi:10.1103/PhysRevE.99.032117. PMID 30999402. S2CID 119203025.{{cite journal}}: CS1 maint: multiple names: authors list (link) 19. Bruno Dupire (1994) Pricing with a Smile. Risk Magazine, January, 18–20. 20. Bruno Dupire (1997) Pricing and Hedging with Smiles. Mathematics of Derivative Securities. Edited by M.A.H. Dempster and S.R. Pliska, Cambridge University Press, Cambridge, 103–111. ISBN 0-521-58424-8. 
21. Brigo, D.; Mercurio, Fabio (2002). "Lognormal-Mixture Dynamics and Calibration to Market Volatility Smiles". International Journal of Theoretical and Applied Finance. 5 (4): 427–446. CiteSeerX 10.1.1.210.4165. doi:10.1142/S0219024902001511. 22. Brigo, D.; Mercurio, F.; Sartorelli, G. (2003). "Alternative asset-price dynamics and volatility smile". Quantitative Finance. 3 (3): 173–183. doi:10.1088/1469-7688/3/3/303. S2CID 154069452. 23. Fengler, M. R. (2008). Semiparametric Modeling of Implied Volatility, 2005, Springer Verlag, ISBN 978-3-540-26234-3 24. Jim Gatheral (2008). The Volatility Surface. Wiley and Sons, ISBN 978-0-471-79251-2. 25. Marek Musiela, Marek Rutkowski. Martingale Methods in Financial Modelling, 2008, 2nd Edition, Springer-Verlag, ISBN 978-3-540-20966-9. 26. Zinn-Justin, Jean (1996). Quantum field theory and critical phenomena. Oxford: Clarendon Press. ISBN 978-0-19-851882-2. 27. Janssen, H. K. (1976). "On a Lagrangean for Classical Field Dynamics and Renormalization Group Calculation of Dynamical Critical Properties". Z. Phys. B23 (4): 377–380. Bibcode:1976ZPhyB..23..377J. doi:10.1007/BF01316547. S2CID 121216943. Further reading • Frank, Till Daniel (2005). Nonlinear Fokker–Planck Equations: Fundamentals and Applications. Springer Series in Synergetics. Springer. ISBN 3-540-21264-7. • Gardiner, Crispin (2009). Stochastic Methods (4th ed.). Springer. ISBN 978-3-540-70712-7. • Pavliotis, Grigorios A. (2014). Stochastic Processes and Applications: Diffusion Processes, the Fokker–Planck and Langevin Equations. Springer Texts in Applied Mathematics. Springer. ISBN 978-1-4939-1322-0. • Risken, Hannes (1996). The Fokker–Planck Equation: Methods of Solutions and Applications. Springer Series in Synergetics (2nd ed.). Springer. ISBN 3-540-61530-X.
Wikipedia
\begin{definition}[Definition:Artinian Module] Let $A$ be a commutative ring with unity. Let $M$ be an $A$-module. Then $M$ is an '''Artinian module''' if either of the following conditions holds: :$(1): \quad$ $M$ satisfies the descending chain condition on submodules :$(2): \quad$ $M$ satisfies the minimal condition on submodules. \end{definition}
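A standard example, added here for illustration and not part of the ProofWiki source: the $\mathbb{Z}$-module $\mathbb{Z}$ is not Artinian, because the chain of submodules :$\mathbb{Z} \supsetneq 2\mathbb{Z} \supsetneq 4\mathbb{Z} \supsetneq 8\mathbb{Z} \supsetneq \cdots$ is strictly descending and never terminates, so the descending chain condition fails. By contrast, every finite-dimensional vector space over a field $K$ is an Artinian $K$-module, since a strictly descending chain of subspaces has length at most the dimension of the space.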
ProofWiki
Applied Water Science October 2018 , 8:179 | Cite as Equilibrium, kinetics and thermodynamic studies of cadmium(II) biosorption on Nannochloropsis oculata Jyothi Kaparapu M. Krishna Prasad The marine microalga Nannochloropsis oculata was investigated for its biosorption capacity for the removal of Cd(II) ions from aqueous solution using batch mode experiments. pH (2–5), biomass dosage (0.0191 g/50 mL and 0.392 g/50 mL) and temperature (293–323 K) being the experimental parameters affecting the biosorption process were observed. To describe the experimental equilibrium data, Langmuir and Freundlich isotherms models were applied. The biosorption potential of N. oculata biomass for Cd(II) ions was found to be 232.55 mg/g. The calculated thermodynamic parameters (∆G°, ∆H° and ∆S°) showed that the biosorption of Cd(II) ions onto N. oculata was feasible, spontaneous and exothermic at 298–323 K. Evaluation of experimental data in terms of biosorption kinetics showed that the biosorption of Cd(II) by N. oculata well followed pseudo-second-order kinetics. The FTIR spectra indicated that the functional groups predominantly involved in the biosorption were –OH, COO–, –CH and phosphate groups. The XRD pattern of the biosorbent showed a change in crystallinity of N. oculata biomass after the biosorption. It was concluded that N. oculata can be used as an effective, low-cost and environmentally friendly biosorbent for the removal of Cd(II) from aqueous solution. Biosorption Cadmium(II) FTIR Isotherms Kinetics Nannochloropsis oculata XRD Globally, heavy metal pollution has become a major issue because the heavy metal content in drinking waters and wastewaters often exceeds the permissible standards. Battery manufacturing, metal plating, fertilizer industry, pigment dye stuff, mining operations and textile release heavy metals to environment via their waste effluents (Volesky 1990; Unlu and Ersoz 2006). Cadmium is one of the most toxic, non-biodegradable heavy metals and accumulated by absorption into living organisms (Victor et al. 2007). Cadmium toxicity effects include renal dysfunction, hypertension, hepatic injury, lung damage, anemia and teratogenic effects (Yu et al. 1999; Lodeiro et al. 2006; Kaewsarn and Yu 2001; Cheung et al. 2001). Cadmium pollution is due to metal plating, metallurgic alloying, ceramics, textile printing industry, photograph development, electroplating, alkaline battery manufacturing industries (Kadirvelu et al. 2001; Zhu et al. 2007). Therefore, cadmium level in wastewater, drinking water and water used for agriculture should be limited to the maximum permissible concentration (0.01 mg/L) (WPCRT 2004). The usual methods for the removal of heavy metal ions including cadmium from aqueous solutions can be stated as chemical precipitation, ion exchange, solvent extraction, phytoextraction, ultrafiltration, reverse osmosis, electrodialysis and adsorption (Patterson 1985; Bhattacharya and Mandal 2006). However, technical or economic constraints limit sometimes the feasibility of such processes. Biosorption process is emerging as one of the attractive technologies to remove heavy metals from aqueous solution. Biomasses such as bacteria (Iyer et al. 2005), yeast (Padmavathi et al. 2003), fungi (Goksungur et al. 2005; Anayurt et al. 2009) and algae (Gupta and Rastogi 2008a, b; Gupta and Rastogi 2009) were investigated as biosorbent for the removal of heavy metals. 
The major advantages of the biosorption technology by the use of inexpensive, naturally abundant algae are its effectiveness in reducing the concentration of colored organic compounds and toxic chemicals with an unpleasant smell (Holan et al. 1993; El-Sikaily et al. 2007). In algae, the biosorption has mainly been attributed to the cell wall structure containing functional groups such as amino, hydroxyl, carboxyl and sulfate, which can act as binding sites for metals via both electrostatic attraction and complexation (Hamdy 2000). Marine algae because of their cheap availability in both fresh and saltwater, relatively high surface area and high binding affinity have been found to be potentially suitable biosorbents (Hamdy 2000). It has been demonstrated that algae biosorbent might be effective in dead cells form. The easier cultivation of microalgae, its higher production yield, higher performance and efficiency because of higher specific biosorption area make microalgae more promising than macroalgae. Different species of algal biomasses (brown, green and red) have been used for the removal of heavy metals from aqueous solution (Xin Sheng et al. 2004; Kumar et al. 2006; Schmitt et al. 2001). However, there is no extensive study on the biosorption of Cd(II) using N. oculata, an unicellular green microalgae. An attempt was made by choosing N. oculata in this study due to its renewable and cost-effective nature. The present study focused on the biosorption behavior of Nannochloropsis biomass for the removal of Cd(II) ions from aqueous solution. Experimental parameters affecting the biosorption process such as pH, contact time, biomass dosage and temperature were evaluated. The equilibrium biosorption data were evaluated by Langmuir and Freundlich isotherm models. Bioadsorbent was characterised by FTIR and XRD studies. The biosorption mechanism was also investigated in terms of thermodynamics and kinetics. Experimental procedures Biomass preparation The microalga N. oculata was collected from the Central Marine Fisheries Research Institute (CMFRI), Visakhapatnam. Nannochloropsis oculata was (4 million/mL) rinsed with distilled water twice, filtered by vacuum filtration and resuspended in distilled water. Sodium alginate (5%) solution was prepared with equal quantities of algal solution and alginate solution at room temperature. This uniform mixture of algae and sodium alginate solution (2.5%) was pumped through the peristaltic pump into the 0.5 M CaCl2 2H2O solution. The beads (2 million/mL) were stored at 4 °C overnight for curing with 0.25 M CaCl2 2H2O solution and washed with distilled water to avoid excess CaCl2 2H2O. These beads were used for equilibrium studies. Each 10 mL of beads contained 0.0191 g of dry N. oculata biomass, and this value was used for further calculation. Reagents and equipments All chemicals used in this work were of analytical reagent grade and were used without further purification. A Perkin-Elmer A Analyst 700 flame atomic absorption spectrometer (AAS) with deuterium background corrector was used. All measurements were carried out in an air acetylene flame. Preparation of adsorbate solutions Cadmium chloride solution was prepared by dissolving 3.6178 g of cadmium chloride salt in 1000-mL standard volumetric flask with deionized water. The primary stock solution thus had about 2222 ppm of Cd(II) in solution. From the stock solutions, experimental test solutions were prepared by diluting the primary stock solution with demineralized water. 
pH was maintained at 2–5 by addition of approximate amount of 0.1 N HCl. Batch biosorption procedure Biosorption experiments were carried out at the desired pH value, contact time and biomass dosage level using the necessary biomass in a 250-mL stoppered conical flask containing 50 mL of test solution. Initial solutions with different concentrations of Cd(II) were prepared by proper dilution from stock 1000 mg/L Cd(II) standards. Sodium phosphate buffer (0.1 mol/L) was prepared by adding an appropriate amount of phosphoric acid to sodium dihydrogen phosphate solution to result in a solution of pH 2. Ammonium acetate buffers (0.1 mol/L) were prepared by adding an appropriate amount of acetic acid to ammonium acetate solutions to result in solutions of pH 4–6. Ammonium chloride buffer solutions (0.1 mol/L) were prepared by adding an appropriate amount of ammonia to ammonium chloride solution to result in solutions of pH 8. Necessary amount of the biomass was then added, and contents in the flask were shaken for the desired contact time in an electrically thermostatic reciprocating shaker at 200 rpm. The time required for reaching the equilibrium condition was estimated by drawing samples at regular intervals of time till equilibrium was reached. The contents of the flask were filtered through filter paper, and the filtrate was analyzed for metal concentration by using flame AAS. The percent biosorption of metal ion was calculated as follows: $${\text{Biosorption}}\;(\% ) = (C_{O} - C_{T} )/C_{O} \times 100,$$ where CO and CT are the initial and final metal ion concentrations, respectively. Biosorption experiments for the effect of pH were conducted by using a solution having 100 mg/L of Cd(II) concentration with a biomass dosage of 10 g/L. FTIR studies The powdered biomass before and after adsorption was air-dried, and the moisture was removed completely at 60 °C in a humidity control oven. The powder was analyzed by Fourier-transform infrared spectrophotometer (FTIR) by potassium bromide (KBr) pellet method in the wave number range of 400.00–4000.00 cm−1 (Perkin-Elmer No. 72425). X-ray diffraction analysis The XRD of each biomass powder sample was obtained using XRD-6000 Shimadzu, Japan Model. The diffracted X-ray intensities were recorded as a function of 2θ, at a scan speed of 1.2°/min, and pattern was recorded from 10° to 70°. FTIR analysis The FTIR spectra of unloaded biomass and Cd(II)-loaded biomass were taken (Fig. 1) to obtain information on the nature of possible interactions between the functional groups of N. oculata and the metal ions. The broad and strong band at 3523.13–3542.42 cm−1 may be due to the N–H stretching vibration. The broad and strong band from 3201.01 to 3494.20 cm−1 may be due to the stretching vibration of O–H. The band peaks at 3108.42, 3116.14, 3129.64, 3139.28, 3151.82, 3174, 3181.72 cm−1 are assigned to –CH stretching on the biomass surface. Some bands in the fingerprint region 407.06 and 484.15 cm−1 could be attributed to the phosphate groups. The significant changes in the wave numbers of these specific peaks suggested that amido, hydroxyl and phosphate groups could be involved in the biosorption of Cd(II) onto N. oculata. The similar results were reported for the biosorption of different heavy metals on various species of algae (Xin Sheng et al. 2004; Murphy et al. 2007). FT-IR spectrum a Cd(II)-loaded algal biomass and, b unloaded algal biomass XRD patterns of microalgae N. oculata before and after biosorption are depicted in Fig. 
2, and they indicated poor crystallinity of pure biomass. Furthermore, the shift in 2θ and d spacing values was observed in Cd(II)-loaded biomass. From these observations, it could be concluded that there was a change in the crystallinity of biomass N. oculata after the biosorption. XRD pattern of a untreated and b treated with Cd(II) N. oculata Effect of pH The pH plays an important role in the biosorption process of heavy metal ions from aqueous solutions. Algal biomasses contain high content of carboxyl groups from mannuronic and glucuronic acids on the cell wall polysaccharides, which suggests that the biosorption process could be affected by pH changes in the solution. To examine the effect of pH on the cadmium ions removal, several experiments were performed at different pH ranges from 2 to 5 as shown in Fig. 3. The biosorption efficiency was obtained as 46%, 25% and 18% at pH 5, 3 and 2. All the biosorption experiments were carried out at pH 5 because the maximum efficiency was obtained as 46% at that pH value. At higher pH values, the biosorption yield for Cd(II) was dramatically decreased. At pH range 2–4, the poor biosorption of Cd(II) could be due to competition with the H+ ions for metal binding sites on the algal cell. With increase in pH, the biosorption of the Cd(II) with positive charge reached a maximum. The decrease in the biosorption efficiency at higher pH (6–8) values may be attributed to the formation of anionic hydroxide complexes and their competition with the active sites (Kumar et al. 2006; Rao et al. 2005). Effect of pH on equilibrium distribution of cadmium metal ion (298 K; 0.091 g/50 mL) Effect of biomass dosage The effect of biomass dosage on the biosorption of Cd(II) ions was studied using different biomass dosages of 0.0191 g/50 mL and 0.392 g/50 mL (Fig. 4). Results showed that the biosorption efficiency is highly dependent on the increase in biomass dosage of the solution. This is expected because the higher dose of adsorbent in the solution, the greater availability of exchangeable sites for the ions. The maximum biosorption of the metal ions was attained at about biomass dosage 0.392 g/50 mL and was almost same at higher dosages. The decrease in biosorption efficiency at higher biomass concentration could be explained as a consequence of a partial aggregation of biomass, which results in a decrease in effective surface area for the biosorption (Karthikeyan et al. 2007). Therefore, the optimum biomass dosage was selected as 0.0191 g/50 mL for further experiments. Effect of biomass dosage (pH = 5; temperature 298 K) Effects of contact time and temperature The contact time was also evaluated as one of the most important factors affecting the biosorption efficiency. Figure 5 shows the biosorption efficiency of Cd(II) ions by N. oculata as a function of contact time and temperature. The biosorption efficiency increases with rise in contact time up to 90 min at 293–323 K, and then, it is almost constant. Therefore, the optimum contact time was selected as 60 min for further experiments. On the other hand, the biosorption yield decreased from 45 to 27% for Cd(II) ion with increasing temperature from 298 to 323 K during a 90-min contact time. This result indicated the exothermic nature of Cd(II) biosorption onto N. oculata. 
This decrease in biosorption efficiency may be attributed to several factors: the relative increase in the escaping tendency of the cadmium ions from the solid phase to the bulk phase; deactivation of the biosorbent surface or destruction of some active sites on the biosorbent surface due to bond ruptures (Meena et al. 2005); or the weakness of biosorptive forces between the active sites of the sorbent and the sorbate species, and also between adjacent molecules of the sorbed phase. These results are consistent with the thermodynamic analysis presented below. Effect of contact time and temperature at pH = 5. a Variation of cadmium metal ion with time at pH = 5 and temperature 298 K. b Variation of cadmium metal ion distribution with temperature at pH = 5 and 0.091 g/50 mL Biosorption isotherm models A biosorption isotherm is characterized by certain constant values, which express the surface properties and affinity of the biosorbent and can also be used to compare the biosorptive capacities of the biosorbent for different pollutants (Dursun et al. 2005). In this study, two important sorption isotherm models were selected to fit the experimental data, namely the Langmuir and Freundlich isotherm models. The Langmuir isotherm models a single sorbate layer on the sorption surface. This model assumes that the sorption process takes place at specific sorption sites and that the attraction between molecules decreases with increasing distance from the sorption surface. The Langmuir isotherm can be defined according to the following equation (Langmuir 1918): $$\frac{C_{\text{e}}}{q_{\text{e}}} = \frac{1}{q_{\text{m}} b} + \frac{C_{\text{e}}}{q_{\text{m}}},$$ where qe is the equilibrium metal ion concentration on the adsorbent (mg/g), Ce is the equilibrium metal ion concentration in the solution (mg/L), qm is the monolayer biosorption capacity of the adsorbent (mg/g), and b (also denoted KL) is the Langmuir biosorption constant (L/mg) related to the free energy of biosorption. Figure 6 shows the linear relationship between the amount (mg) of Cd(II) ions sorbed per unit mass (g) of N. oculata and the concentration of Cd(II) ions remaining in solution (mg/L). The correlation coefficient (R2) was found to be 0.992 for Cd(II) biosorption. The high R2 value indicated that the equilibrium data fitted well to the Langmuir model. In other words, the sorption of metal ions onto N. oculata took place at the functional groups/binding sites on the surface of the biomass, which is regarded as monolayer biosorption. The KL value was found to be 5.4 × 10−3 L/mg for Cd(II) ion (Fig. 6). Langmuir isotherm plots for biosorption of Cd(II) onto N. oculata biomass (biomass dosage 20 g/L; contact time 60 min; pH 5; temperature 298 K) The Freundlich isotherm is used for modeling adsorption on heterogeneous surfaces. This isotherm can be expressed as follows: $$q_{\text{e}} = K_{f} C_{\text{e}}^{1/n_{f}},$$ $$\ln q_{\text{e}} = \ln K_{f} + \frac{1}{n_{f}} \ln C_{\text{e}},$$ where Kf is a constant relating to the biosorption capacity and 1/n is an empirical parameter relating to the biosorption intensity, which varies with the heterogeneity of the material (Fig. 7). Freundlich isotherms obtained for the biosorption of Cd(II) ions onto N. oculata biomass using Eq. (3) The values of Kf and 1/n were found to be 4.4 and 0.3, respectively. The 1/n value was between 0 and 1, indicating that the biosorption of Cd(II) onto N. oculata biomass was favorable under the studied conditions.
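The linearized Langmuir and Freundlich fits described above can be reproduced with a few lines of code. The sketch below is a minimal illustration, not the authors' analysis: the Ce and qe arrays are placeholder values rather than the measured data, and numpy.polyfit is used for the straight-line fits of Ce/qe versus Ce and ln qe versus ln Ce.

import numpy as np

# Placeholder equilibrium data (NOT the paper's measurements): Ce in mg/L, qe in mg/g.
Ce = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
qe = np.array([11.0, 25.0, 45.0, 80.0, 130.0])

# Linearized Langmuir: Ce/qe = 1/(qm*b) + Ce/qm  ->  slope = 1/qm, intercept = 1/(qm*b)
slope_L, intercept_L = np.polyfit(Ce, Ce / qe, 1)
qm = 1.0 / slope_L
b = 1.0 / (qm * intercept_L)

# Linearized Freundlich: ln qe = ln Kf + (1/nf) * ln Ce
slope_F, intercept_F = np.polyfit(np.log(Ce), np.log(qe), 1)
Kf = np.exp(intercept_F)
n_inv = slope_F          # this is 1/nf; values between 0 and 1 indicate favorable sorption

print(f"Langmuir: qm = {qm:.1f} mg/g, b = {b:.4f} L/mg")
print(f"Freundlich: Kf = {Kf:.2f}, 1/n = {n_inv:.2f}")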
However, comparison of the R2 values (0.978, versus 0.992 obtained from the Langmuir model) shows that the Langmuir isotherm model fitted the equilibrium data better. Biosorption kinetics In order to examine the controlling mechanism of the biosorption process, kinetic models were used to test the experimental data. The equilibrium data were analyzed using two of the simplest kinetic models, the pseudo-first-order and pseudo-second-order models. The linear form of the pseudo-first-order rate equation by Lagergren (1898) is given as $$\ln (q_{\text{e}} - q_{t}) = \ln q_{\text{e}} - k_{1} t,$$ where qt and qe (mg/g) are the amounts of the metal ions biosorbed at time t (min) and at equilibrium, respectively, and k1 is the rate constant of the first-order equation (min−1). The biosorption rate constant (k1) can be determined experimentally by plotting ln(qe − qt) versus t (Fig. 8). Pseudo-first-order kinetic plot for biosorption of Cd(II) at 298 K and pH 5 The pseudo-second-order kinetic model, which is given in the following form, was also fitted to the experimental data: $$\tfrac{t}{q_{t}} = \tfrac{1}{k_{2} q_{\text{e}}^{2}} + \tfrac{1}{q_{\text{e}}} t,$$ where k2 (g/mg min) is the rate constant of the second-order equation, and qt and qe (mg/g) are the amounts of the metal ions biosorbed at time t (min) and at equilibrium, respectively. This model is more likely to predict the kinetic behavior of biosorption with chemical sorption being the rate-controlling step (Dubinin and Radushkevich 1947). The linear plots of t/qt versus t for the pseudo-second-order model for the biosorption of Cd(II) ions onto the algal biomass at 293 K are shown in Fig. 9. These results suggest that this model successfully describes the kinetics of the biosorption of Cd(II) ions onto N. oculata. This conclusion is in agreement with that obtained by other authors (Martinez et al. 2006; Schmitt et al. 2001). The rate constants (k2), the R2 and the qe values are given in Table 1. It is clear from these results that the R2 values are very high. Pseudo-second-order kinetic plot for biosorption of Cd(II) at pH 5 Table 1 Langmuir and Freundlich isotherm model parameters for Cd(II) biosorption on N. oculata (columns: temperature in K; Langmuir constants qm (mg g−1) and b (L mmol−1); Freundlich constants Kf and nf) Biosorption thermodynamics Thermodynamic parameters including the change in free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) were used to describe the thermodynamic behavior of the biosorption of Cd(II) ions onto N. oculata (Choi et al. 2009). These parameters were calculated from the following equations: $$\Delta G^{\circ} = -RT \ln K_{c},$$ where R is the universal gas constant (8.314 J/mol K), T is the temperature (K) and Kc (qe/Ce) is the distribution coefficient (Dubinin and Radushkevich 1947). By considering the following equations, the enthalpy (∆H°) and entropy (∆S°) of biosorption were estimated from the slope and intercept of the plot of ln Kc versus 1/T (Fig. 10). $$K_{C} = \frac{C_{S}}{C_{\text{e}}}$$ $$\log \left( \frac{C_{S}}{C_{\text{e}}} \right) = \frac{\Delta S^{\circ}}{2.303R} - \frac{\Delta H^{\circ}}{2.303RT}$$ Plot of ln Kc versus 1/T for the estimation of thermodynamic parameters The free energy change (∆G°) was calculated to be − 45.94, − 44.02 and kJ/mol for the biosorption of Cd(II) at 298, 313 and 323 K, respectively. The negative ∆G° values indicated the thermodynamically feasible and spontaneous nature of the biosorption.
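A similar minimal sketch, again with placeholder numbers rather than the study's measurements, shows how the pseudo-second-order parameters and the thermodynamic quantities described above can be estimated. Natural logarithms are used in the van't Hoff fit, which is equivalent to the base-10 form quoted above; all variable names are ours.

import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

# Placeholder kinetic data (not the paper's measurements): time in min, qt in mg/g.
t = np.array([10.0, 20.0, 40.0, 60.0, 90.0])
qt = np.array([20.0, 32.0, 42.0, 46.0, 48.0])

# Pseudo-second-order linear form: t/qt = 1/(k2*qe^2) + t/qe
slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit = 1.0 / slope
k2 = 1.0 / (intercept * qe_fit ** 2)

# Thermodynamics: dG = -R*T*ln(Kc); dH and dS from the slope/intercept of ln(Kc) vs 1/T.
T = np.array([298.0, 313.0, 323.0])          # K
Kc = np.array([1.2, 0.9, 0.7])               # placeholder distribution coefficients Cs/Ce
dG = -R * T * np.log(Kc) / 1000.0            # kJ/mol
vh_slope, vh_intercept = np.polyfit(1.0 / T, np.log(Kc), 1)
dH = -R * vh_slope / 1000.0                  # kJ/mol (slope of ln Kc vs 1/T is -dH/R)
dS = R * vh_intercept                        # J/(mol K) (intercept is dS/R)

print(f"pseudo-second-order: qe = {qe_fit:.1f} mg/g, k2 = {k2:.4f} g/(mg min)")
print(f"dG (kJ/mol) at {T} K: {np.round(dG, 2)}")
print(f"dH = {dH:.2f} kJ/mol, dS = {dS:.2f} J/(mol K)")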
The decrease in the magnitude of ∆G° with increasing temperature shows a decrease in the feasibility of biosorption at higher temperatures. The enthalpy of biosorption (∆H°) was found to be − 20.15 kJ/mol. The negative ∆H° is an indicator of the exothermic nature of the biosorption, and its magnitude also gives information on the type of biosorption, which can be either physical or chemical. The enthalpy value (− 31.8 kJ/mol) indicated that the biosorption of Cd(II) ions onto N. oculata biomass proceeded chemically, because its magnitude falls into the 20.9–418.4 kJ/mol range (Freundlich 1906). The ∆S° parameter was found to be 22.8 J/mol K for Cd(II) biosorption. The negative ∆S° value suggests a decrease in the randomness at the solid/solution interface during the biosorption process. Conclusions This study focused on the biosorption of Cd(II) ions onto algal biomass (N. oculata) from aqueous solution, and the effects of the operating parameters pH, biomass dosage, contact time and temperature on the biosorption efficiency of Cd(II) were studied. Biosorption equilibrium was better described by the Langmuir isotherm model than by the Freundlich isotherm. The monolayer biosorption capacity of N. oculata for Cd(II) was found to be 232.55 mg/g. Kinetic examination of the equilibrium data showed that the biosorption of Cd(II) ions onto Nannochloropsis followed the pseudo-second-order kinetic model well. The thermodynamic calculations indicated the feasible, exothermic and spontaneous nature of the biosorption process at 298–323 K. Based on the results, it can be concluded that N. oculata is an effective alternative biomass for the removal of Cd(II) ions from aqueous solution. Acknowledgements One of the authors, Jyothi Kaparapu, gratefully acknowledges financial assistance from the UGC, New Delhi (Post-Doctoral Fellowship for Women) No. F.151/2014-15/PDFWM-2014-15-AND-26595(SAII). References Anayurt RA, Sari A, Tuzen M (2009) Equilibrium, thermodynamic and kinetic studies on biosorption of Pb(II) and Cd(II) from aqueous solution by macrofungus (Lactarius scrobiculatus) biomass. Chem Eng J 151:255–261 Bhattacharya AKSN, Mandal DS (2006) Adsorption of Zn(II) from aqueous solution by using different adsorbents. Chem Eng J 123:43–51 Cheung CW, Porter JF, Mckay G (2001) Sorption kinetic analysis for the removal of cadmium ions from effluents using bone char. Water Res 35:605–612 Choi HD, Jung WS, Cho JM, Ryu BG, Yang JS, Baek K (2009) Adsorption of Cr(VI) onto cationic surfactant-modified activated carbon. J Hazard Mater 166:642–646 Dubinin MM, Radushkevich LV (1947) Equation of the characteristic curve of activated charcoal. Chemisches Zentralblatt 1:875–889 Dursun G, Cicek H, Dursun AY (2005) Adsorption of phenol from aqueous solution by using carbonized beet pulp. J Hazard Mater 125:175–182 El-Sikaily A, El-Nemr A, Khaled A, Abdelwehab O (2007) Removal of toxic chromium from wastewater using green alga Ulva lactuca and its activated carbon. J Hazard Mater 148:216–228 Freundlich HMF (1906) Uber die adsorption in losungen. Zeitschrift fur Physikalische Chemie (Leipzig) A 57:385–470 Goksungur Y, Uren S, Guvenc U (2005) Biosorption of cadmium and lead ions by ethanol treated waste baker's yeast biomass. Bioresour Technol 96(1):103–109 Gupta VK, Rastogi A (2008a) Biosorption of lead(II) from aqueous solutions by non-living algal biomass Oedogonium sp.
and Nostoc sp.—a comparative study. Colloids Surf B Biointerface 64:170–178 Gupta VK, Rastogi A (2008b) Biosorption of lead from aqueous solutions by green algae Spirogyra species: kinetics and equilibrium studies. J Hazard Mater 152:407–414 Gupta VK, Rastogi A (2009) Biosorption of hexavalent chromium by raw and acid-treated green alga Oedogonium hatei from aqueous solutions. J Hazard Mater 163:396–402 Hamdy AA (2000) Removal of Pb2+ by biomass of marine algae. Curr Microbiol 41:239–245 Holan ZR, Volesky B, Prasetyo I (1993) Biosorption of cadmium by biomass of marine algae. Biotechnol Bioeng 41:819–825 Iyer A, Mody K, Jha B (2005) Biosorption of heavy metals by a marine bacterium. Mar Pollut Bull 50(3):340–343 Kadirvelu K, Thamaraiselvi K, Namasivayam C (2001) Removal of heavy metals from industrial wastewaters by adsorption onto activated carbon prepared from an agricultural solid waste. Bioresour Technol 76:63–65 Kaewsarn P, Yu Q (2001) Cadmium(II) removal from aqueous solutions by pretreated biomass of marine alga Padina sp. Environ Pollut 112:209–213 Karthikeyan S, Balasubramanian R, Iyer CSP (2007) Evaluation of the marine algae Ulva fasciata and Sargassum sp. for the biosorption of Cu(II) from aqueous solutions. Biores Technol 98:452–455 Kumar YP, King P, Prasad VSRK (2006) Removal of copper from aqueous solution using Ulva fasciata sp., a marine green algae. J Hazard Mater 137:367–373 Lagergren S (1898) Zur theorie der sogenannten adsorption gelöster stoffe. K Sven Vetenskapsakad Handl 24:1–39 Langmuir I (1918) The adsorption of gases on plane surfaces of glass, mica and platinum. J Am Chem Soc 40:1361–1403 Lodeiro P, Barriada JL, Herrero R, Sastre de Vicente ME (2006) The marine macroalga Cystoseira baccata as biosorbent for cadmium(II) and lead(II) removal: kinetic and equilibrium studies. Environ Pollut 142:264–273 Martinez M, Miralles N, Hidalgo S, Fiol N, Villaescusa I, Poch J (2006) Removal of lead(II) and cadmium(II) from aqueous solutions using grape stalk waste. J Hazard Mater B 133:203–211 Meena AK, Mishra GK, Rai PK, Rajagopal C, Nagar PN (2005) Removal of heavy metal ions from aqueous solutions using carbon aerogel as an adsorbent. J Hazard Mater 122:161–170 Murphy V, Hughes H, McLoughlin P (2007) Cu(II) binding by dried biomass of red, green and brown macroalgae. Water Res 41:731–740 Padmavathy V, Vasudevan P, Dhingra SC (2003) Biosorption of nickel(II) ions on Baker's yeast. Process Biochem 38(10):1389–1395 Patterson JW (1985) Industrial wastewater treatment technology, 2nd edn. Butterworth-Heinemann, London Rao PS, Kalyani S, Suresh Reddy KVN, Krishnaiah A (2005) Comparison of biosorption of nickel(II) and copper(II) ions from aqueous solution by sphaeroplea algae and acid treated sphaeroplea algae. Sep Sci Technol 40:3149–3165 Schmitt D, Müller A, Csögör Z, Frimmel FH, Posten C (2001) The adsorption kinetics of metal ions onto different microalgae and siliceous earth. Water Res 35:779–785 Unlu N, Ersoz M (2006) Adsorption characteristics of heavy metal ions onto a low cost biopolymeric sorbent from aqueous solutions.
J Hazard Mater 136:272–280 Victor JPV, Cidalia MSB, Rui ARB (2007) Chromium and zinc uptake by algae Gelidium and agar extraction algal waste: kinetics and equilibrium. J Hazard Mater 149:643–649 Volesky B (1990) Removal and recovery of heavy metals by biosorption. CRC Press, Boca Raton, pp 3–43 WPCRT (2004) Water pollution control regulation of Turkish Authorities. Turkish Official Gazette, Turkey, p 25687 Xin Sheng P, Ting YP, Paul Chen J, Hong L (2004) Sorption of lead, copper, cadmium, zinc, and nickel by marine algal biomass: characterization of biosorptive capacity and investigation of mechanisms. J Colloid Interface Sci 275:131–141 Yu Q, Matheickal JT, Yin P, Kaewsarn P (1999) Heavy metal uptake capacities of common marine macroalgal biomass. Water Res 33:1534–1537 Zhu C, Luan Z, Shan X (2007) Removal of cadmium from aqueous solution by adsorption on granular red mud (GRM). Sep Purif Technol 57:161–169 1. Department of Botany, Andhra University, Visakhapatnam, India 2. Department of Chemical Engineering, GMR Institute of Technology, Rajam, Srikakulam District, India Kaparapu, J. & Krishna Prasad, M. Appl Water Sci (2018) 8:179. https://doi.org/10.1007/s13201-018-0810-y
CommonCrawl
\begin{document} \title{Entropic Quantization of Scalar Fields\thanks{Presented at MaxEnt 2014, the 34th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (September 21--26, 2014, Amboise, France). }} \author{Selman Ipek, Ariel Caticha\\{\small Physics Department, University at Albany-SUNY, Albany, NY 12222, USA.}} \date{} \maketitle \begin{abstract} Entropic Dynamics is an information-based framework that seeks to derive the laws of physics as an application of the methods of entropic inference. The dynamics is derived by maximizing an entropy subject to constraints that represent the physically relevant information that the motion is continuous and non-dissipative. Here we focus on the quantum theory of scalar fields. We provide an entropic derivation of Hamiltonian dynamics and using concepts from information geometry derive the standard quantum field theory in the Schr\"{o}dinger representation. \end{abstract} \section{Introduction} In the entropic dynamics (ED) framework quantum theory is derived as an application of entropic methods of inference. The framework has been applied to non-relativistic particles \cite{Caticha 2010} and to relativistic scalar fields \cite{Caticha 2012} leading to several new insights on foundational issues such as the nature of time and the problem of measurement in quantum mechanics. For example, just as in other forms of dynamics, time is defined so that motion looks simple. In the ED of fields entropic time is defined so that the fields undergo equal fluctuations in equal times---the field fluctuations play the role of a clock. The spatial uniformity of these fluctuations guarantees that time flows at the same rate everywhere. Another appealing insight is the new light entropic methods cast on the interpretation of the infinities typical of quantum field theory. We find that the divergences are not physical but epistemic effects; they are indications that the information that is relevant for the prediction of certain quantities is very incomplete. In recent work the ED framework has been substantially improved in several respects. The early formulations involved assumptions that seemed ad hoc. Their justification was purely pragmatic --- they worked; they led to the right answers. For example, use was made of auxiliary variables the physical interpretation of which remained obscure, and there were further assumptions about the configuration space metric and the form of the quantum potential. In \cite{Caticha 2014} it was shown that the auxiliary variables were in fact unnecessary and could be eliminated. More recently, in \cite{Caticha et al 2014b}, we have shown that ED can lead to a Hamiltonian dynamics and that the tools of information geometry can be used to provide natural justifications for both the metric of configuration space and for the particular form of the quantum potential. In this paper these improvements --- the elimination of auxiliary variables, the derivation of Hamiltonian dynamics, and the introduction of information geometry methods --- are extended to the ED of quantum scalar fields. \section{Entropic Dynamics} We wish to study the quantum dynamics of a single scalar field. A field configuration $\phi(x)$ associates one real degree of freedom to each spatial point $x$ in three-dimensional Euclidean space. 
Such a configuration is represented as a point $\phi\in\mathcal{C}$ in an $\infty$-dimensional configuration space $\mathcal{C}$ and it is convenient to represent the point $\phi$ as a vector with infinitely many components denoted $\phi_{x}=\phi (x)$.\footnote{$\infty$-infinite dimensional spaces are complicated objects. We make no claim of mathematical rigor and follow the standard assumptions, notation, and practices of the subject. To be definite we can, for example, assume that the fields are initially defined on a discrete lattice (which makes the dimension of $\mathcal{C}$ infinite but countable) and that the continuum is eventually reached in the limit of vanishing lattice constant.} In the ED framework the field $\phi_{x}$ has definite values which indicates a major departure from the standard Copenhagen interpretation. However, in general, the values $\phi_{x}$ are unknown and the objective is to determine how their probability distribution $\rho\lbrack\phi]$ changes over time. The first goal will be to use the method of maximum entropy to find the probability $P[\phi^{\prime}|\phi]$ that the field configuration makes a transition from a configuration $\phi$ to a neighboring configuration $\phi^{\prime}$. \paragraph*{The prior} We start from a prior for the transition probability distribution $Q[\phi^{\prime}|\phi]$ that expresses extreme ignorance: before any information is taken into account the knowledge of how the field changes at one point $x$ tells us nothing about how it changes at other points $x^{\prime}$. This state of ignorance is represented by a prior that is a product over all space points, \begin{equation} Q[\phi^{\prime}|\phi]\sim {\textstyle\prod\limits_{x}} Q(\phi_{x}^{\prime}|\phi_{x})~. \label{prior} \end{equation} Furthermore, we assume that for every point $x$ knowledge about the initial $\phi_{x}$ tells us nothing about the final $\phi_{x}^{\prime}$. This is represented by $Q(\phi_{x}^{\prime}|\phi_{x})\sim$ constant. Since such constants have no effect on entropy maximization we can set $Q[\phi^{\prime }|\phi]=1$. \paragraph*{The constraints} The actual information about evolution is introduced through constraints. The first piece of information is that the evolution of the fields is continuous. This means that at first we need only consider a small change; later we will consider how a large change is achieved as a result of many small changes. For each $x$ the field will change by a small amount from $\phi_{x}$ to $\phi _{x}^{\prime}=\phi_{x}+\Delta\phi_{x}$ and we impose that the expected squared change is \begin{equation} \left\langle \Delta\phi_{x}^{2}\right\rangle =\int D\phi^{\prime}\,P\left[ \phi^{\prime}|\phi\right] \,\left( \Delta\phi_{x}\right) ^{2}=\kappa_{x}~, \label{1Constr} \end{equation} where $ {\textstyle\int} D\phi$ denotes a functional integration over $\mathcal{C}$. This is an infinite number of constraints; one for each point $x$. The constant $\kappa_{x}$ is some small number and a continuous motion will be eventually achieved by letting $\kappa_{x}\rightarrow0$. To reflect the translational invariance of three-dimensional Euclidean space we will set $\kappa_{x} =\kappa$ independent of $x$. The constraints (\ref{1Constr}) lead to an evolution that is completely isotropic in $\mathcal{C}$. 
Directionality is introduced assuming the existence of a \textquotedblleft potential\textquotedblright\ $\Lambda =\Lambda\lbrack\phi]$ and imposing a constraint on the expected displacement $\left\langle \Delta\phi\right\rangle $ along the functional gradient of $\Lambda$, \begin{equation} \left\langle \Delta\phi\right\rangle \cdot\nabla\Lambda\left[ \phi\right] \equiv\int D\phi^{\prime}\,P\left[ \phi^{\prime}|\phi\right] \,\int d^{3}x\,\Delta\phi_{x}\frac{\delta\Lambda}{\delta\phi_{x}}=\kappa^{\prime}~, \label{2Constr} \end{equation} where $\delta/\delta\phi_{x}$ denotes the functional derivative and $\kappa^{\prime}$ is a constant independent of $\phi$. \paragraph*{Entropy maximization} We seek the transition probability distribution $P\left[ \phi^{\prime} |\phi\right] $ that maximizes the relative entropy \begin{equation} S\left[ P,Q\right] =-\int D\phi^{\prime}\,P\left[ \phi^{\prime} |\phi\right] \log\frac{P\left[ \phi^{\prime}|\phi\right] }{Q\left[ \phi^{\prime}|\phi\right] }~ \label{Entropy} \end{equation} \qquad subject to the constraints (\ref{1Constr}), (\ref{2Constr}), and normalization. For $Q[\phi^{\prime}|\phi]=1$ the resulting distribution is Gaussian, \[ P\left[ \phi^{\prime}|\phi\right] =\frac{1}{\zeta}\exp\left[ -\int d^{3}x\left( \frac{\alpha_{x}}{2}\left( \Delta\phi_{x}\right) ^{2} -\alpha^{\prime}\frac{\delta\Lambda}{\delta\phi_{x}}\Delta\phi_{x}\right) \right] ~, \] where $\alpha_{x}$ and $\alpha^{\prime}$ are Lagrange multipliers, and $\zeta$ is a normalization constraint. Since by translation invariance we had $\kappa_{x}=\kappa$, the corresponding multipliers $\alpha_{x}$ must also be independent of $x$ so that $\alpha_{x}=\alpha$. Furthermore, since both the potential $\Lambda$ and the constant $\kappa^{\prime}$ are so far unspecified we can, without loss of generality, absorb $\alpha^{\prime}$ into $\Lambda$ which amounts to setting $\alpha^{\prime}=1$. The resulting transition probability is \begin{equation} P\left[ \phi^{\prime}|\phi\right] =\frac{1}{Z}\exp\left[ -\frac{\alpha} {2}\int d^{3}x\left( \Delta\phi_{x}-\frac{1}{\alpha}\frac{\delta\Lambda }{\delta\phi_{x}}\right) ^{2}\right] \label{trans prob} \end{equation} where $Z$ is a new normalization constant. In eq.(\ref{trans prob}) we see that $\kappa\rightarrow0$ is recovered as $\alpha\rightarrow\infty$. \paragraph*{Drift and fluctuations} The transition probability (\ref{trans prob}) shows that a small change $\Delta\phi_{x}$ can be written as an expected drift plus a fluctuation, $\Delta\phi_{x}=\left\langle \Delta\phi_{x}\right\rangle +\Delta w_{x}$. The expected drift is given by \begin{equation} \left\langle \Delta\phi_{x}\right\rangle =\int D\phi^{\prime}\Delta\phi _{x}P\left[ \phi^{\prime}|\phi\right] =\frac{1}{\alpha}\frac{\delta\Lambda }{\delta\phi_{x}}~. \label{drift} \end{equation} The expected fluctuations are such that \begin{equation} \left\langle \Delta w_{x^{\prime}}\right\rangle =0\quad\text{and} \quad\left\langle \Delta w_{x}\Delta w_{x^{\prime}}\right\rangle =\frac {1}{\alpha}\delta_{xx^{\prime}}~, \label{fluct} \end{equation} where $\delta_{xx^{\prime}}=\delta(x-x^{\prime})$. Since $\Delta w_{x} \sim\alpha^{-\frac{1}{2}}$ while $\left\langle \Delta\phi_{x}\right\rangle \sim\alpha^{-1}$ we see that for large $\alpha$ the fluctuations dominate the dynamics. \paragraph*{Entropic Time} In ED time is introduced as a book-keeping device to keep track of the accumulation of small changes. 
This involves introducing a notion of instants that are distinct and ordered, and defining the interval or duration between them. For details see \cite{Caticha 2010}\cite{Caticha 2010b}. The result is that if $\rho_{t}[\phi]$ refers to a probability distribution at a given instant, which we label $t$, then entropic time is constructed by defining the \emph{next} instant, labelled $t^{\prime}$, in terms of a distribution $\rho_{t^{\prime}}[\phi^{\prime}]$ given by \begin{equation} \rho_{t^{\prime}}\left[ \phi^{\prime}\right] =\int D\phi\,P\left[ \phi^{\prime}|\phi\right] \rho_{t}\left[ \phi\right] \label{Inst2} \end{equation} where $P\left[ \phi^{\prime}|\phi\right] $ is given by (\ref{trans prob}). This definition readily lends itself to an iterative process in which time is constructed instant by instant: $\rho_{t^{\prime}}$ is constructed from $\rho_{t}$, $\rho_{t^{\prime\prime}}$ is constructed from $\rho_{t^{\prime}}$, and so on. This process defines the dynamics. It remains to specify the interval $\Delta t$ between two successive instants $t$ and $t^{\prime}$ and the idea is captured by Wheeler's slogan: \emph{time is defined so that motion }(or, in our case, the evolution of the fields)\emph{\ looks simple}. For small changes the dynamics is dominated by the fluctuations, eq.(\ref{fluct}). It is therefore convenient to define duration so that the fluctuations are simple. Let \begin{equation} \alpha=\frac{1}{\eta\Delta t}\quad\text{so that}\quad\left\langle \Delta w_{x}\Delta w_{x^{\prime}}\right\rangle =\eta\Delta t\,\delta_{xx^{\prime}}\,, \label{alpha fluct} \end{equation} where $\eta$ is a constant (which will eventually be regraduated into $\hbar$) that fixes the units of time relative to those of $\phi$. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Thus, just as in Newtonian mechanics time is defined so that a free particle travels equal distances in equal times, in the ED of fields time is defined so that the fields undergo equal fluctuations in equal times. The translation invariance ($\alpha_{x}=\alpha$) guarantees that time flows at the same rate everywhere. \paragraph*{The information geometry of configuration space} To each point $\phi\in\mathcal{C}$ we can associate a probability distribution $P[\phi^{\prime}|\phi]$. Therefore $\mathcal{C}$ is a statistical manifold and up to an arbitrary global scale factor its geometry is uniquely determined by the information metric, \begin{equation} \gamma_{xx^{\prime}}=C\int D\phi^{\prime}\,P[\phi^{\prime}|\phi]\frac {\delta\log P[\phi^{\prime}|\phi]}{\delta\phi_{x}}\frac{\delta\log P[\phi^{\prime}|\phi]}{\delta\phi_{x^{\prime}}}~, \label{gamma C} \end{equation} where $C$ is an arbitrary positive constant. (See \emph{e.g.},\cite{Caticha 2012}.) For short steps ($\alpha\rightarrow\infty$) a straightforward substitution of (\ref{trans prob}) using (\ref{alpha fluct}) yields \begin{equation} \gamma_{xx^{\prime}}=\frac{C}{\eta\Delta t}\delta_{xx^{\prime}}~. \label{gamma} \end{equation} We see that as $\Delta t\rightarrow0$ we have $\gamma_{xx^{\prime}} \rightarrow\infty$. The reason is that as the distributions $P[\phi^{\prime }|\phi]$ and $P[\phi^{\prime}|\phi+\Delta\phi]$ become more sharply peaked it becomes increasingly easier to distinguish one from the other which means the information distance between them diverges. 
To define a distance that remains meaningful for arbitrarily small $\Delta t$ it is convenient to choose $C=\eta\Delta t$. Thus the metric $\gamma_{xx^{\prime}}=\delta_{xx^{\prime}}$ of the configuration space $\mathcal{C}$ is a straightforward generalization of the metric $\delta_{ij}$ of Euclidean space and the distance $\Delta\ell$ between two slightly different configurations $\phi$ and $\phi+\Delta\phi$ is \begin{equation} \Delta\ell^{2}=\int d^{3}xd^{3}x^{\prime}\,\delta_{xx^{\prime}}\Delta\phi _{x}\Delta\phi_{x^{\prime}}=\int d^{3}x\,(\Delta\phi_{x})^{2}~. \label{distance} \end{equation} In \cite{Caticha 2012} this choice of distance was merely postulated; here it is justified from information geometry, the assumptions implicit in (\ref{prior}), (\ref{1Constr}), and translation invariance. \paragraph*{The Fokker-Planck equation} The dynamics expressed by the integral equation (\ref{Inst2}) can be rewritten in differential form. The result is a functional Fokker-Planck equation (see \emph{e.g.}, \cite{Caticha 2010b}) that takes the form of a continuity equation, \begin{equation} \partial_{t}\rho_{t}\left[ \phi\right] =-\int d^{3}x\frac{\delta}{\delta \phi_{x}}\left( \rho_{t}\left[ \phi\right] v_{x}\left[ \phi\right] \right) ~. \label{FP} \end{equation} (The combination $\int d^{3}x\frac{\delta}{\delta\phi_{x}}$ is the functional version of the divergence.) The velocity $v_{x}\left[ \phi\right] $ with which probabilities propagate in configuration space is called the current velocity. It is given by \begin{equation} v_{x}\left[ \phi\right] =b_{x}\left[ \phi\right] +u_{x}\left[ \phi\right] \ , \label{CurrVel1} \end{equation} where \begin{equation} b_{x}\left[ \phi\right] =\frac{\left\langle \Delta\phi_{x}\right\rangle }{\Delta t}=\eta\frac{\delta\Lambda}{\delta\phi_{x}}\quad\text{and}\quad u_{x}\left[ \phi\right] =-\eta\frac{\delta\log\rho^{1/2}}{\delta\phi_{x}}~, \end{equation} are the drift and the osmotic velocities. The current velocity $v_{x}\left[ \phi\right] $ can be written as the functional gradient of a scalar functional $\Phi$, \begin{equation} v_{x}\left[ \phi\right] =\frac{\delta\Phi}{\delta\phi_{x}}\text{\quad where\quad}\frac{\Phi\lbrack\phi]}{\eta}=\Lambda\left[ \phi\right] -\log \rho^{1/2}\left[ \phi\right] ~. \label{CurrVel2} \end{equation} Incidentally, it is convenient to introduce a functional $H[\rho,\Phi]$ on $\mathcal{C}$ in order to write the Fokker-Planck equation as a functional derivative in $\mathcal{C}$, \begin{equation} \partial_{t}\rho\left[ \phi\right] =\frac{\Delta H\left[ \rho,\Phi\right] }{\Delta\Phi\lbrack\phi]}~. \label{Hamilton a} \end{equation} (For a useful and brief description of functional calculus in configuration space see \cite{Hall et al 2003}.) Using (\ref{FP}), equation (\ref{Hamilton a}) is easily integrated. The result is \begin{equation} H\left[ \rho,\Phi\right] =\int D\phi\,\int d^{3}x\frac{1}{2}\rho\left( \frac{\delta\Phi}{\delta\phi_{x}}\right) ^{2}+F[\rho]~, \label{Hamiltonian} \end{equation} where $F[\rho]$ is an integration constant. In what follows we will assume that $F[\rho]$ is independent of time. We emphasize that eq.(\ref{Hamilton a}) does not reflect a new assumption or a new dynamical principle; it is merely a rewriting of (\ref{FP}). \section{Non-dissipative Diffusion} The Fokker-Planck equation (\ref{FP}) describes a standard diffusion process, it does not describe quantum systems. 
As discussed in \cite{Caticha 2010}\cite{Caticha et al 2014b} the solution to this problem is to modify the constraints: instead of $\Lambda\lbrack\phi]$ being an externally prescribed potential we allow it to represent a dynamical field on $\mathcal{C}$. The appropriate constraint consists in demanding that at each instant of time the potential $\Lambda$, or equivalently the related quantity $\Phi$ in (\ref{CurrVel2}), is updated in such a way that a certain functional -- that we will call \textquotedblleft energy\textquotedblright\ -- remains constant. It turns out that the appropriate \textquotedblleft energy\textquotedblright \ is the functional $H[\rho,\Phi]$ given in eq.(\ref{Hamiltonian}). Thus, the dynamics consists of the coupled non-dissipative evolution of $\rho\lbrack \phi]$ and $\Phi\lbrack\phi]$. \paragraph*{The ensemble Hamiltonian and its conservation} To impose a non dissipative diffusion we demand the conservation of the functional $H\left[ \rho,\Phi\right] $, \begin{equation} \frac{dH\left[ \rho,\Phi\right] }{dt}=\int D\phi\,\left[ \frac{\Delta H}{\Delta\Phi}\partial_{t}\Phi+\frac{\Delta H}{\Delta\rho}\partial_{t} \rho\right] =0~. \end{equation} Using (\ref{Hamilton a}) \begin{equation} \frac{dH\left[ \rho,\Phi\right] }{dt}=\int D\phi\,\left[ \partial_{t} \Phi+\frac{\Delta H}{\Delta\rho}\right] \partial_{t}\rho=0~. \label{HCons} \end{equation} This condition must be satisfied at all times $t$ and for arbitrary choices of the initial values of $\rho$ and $\Phi$. From (\ref{FP}) this means that (\ref{HCons}) must hold for arbitrary choices of $\partial_{t}\rho$ which implies that the integrand of (\ref{HCons}) must vanish. Therefore, \begin{equation} \partial_{t}\Phi=-\frac{\Delta H}{\Delta\rho}\text{ \ \ \ and \ \ } \partial_{t}\rho=\frac{\Delta H}{\Delta\Phi}~, \label{Ham eqs} \end{equation} which we recognize as a functional form of Hamilton's equations with the conserved functional $H[\rho,\Phi]$ playing the role of Hamiltonian. \paragraph*{The Schr\"{o}dinger functional equation} The Fokker-Planck equation together with the conservation of $H\left[ \rho,\Phi\right] $ leads to a Hamiltonian structure regardless of the choice of $F[\rho]$. However, as discussed in \cite{Caticha et al 2014b}, quantum theory is reproduced only for a special choice of $F[\rho]$, \begin{equation} F[\rho]=\int D\phi\,\int d^{3}x\left[ \rho V\left( \phi_{x},\nabla\phi _{x}\right) +\frac{\xi}{\rho}\left( \frac{\delta\rho}{\delta\phi_{x} }\right) ^{2}\right] ~. \label{F[rho]} \end{equation} In the first term $V\left( \phi_{x},\nabla\phi_{x}\right) $ is a potential energy density to be discussed further below. The second term is usually called the \textquotedblleft quantum\textquotedblright\ potential. It is the functional trace of the Fisher information and its origin in information geometry is discussed in \cite{Caticha et al 2014b}. $\xi$ is a positive constant that controls the effect of the quantum potential. As a matter of convenience we can combine the two variables $\rho\lbrack\phi]$ and $\Phi\lbrack\phi]$ into a single complex variable, $\Psi_{k}[\phi ]=\rho^{1/2}e^{ik\Phi/\eta}$, where $k$ is an arbitrary positive constant. 
The pair of Hamilton's equations (\ref{Ham eqs}) can then be combined into a single non-linear equation for the wave functional $\Psi_{k}\left[ \phi\right] $, \[ i\frac{\eta}{k}\partial_{t}\Psi_{k}\left[ \phi\right] =\int d^{3}x\left[ -\frac{\eta^{2}}{2k^{2}}\frac{\delta^{2}}{\delta\phi_{x}^{2}}+\left( \frac{\eta^{2}}{2k^{2}}-4\xi\right) \frac{1}{|\Psi_{k}|}\frac{\delta^{2} |\Psi_{k}|}{\delta\phi_{x}^{2}}+V\right] \Psi_{k}\left[ \phi\right] ~. \] Different choices of the arbitrary $k$ lead to different but equivalent descriptions of the same theory. Let us therefore take advantage of the arbitrariness of $k$ and choose the simplest and most convenient description. This is achieved for the value $\hat{k}=(\eta^{2}/8\xi)^{1/2}$ that leads to the linear Schr\"{o}dinger equation, \begin{equation} i\hbar\partial_{t}\Psi\left[ \phi\right] =\int d^{3}x\left[ -\frac {\hbar^{2}}{2}\frac{\delta^{2}}{\delta\phi_{x}^{2}}+V\right] \Psi\left[ \phi\right] ~, \end{equation} where we have identified $\eta/\hat{k}=\hbar$ and dropped the index $k$ so that $\Psi=\rho^{1/2}e^{i\Phi/\hbar}$. This is quantum field theory in the Schr\"{o}dinger representation and one can now proceed in the usual way to introduce a Hilbert space, operators, and all the standard machinery of quantum mechanics. For example, the commutator of the field $\phi_{x}$ and its conjugate momentum is \[ \lbrack\phi_{x},\frac{\hbar}{i}\frac{\delta}{\delta\phi_{x^{\prime}}} ]=i\hbar\delta_{xx^{\prime}}~. \] At this point the potential $V(\phi_{x},\nabla\phi_{x})$ is essentially arbitrary. A useful form is obtained by doing a Taylor expansion about weak fields and gradients and then imposing the rotational and Lorentz symmetries required by the experimental evidence, \begin{equation} V(\phi_{x},\nabla\phi_{x})=\frac{1}{2}(\nabla\phi_{x})^{2}+\frac{1}{2} m^{2}\phi_{x}^{2}+\lambda^{\prime}\phi_{x}^{3}+\lambda^{\prime\prime}\phi _{x}^{4}+\ldots\label{potential} \end{equation} The various coefficients represent mass and other coupling constants. We conclude that the ED framework reproduces the Schr\"{o}dinger representation of the standard relativistic quantum theory of scalar fields.\cite{Jackiw 1989} \section{Discussion} Setting $\lambda^{\prime}=\lambda^{\prime\prime}=\ldots=0$ the Schr\"{o}dinger equation, \begin{equation} i\hbar\partial_{t}\Psi=\frac{1}{2}\int d^{3}x\left[ -\hbar^{2}\frac {\delta^{2}}{\delta\phi_{x}^{2}}+\left( \partial\phi_{x}\right) ^{2} +m^{2}\phi_{x}^{2}\right] \Psi~, \end{equation} reproduces the quantum theory of free real scalar fields \cite{Jackiw 1989} and all the standard results can now be obtained using conventional methods (see \emph{e.g.}, \cite{Long Shore 1998}). For example, choosing units such that $\hbar=c=1$, a standard calculation of the ground state gives a Gaussian functional, \begin{equation} \Psi_{0}\left[ \phi\right] =\frac{1}{Z_{0}^{1/2}}e^{-iE_{0}t}\exp\left[ -\frac{1}{2}\int d^{3}x\int d^{3}y\,\,\phi\left( \vec{x}\right) G\left( \vec{x},\vec{y}\right) \phi\left( \vec{y}\right) \right] ~, \end{equation} where \begin{equation} G(\vec{x},\vec{y})=\int\frac{d^{3}k}{(2\pi)^{3}}\omega_{k}\,e^{i\vec{k} \cdot(\vec{x}-\vec{y})}~,\quad\text{with\quad}\omega_{k}=(\vec{k}^{2} +m^{2})^{1/2}~. \end{equation} The energy of the ground state is \begin{equation} E_{0}=\left\langle H\right\rangle _{0}=\frac{1}{2}\int d^{3}x\,G\left( \vec{x},\vec{x}\right) =\int d^{3}x\int\frac{d^{3}k}{\left( 2\pi\right) ^{3}}\frac{1}{2}\omega_{k} \end{equation} is both infrared and ultraviolet divergent. 
The vacuum expectation value of the field at any point $\vec{x}$ vanishes while its variance diverges, \begin{equation} \left\langle \phi\left( \vec{x}\right) \right\rangle =0\quad\text{and} \quad\text{Var}\left[ \phi\left( \vec{x}\right) \right] =\langle\phi ^{2}\left( \vec{x}\right) \rangle_{0}=\int\frac{d^{3}k}{\left( 2\pi\right) ^{3}}\frac{1}{2\omega_{k}}~. \end{equation} Note, however, that what diverges here are not the physical fields but the uncertainty in our predictions. ED recognizes the role of incomplete information: the theory is completely unable to predict the field value at a sharply localized point. The theory does, however, offer meaningful predictions for other quantities. For example, the equal time correlations between two field variables $\phi\left( \vec{x}\right) $ and $\phi\left( \vec{y}\right) $ are \cite{Long Shore 1998}, \begin{equation} \left\langle \phi\left( \vec{x}\right) \phi\left( \vec{y}\right) \right\rangle _{0}=\int\frac{d^{3}k}{\left( 2\pi\right) ^{3}}\frac {e^{i\vec{k}\cdot\left( \vec{x}-\vec{y}\right) }}{2\omega_{k}}=\frac{m} {4\pi^{2}\left\vert \vec{x}-\vec{y}\right\vert }K_{1}\left( m\left\vert \vec{x}-\vec{y}\right\vert \right) \end{equation} where $K_{1}$ is a modified Bessel function. \paragraph*{Conclusion} Entropic dynamics provides an alternative method of quantization --- entropic quantization. In the ED framework a quantum theory is a non-dissipative diffusion in the configuration space. The entropic quantization of scalar fields yields the standard predictions of quantum field theory. At this early point in the development the advantages of the entropic approach do not lie in any new predictions (at least not yet) but rather in the suitability of the formalism to be extended beyond the domain in which ED is equivalent to the current quantum field theory and in the new insights it offers on matters of interpretation. More specifically, concerning entropic time: In the ED of fields, the field fluctuations provide the clock and entropic time is defined so that field fluctuations are uniform in space and time. Concerning the nature of particles: fields are real, particles are just some peculiar spatial correlations in the field.\ Concerning the divergences: they are the expected consequence of handling incomplete information. Some predictions will be certain, some will be uncertain, and some may even be infinitely uncertain. \paragraph*{Acknowledgments} We would like to thank D. Bartolomeo, C. Cafaro, N. Caticha, S. DiFranzo, A. Giffin, P. Goyal, D.T. Johnson, K. Knuth, S. Nawaz, M. Reginatto, C. Rodr\'{\i}guez, and J. Skilling for many discussions on entropy, inference and quantum mechanics. \end{document}
arXiv
IFN-γ and TNF-α drive a CXCL10+ CCL2+ macrophage phenotype expanded in severe COVID-19 lungs and inflammatory diseases with tissue inflammation Fan Zhang1,2,3,4,5, Joseph R. Mears1,2,3,4,5, Lorien Shakib6, Jessica I. Beynor1,2,3,4,5, Sara Shanaj7, Ilya Korsunsky1,2,3,4,5, Aparna Nathan1,2,3,4,5, Accelerating Medicines Partnership Rheumatoid Arthritis and Systemic Lupus Erythematosus (AMP RA/SLE) Consortium, Laura T. Donlin6,7 na1 & Soumya Raychaudhuri ORCID: orcid.org/0000-0002-1901-82651,2,3,4,5,8 na1 Immunosuppressive and anti-cytokine treatment may have a protective effect for patients with COVID-19. Understanding the immune cell states shared between COVID-19 and other inflammatory diseases with established therapies may help nominate immunomodulatory therapies. To identify cellular phenotypes that may be shared across tissues affected by disparate inflammatory diseases, we developed a meta-analysis and integration pipeline that models and removes the effects of technology, tissue of origin, and donor that confound cell-type identification. Using this approach, we integrated > 300,000 single-cell transcriptomic profiles from COVID-19-affected lungs and tissues from healthy subjects and patients with five inflammatory diseases: rheumatoid arthritis (RA), Crohn's disease (CD), ulcerative colitis (UC), systemic lupus erythematosus (SLE), and interstitial lung disease. We tested the association of shared immune states with severe/inflamed status compared to healthy control using mixed-effects modeling. To define environmental factors within these tissues that shape shared macrophage phenotypes, we stimulated human blood-derived macrophages with defined combinations of inflammatory factors, emphasizing in particular antiviral interferons IFN-beta (IFN-β) and IFN-gamma (IFN-γ), and pro-inflammatory cytokines such as TNF. We built an immune cell reference consisting of > 300,000 single-cell profiles from 125 healthy or disease-affected donors from COVID-19 and five inflammatory diseases. We observed a CXCL10+ CCL2+ inflammatory macrophage state that is shared and strikingly abundant in severe COVID-19 bronchoalveolar lavage samples, inflamed RA synovium, inflamed CD ileum, and UC colon. These cells exhibited a distinct arrangement of pro-inflammatory and interferon response genes, including elevated levels of CXCL10, CXCL9, CCL2, CCL3, GBP1, STAT1, and IL1B. Further, we found this macrophage phenotype is induced upon co-stimulation by IFN-γ and TNF-α. Our integrative analysis identified immune cell states shared across inflamed tissues affected by inflammatory diseases and COVID-19. Our study supports a key role for IFN-γ together with TNF-α in driving an abundant inflammatory macrophage phenotype in severe COVID-19-affected lungs, as well as inflamed RA synovium, CD ileum, and UC colon, which may be targeted by existing immunomodulatory therapies. Tissue inflammation is a unifying feature across disparate diseases. While tissue- and disease-specific factors shape distinct inflammatory microenvironments, seemingly unrelated diseases can respond to the same therapy. For example, anti-tumor necrosis factor (TNF) therapies have revolutionized treatment for joint inflammation in autoimmune rheumatoid arthritis (RA) [1], while patients with intestinal inflammation due to Crohn's disease (CD) and ulcerative colitis (UC), collectively known as inflammatory bowel disease (IBD), also respond to anti-TNF medications [2]. 
Here, we posit that the deconstruction of tissues to the level of individually characterized cells and subsequent integration of these cells from various types of inflamed tissues could provide a platform to identify shared pathologic features across diseases and provide rationale for repurposing medications in outwardly dissimilar conditions. Recent studies have detailed features of local tissue inflammation and immune dysfunction in COVID-19 and related diseases caused by SARS and MERS coronaviruses [3]. Consensus is building that extensive unchecked inflammation involving so-called "cytokine storm" is a driver of severe late-stage disease. A single-cell study of bronchoalveolar lavage fluid (BALF) in intubated COVID-19 patients identified two inflammatory macrophage subsets—one characterized by CCL2, CCL3, and CXCL10 expression and a second by FCN1 and S100A8—as potential mediators of pathology in this late-stage disease [4]. The presence of these macrophage subsets in the lung correlated with elevated circulating cytokines and extensive damage to the lung and vascular tissue. Reports looking at peripheral blood from large numbers of COVID-19 patients have consistently documented lymphopenia (reduced lymphocyte frequency) paired with increased levels of CD14+ monocytes and inflammatory cytokines, such as IL1B, TNF-α, IFN-α, and IFN-γ [5,6,7]. These factors are ineffective in lowering viral load while possibly contributing to cytokine release syndrome (CRS) [7]. Together, these studies indicate the importance of uncovering the full extent of cell states present in COVID-19 patients including within affected tissues, and in particular among macrophages. Further, the extent to which these cell states are shared between COVID-19 and other inflammatory diseases and their disease association may further clarify disease mechanisms and precisely define therapeutic targets. Macrophages are pervasive throughout the body and pivotal to tissue homeostasis, where they tailor their function to the parenchymal functions of each tissue type. In inflammation, tissue-resident macrophages and infiltrating monocytes are activated not only by factors from the unique tissue microenvironment, but also by disease-associating factors such as byproducts of deregulated tissue homeostasis, tissue damage, gene expression differences due to genetic variants, immune reactions, and in some cases, infecting pathogens. The unprecedented plasticity and robust reactivity of macrophages and monocytes generates a spectrum of phenotypes yet to be fully defined in human disease that mediate clearance of noxious elements, but in some cases, such as in cytokine storm, aggravate disease pathology. These phenotypes include a range of pro-inflammatory and anti-microbial states that secrete key cytokines (e.g., TNF and IL-1B) and chemokines (e.g., CXCL10 and CXCL11) and other functional states geared towards debris clearance, dampening inflammation, and tissue reconstruction, as well as a variety of intermediate states [8,9,10,11]. Meta-analysis of reactive macrophage phenotypes in inflamed tissues across diseases may further refine our understanding of the complexity of human macrophage functions, identifying subsets potentially shared across immune disorders, and thereby providing a promising route towards repurposing therapeutic strategies. Single-cell RNA-seq (scRNA-seq) has provided an opportunity to interrogate inflamed tissues and identify expanded and potentially pathogenic immune cell types [12]. 
We recently defined a distinct CD14+ IL1B+ pro-inflammatory macrophage population that is markedly expanded in RA compared to osteoarthritis (OA), a non-inflammatory disease [13, 14]. Likewise, scRNA-seq studies on inflamed colonic tissues have identified inflammatory macrophage and fibroblast phenotypes with high levels of Oncostatin M (OSM) signaling factors that are associated with resistance to anti-TNF therapies [15]. Only very recently, developments in computational methods have made it possible to meta-analyze an expansive number of cells across various tissue states, while mitigating experimental and cohort-specific artifacts [16,17,18,19,20,21,22], therein assessing shared and distinct cell states in disparate inflamed tissues. To define the key shared immune cell compartments between inflammatory diseases with COVID-19, we meta-analyzed and integrated tissue-level single-cell profiles from five inflammatory diseases and COVID-19. We created an immune cell reference consisting of 307,084 single-cell profiles from 125 donor samples from RA synovium, systemic lupus erythematosus (SLE) kidney, UC colon, CD ileum, interstitial lung disease, and COVID-19 BALF. This single-cell reference represents comprehensive immune cell types from different disease tissues with different inflammation levels, which can be used to investigate inflammatory diseases and their connections with COVID-19 in terms of immune cell responses. Using our meta-dataset reference, we identified major immune cell lineages including macrophages, dendritic cells, T cells, B cells, NK cells, plasma cells, mast cells, and cycling lymphocytes. Among these, we found two inflammatory CXCL10+ CCL2+ and FCN1+ macrophage states that are shared between COVID-19 and several of the inflammatory diseases we analyzed. To understand the factors driving these phenotypes, we stimulated human blood-derived macrophages with eight different combinations of inflammatory disease-associated cytokines and tissue-associating stromal cells. We demonstrated that the CXCL10+ CCL2+ macrophages from severe COVID-19 lungs share a transcriptional phenotype with macrophages stimulated by TNF-α plus IFN-γ. Notably, the other two conditions wherein these macrophages are most abundant are RA and CD. As patients with RA and CD show response to anti-TNF therapies, this finding supports the approach of identifying shared cellular states in unrelated inflamed tissues to define shared responses to medications. Furthermore, janus kinase (JAK) inhibitors have also proved effective in RA, presumably in large part through targeting IFN-γ responses [8, 23, 24]. Our data collectively support the potential efficacy of JAK inhibitors and anti-TNF therapies in inflammatory macrophage responses in COVID-19 due to cellular phenotype associations with select inflammatory tissue diseases already proven to respond to these medications. 
Integration of scRNA-seq profiles from multiple datasets
scRNA-seq data collection, remapping, and aggregation
To build a multi-tissue immune cell reference, we obtained the raw FASTQ files and raw count matrices from the following publicly available scRNA-seq datasets: RA synovial cells from dbGaP (Zhang et al., 2019; phs001457.v1.p1) [13] and dbGaP (Stephenson et al., 2018; phs001529.v1.p1) [25], SLE kidney cells from dbGaP (Arazi et al., 2019; phs001457.v1.p1) [26], UC colon cells from Single Cell Portal (Smillie et al., 2019; SCP259) [15], CD ileum cells from GEO (Martin et al., 2019; GSE134809) [27], interstitial lung disease cells from GEO (Reyfman et al., 2019; GSE122960) [28], and COVID-19 and healthy BALF cells from GEO (Liao et al., 2020; GSE145926) [4]. We also used the datasets from Grant et al. (GSE155249) [29] and Xue et al. (GSE47189) [11] for additional validation. For the FASTQ files that we obtained, we used kallisto [30] to map the raw reads against a single kallisto index generated from the GRCh38 Ensembl v100 FASTA files. We pseudo-aligned the FASTQ files to this reference, corrected barcodes, sorted BUS files, and counted unique molecular identifiers (UMIs) to generate UMI-count matrices. We aggregated all the cell barcodes from 125 donor samples into one matrix. We performed consistent QC to remove cells that expressed fewer than 500 genes or in which more than 20% of UMIs mapped to mitochondrial genes, resulting in 307,084 cells in total. The numbers of donor samples and cells that passed QC for each tissue source, disease status, and technology, together with clinical data, are shown in Additional file 1: Table S1.
Normalization, scaling, and feature selection
We aggregated all samples over the 17,054 genes shared across datasets. We then normalized each cell to a total of 10,000 counts and log-transformed the normalized data. We next selected the top 1,000 most highly variable genes based on dispersion within each donor sample and pooled these genes to form a variable gene set. Based on the pooled highly variable genes, we then scaled the aggregated data matrix to have mean 0 and variance 1. We normalized the expression matrix using the L2 norm.
Dimensionality reduction and batch effect correction
To minimize the effect of combining multiple datasets with different cell numbers during an unbiased scRNA-seq data integration, we performed weighted principal component analysis (PCA) and used the first 20 weighted PCs for follow-up analysis. The weights were set so that they sum to the same value for the cells of each separate single-cell dataset, ensuring that each dataset contributed equally to the analysis. For the all-cell-type integration, we corrected batch effects on three different levels (sequencing technology, tissue source, and donor sample) simultaneously using Harmony [16]. We used default parameters except that we specified theta = 2 for each batch variable, max.iter.cluster = 30, and max.iter.harmony = 20. For Harmony batch correction, we used the same weights as in the weighted PCA. For the macrophage-only integration, we corrected for donor effects in the 10X data and for dataset effects in the CEL-seq2 data, since each donor sample generated with CEL-seq2 contained fewer than 100 cells. As output, we obtained batch-corrected PC embeddings in which the effects of the different single-cell datasets and donors are removed in the low-dimensional PC space.
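The workflow above can be approximated with standard R tooling. The following is a minimal, hypothetical sketch (not the released analysis code referenced under availability of data and materials), assuming a genes-by-cells count matrix counts with gene names as row names and a per-cell metadata data frame meta containing technology, tissue, and sample columns. It uses ordinary PCA as a stand-in for the weighted PCA described above and the HarmonyMatrix() interface of the harmony R package.

```r
## Minimal sketch of normalization, variable-gene selection, scaling, PCA, and Harmony.
## `counts` (genes x cells) and `meta` (one row per cell) are assumed, hypothetical inputs.
library(irlba)
library(harmony)

counts  <- as.matrix(counts)                              # dense for simplicity in this sketch
lognorm <- log1p(t(t(counts) / colSums(counts)) * 1e4)    # normalize each cell to 10,000 counts

# highly variable genes by dispersion (variance/mean); the full pipeline selects
# the top 1,000 per donor sample and pools them across samples
dispersion <- apply(lognorm, 1, var) / (rowMeans(lognorm) + 1e-8)
hvg <- names(sort(dispersion, decreasing = TRUE))[1:1000]

# scale each gene to mean 0, variance 1, then compute 20 PCs over cells
scaled     <- t(scale(t(lognorm[hvg, ])))
pca        <- irlba::prcomp_irlba(t(scaled), n = 20)
embeddings <- pca$x                                       # cells x 20 PCs

# correct technology, tissue, and sample effects simultaneously with Harmony
harmonized <- harmony::HarmonyMatrix(
  data_mat  = embeddings,
  meta_data = meta,
  vars_use  = c("technology", "tissue", "sample"),
  do_pca    = FALSE,
  theta     = c(2, 2, 2),
  max.iter.harmony = 20,
  max.iter.cluster = 30
)
```

In the actual analysis, the dataset-specific cell weights described above would additionally enter the PCA and Harmony steps.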
Quantitative evaluation of batch correction and dataset integration
Variance explained from different sources: To quantitatively measure the mixing of batches after correction, we estimated, for each of the first ten principal component embeddings, the proportion of variance explained by different sources. We report the proportion of variance explained by the originally pre-defined immune cell type, tissue origin, technology, and donor sample. We used the R package limma [31] to fit the model and ANOVA to compute the percentage of variance explained: $$ \mathrm{principal\ component}\sim \mathrm{celltype}+\mathrm{tissue}+\mathrm{technology}+\mathrm{sample}. $$ LISI score: In parallel, we used the LISI (local inverse Simpson's index) metric to measure the mixing of batch labels based on local neighbors chosen at a specific perplexity [16, 22]. Specifically, we built Gaussian distributions of neighborhoods and computed the local distributions of batch probabilities p(b) using perplexity 30 on the first 20 principal components, where B is the number of batches. We then calculated the inverse Simpson's index: $$ 1/{\sum}_{b=1}^Bp{(b)}^2. $$ An iLISI (integration LISI) score ranges from 1.0, which denotes no mixing, to B (the maximum score is the total number of levels in the categorical batch variable), where higher scores indicate better mixing of batches. Here, the batch can be the tissue source, donor sample, or sequencing technology. We also calculated the cLISI (cell-type LISI), which applies the same formulation to the pre-defined cell-type annotations instead of batch labels and thereby measures the accuracy of the integration. An accurate embedding has a cLISI close to 1 for every cell neighborhood, reflecting separation of distinct cell types.
Graph-based clustering
We then applied unbiased graph-based clustering (Louvain [32]) on the top 20 batch-corrected PCs at several resolution levels (0.2, 0.4, 0.6, 0.8, 1.0). We chose 0.4 as the final resolution because it yielded the most biologically interpretable clusters. We then performed dimensionality reduction for visualization using UMAP [33].
Pseudo-bulk differential expression analysis
To identify robust single-cell cluster marker genes that are shared between diseases, we performed a pseudo-bulk analysis by summing the raw UMI counts for each gene across cells from the same donor sample, tissue source, and cluster assignment. We modeled the raw counts with a negative binomial (NB) distribution and fitted a generalized linear model (GLM) for each gene accounting for tissue, sample, and nUMI using DESeq2 [34]. We also computed the AUC and P using the Wilcoxon rank-sum test by comparing pseudo-bulk samples from one cluster to the others. We used several criteria to call statistically significant marker genes: (1) GLM-β, (2) fold change, (3) AUC, and (4) the Wilcoxon rank-sum test with Bonferroni-corrected P (threshold 10−5, 0.05/5,000 tested highly variable genes). We tested all genes that were detected with non-zero UMI counts in more than 100 cells.
Identification of major immune cell-type clusters
We carefully annotated each identified immune cell-type cluster in two ways. First, we mapped the original published annotation labels [4, 13, 15, 26, 27] to our UMAP embeddings when applicable. We were able to reproduce the original cell-type subsets in our cross-disease integrative analysis.
Second, we annotated the identified clusters using cell-type lineage marker genes: T cells (CD3D), NK cells (NCAM1), B cells (MS4A1), plasma cells (MZB1), macrophages (FCGR3A/CD14), dendritic cells (DCs, CD1C), mast cells (TPSAB1), and cycling cells (MKI67).
Cell culture for human blood-derived macrophages and synovial fibroblasts
We obtained human leukocyte-enriched whole blood samples from 4 healthy blood donors from the New York Blood Center and purified peripheral blood mononuclear cells (PBMC) from each using Ficoll gradient centrifugation. We isolated CD14+ monocytes from each sample using human CD14 microbeads (Miltenyi Biotec) and differentiated these cells into blood-derived macrophages for 1 day at 37 °C in macrophage colony-stimulating factor (M-CSF; 10 ng/mL) (PeproTech) and RPMI 1640 medium (Corning) supplemented with 10% defined fetal bovine serum (FBS) (HyClone), 1% penicillin-streptomycin (Thermo Fisher Scientific), and 1% l-glutamine (Thermo Fisher Scientific) in a 6-well plate at a concentration of 1.2 million cells/mL. In parallel, we obtained human synovial fibroblasts derived from deidentified synovial tissues from RA patients undergoing arthroplasty (HSS IRB 14-033). Two unique fibroblast lines were used, each paired with two distinct blood-derived macrophage donor samples. We cultured fibroblasts in alpha minimum essential medium (α-MEM) (Gibco) supplemented with 10% premium FBS (R&D Systems Inc), 1% penicillin-streptomycin (Thermo Fisher Scientific), and 1% l-glutamine (Thermo Fisher Scientific) for 4 to 6 passages. To create each transwell, we seeded the mesh of polyester chambers with 0.4-μm pores (Corning) either with 200,000 synovial fibroblasts or without fibroblasts and cultured them for 1 day at 37 °C. The following day, we suspended each transwell (3 with fibroblasts and 6 without fibroblasts per donor) above one well of cultured macrophages. Transwells with fibroblasts had a fibroblast-to-macrophage ratio of 1:15. In total, we created 9 wells per donor. Next, we added IFN-β (200 pg/mL), IL-4 (20 ng/mL), TNF-α (20 ng/mL), and/or IFN-γ (5 ng/mL) to each transwell and the underlying plate for each donor, according to the stimulatory condition. All plates were incubated at 37 °C for 19 h.
RNA library preparation and sequencing
We applied a modified version of the staining protocol from CITE-seq, using only TotalSeq™-A hashing antibodies from BioLegend [35]. We harvested macrophages from each well and aliquoted one fifth of the cells, ~ 750,000 cells per condition, for staining in subsequent steps. We washed the cells in filtered labeling buffer (PBS with 1% BSA) and resuspended them in 50 μL of labeling buffer with Human TruStain FcX™ (BioLegend Cat #422302, 5 μL per stain) for 10 min at 4 °C. Next, we added 50 μL of labeling buffer containing a TotalSeq hashtag antibody (1, 2, 4–9, or 12), one hashtag per condition per donor, at a final concentration of 1.6 ng/μL, and incubated for 25 min at 4 °C. We then washed all samples in 2 mL, 1 mL, and 1 mL of labeling buffer, sequentially. We counted the remaining cells using a cellometer (Nexcelom Cellometer Auto 1000) and aliquoted the equivalent of 60,000 cells from each condition into one Eppendorf tube per donor. From here, we filtered the cells through a 40-μm mesh and resuspended them in PBS with 0.04% BSA to a concentration of 643.7 cells/μL. We followed the Chromium Single Cell 3′ v3 kit (10x Genomics) processing instructions and super-loaded 30,000 cells per lane. We used one lane per donor, with 9 conditions multiplexed per donor sample.
After cDNA generation, samples were shipped to the Brigham and Women's Hospital Single Cell Genomics Core for cDNA amplification and sequencing. Pairs of libraries were pooled and sequenced per lane on an Illumina NovaSeq S2 with paired-end 150 base-pair reads.
Processing FASTQ reads into gene expression matrices and cell hashing
We quantified both mRNA and hashing antibody UMI counts. Cell Ranger v3.1.0 was used to process the raw BCL files and produce a final gene-by-cell-barcode UMI count matrix. First, raw BCL files were demultiplexed using cellranger mkfastq with default parameters to generate FASTQ files. These FASTQ files were then aligned to the GRCh38 human reference genome. Gene and antibody reads were quantified simultaneously using cellranger count, and cell barcodes and UMIs were extracted for genes and hashtag antibodies for each run. For quality control of the cells, we first performed mRNA-level cell QC and then hashtag-level QC. For the mRNA-level QC, we removed cells that expressed fewer than 1,000 genes or in which more than 10% of UMIs mapped to mitochondrial genes. For the hashtag QC, we removed cells in which the proportion of UMIs for the most abundant hashing antibody was less than 90%, and cells in which the ratio of the second most abundant to the most abundant antibody was greater than 0.10. After filtering, each cell was assigned a hashing antibody and donor sample based on its most abundant hashing antibody barcode. After QC, we obtained 9,399, 8,775, 4,622, and 3,027 cells for the 4 donor samples. We then normalized the UMI counts of each cell by the total number of UMIs and log-transformed the normalized counts.
Linear modeling for experimental stimulation-specific genes from cell culture single-cell profiles
To more accurately identify gene signatures that are specific to each of the eight stimulatory conditions, we used linear models to test each gene for differential normalized gene expression across contrasts of interest. Specifically, we fit the following model for each gene: $$ \mathrm{gene\_expression}\sim \mathrm{stim}+\left(1\mid \mathrm{sample}\right)+\mathrm{nUMI}, $$ where stim is a categorical variable representing the eight stimuli plus an untreated condition, (1 ∣ sample) is the random effect of the 4 replicate donor samples, and nUMI (number of unique molecular identifiers) is a technical cell-level fixed effect. We obtained the fold change, t-statistic, P value, and Bonferroni-corrected P value for each tested gene in each applied condition. We then generated, for each stimulatory condition, a list of differentially expressed genes with fold change greater than 2 and P below the Bonferroni correction threshold of 10−7 (0.05/(7,000 highly variable genes × 9 conditions)).
Testing integrative macrophage clusters for association with severe/inflamed status
We tested the association of each macrophage cluster with severe/inflamed status compared to healthy with MASC (mixed-effects modeling of associations of single cells) [36].
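Conceptually, this kind of per-cluster test can be sketched as a mixed-effects logistic regression in lme4 (a hypothetical illustration, not the MASC implementation itself; the exact model used in the study is specified next), assuming a per-cell data frame cells with columns in_cluster, case, nUMI, percent_mt, and donor:

```r
## Illustrative MASC-style test for a single cluster (not the MASC package).
## `cells`: one row per cell; in_cluster (TRUE/FALSE), case (factor: control vs case),
## nUMI and percent_mt (technical covariates), donor (random-effect grouping).
library(lme4)

null_model <- glmer(in_cluster ~ scale(nUMI) + scale(percent_mt) + (1 | donor),
                    data = cells, family = binomial)
full_model <- glmer(in_cluster ~ case + scale(nUMI) + scale(percent_mt) + (1 | donor),
                    data = cells, family = binomial)

anova(null_model, full_model)   # likelihood-ratio test for the case/control term
exp(fixef(full_model))          # exponentiated coefficients; the case term gives the odds ratio
```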
We fit a logistic regression model for each identified cluster within one tissue, with the nUMIs and percent mitochondrial (% MT) content as cell-level fixed effects and the donor sample as a random effect: $$ \log \left[\frac{Y_{i,c}}{1-{Y}_{i,c}}\right]={\beta}_{\mathrm{case}}{X}_{i,\mathrm{case}}+{\beta}_{\mathrm{tech}1}{X}_{i,\mathrm{tech}1}+{\beta}_{\mathrm{tech}2}{X}_{i,\mathrm{tech}2}+\left({\varphi}_d\mid d\right), $$ where Yi,c is the probability that cell i belongs to cluster c (so the left-hand side is the log odds), βcase is the log odds ratio for case (severe COVID-19) versus control (healthy) status, βtech1 is the coefficient for the technical cell-level covariate nUMIs, βtech2 is the coefficient for the technical cell-level covariate % mitochondrial genes, Xi denotes the corresponding covariate values for cell i, and (φd ∣ d) is the random effect of donor d. We thus used this logistic regression model to test for differentially abundant macrophage clusters associated with severe COVID-19 while correcting for the technical cell-level and donor-level covariates. Similarly, we also tested for differentially abundant macrophage clusters associated with inflamed CD compared to non-inflamed CD, RA compared to OA, and inflamed UC compared to healthy colon, accounting for technical cell-level and donor-specific covariates. We computed MASC P values from likelihood-ratio tests and odds ratios for each tested cluster and used Bonferroni correction to report the macrophage clusters that are statistically significantly more abundant in severe/inflamed samples compared to healthy or non-inflamed controls.
Gene score calculation
We calculated a CXCL10+ CCL2+ gene score for each single-cell profile from an external single-cell RNA-seq dataset of severe COVID-19 BALF [29]. The gene score was calculated as the sum of counts for the CXCL10+ CCL2+ marker genes (n ≈ 70) as a percentage of the total gene counts for each cell.
Pathway enrichment analysis
For pathway gene set enrichment, we used the msigdbr R package with 4,872 gene sets, including the C5 (Gene Ontology), C7 (immunologic signatures), and H (hallmark) collections from MSigDB [37], to calculate enriched pathways of the macrophage states for each disease tissue. For all analyses and plots, sample sizes, measures of center, confidence intervals (mean ± SD or SEM), and statistical significance are presented in the figures, figure legends, and text. Results were considered statistically significant at P < 0.05 after Bonferroni correction, as indicated in the figure legends and text.
A reference of > 300,000 immune single-cell profiles across inflammatory diseases and COVID-19
To compare hematopoietic cells across inflammatory diseases and COVID-19 in an unbiased fashion, we aggregated 307,084 single-cell RNA-seq profiles from 125 healthy or inflammatory disease-affected tissue samples spanning six disorders: (1) colon from healthy individuals and patients with inflamed or non-inflamed UC [15]; (2) terminal ileum from patients with inflamed or non-inflamed CD [27]; (3) synovium from patients with RA or OA [13, 25]; (4) kidney from patients with SLE or healthy controls [26]; (5) lung from patients with interstitial lung disease [28]; and (6) BALF from healthy individuals and those with mild or severe COVID-19 [4] (Fig. 1a, b, Additional file 2: Figure S1a, Additional file 1: Table S1). We developed a pipeline for multi-tissue integration and disease association at the single-cell level (Fig. 1a, "Methods"). Where feasible, we obtained raw reads and re-mapped them to the GRCh38 genome assembly.
We then aggregated raw counts for 17,054 shared genes across studies into a single matrix, performed consistent quality control (QC), library size normalization, and principal component analysis [38] (PCA) ("Methods"). To account for different cell numbers from different datasets, we performed weighted PCA, assigning higher weights to cells from datasets with a relatively small number of cells and vice versa. In the integrated PCA embedding, we modeled and removed the effects of technology, tissue, and donor with Harmony [16] to identify shared cell states across studies and diseases ("Methods"). Before Harmony, cells grouped primarily based on tissue source (Additional file 2: Figure S1b). After Harmony, < 1% of the variation explained by PC1 and PC2 was attributable to tissue source and sample, while > 60% was attributable to previously defined cell types (Fig. 1c). Importantly, rare pathogenic cell types within tissue, such as germinal center B cells in inflamed UC colon and age-associated B cells in RA synovium, were identifiable in the integrated space (Additional file 2: Figure S1c). We confirmed the degree of cross sample, tissue, technology, and cell-type mixing with an independent measure of single-cell integration: LISI [16, 22] (Local Inverse Simpson's Index). An increased iLISI (integration LISI) score after batch correction compared to before batch correction indicates a better mixing of batches after correction (Fig. 1d and Additional file 2: Figure S2a). Integrative analysis of > 300,000 single-cell profiles from five inflammatory disease tissues and COVID-19 BALF. a Overall study design and single-cell analysis, including the integrative pipeline, a single-cell reference dataset, fine-grained analysis to identify shared macrophage states, and disease association analysis. b Number of cells and donor samples from each healthy and disease tissue. c Percent of variance explained in the gene expression data by pre-defined broad cell type, tissue, sample, and technology for the first and second principal component (PC1 and PC2) before and after batch effect correction. d iLISI score before and after batch correction to measure the mixing levels of donor samples and tissue sources. An iLISI (integration LISI) score of 1.0 denotes no mixing while higher scores indicate better mixing of batches. e Integrative clustering of 307,084 cells reveals common immune cell types from different tissue sources. f Immune cells from separate tissue sources in the same UMAP coordinates. Cells from the same cell types are projected next to each other in the integrative UMAP space. g Heatmap of cell-type lineage marker genes. Gene signatures were selected based on AUC > 0.6 and P < 0.05 by Bonferroni correction comparing cells from one cell type to the others In this integrated space, we performed graph-based clustering [32] and visualization with UMAP (Uniform Manifold Approximation and Projection) [33]. We identified 9 major cell-type clusters (Fig. 1e) present in all six tissues (Fig. 1f) and diseases (Additional file 2: Figure S2b). We labeled the clusters with canonical markers (Fig. 1g, Additional file 3: Table S2): CD3D+ T cells, NCAM1+ NK cells, MS4A1+ B cells, MZB1+ plasma cells, FCGR3A+/CD14+ macrophages, CD1C+ dendritic cells (DCs), TPSAB1+ mast cells, and MKI67+ cycling T and B cells. While the proportion of these immune populations differed substantially among tissues, macrophages represented a major component in each tissue (Additional file 2: Figure S2c). 
For example, samples obtained from lung tissues and BALF, whether from healthy controls or patients with ILD and COVID-19, contained the highest proportion of macrophages (74.8% of total hematopoietic cells) (Fig. 1f, Additional file 2: Figure S2c). In contrast, while RA synovium, SLE kidney, and CD ileum contained 9.4% macrophages, T lymphocytes comprised the majority of cells in these tissues (55.7%). The UC colon samples contained 8.3% macrophages, but had a distinctively high abundance of plasma cells (42.4%) (Additional file 2: Figure S2c). Identification of shared inflammatory macrophage states across inflammatory disease tissues and COVID-19 lungs To resolve the heterogeneity within the macrophage compartment, we analyzed 74,373 macrophages from 108 donors and performed weighted PCA and fine clustering analysis to define shared and distinct states across diseases (Fig. 2a, Additional file 2: Figure S3a, Additional file 4: Table S3). We identified four shared macrophage states defined by different marker sets: (1) CXCL10+ CCL2+ cells, (2) FCN1+ cells, (3) MRC1+ FABP4+ cells, and (4) C1QA+ cells (Fig. 2a, b, Additional file 2: Figure S3b). The CXCL10+ CCL2+ cells and the FCN1+ cells expressed classic inflammatory genes [15] including IL1B, S100A8, CCL3, CXCL11, STAT1, IFNGR1, and NFKB1 (Fig. 2b, c). A higher proportion of inflammatory macrophages in severe COVID-19 expressed these inflammation-associated genes compared to healthy BALF (Additional file 2: Figure S3c). We detected the gene signature for the CXCL10+ CCL2+ inflammatory macrophage state in a higher proportion of macrophages from severe COVID-19 BALF than from other inflamed tissues (Fig. 2c). Integrative analysis of tissue-level macrophages reveals shared CXCL10+ CCL2+ and FCN1+ inflammatory macrophage states. a Integrative clustering of 74,373 macrophages from individuals from BALF, lung, kidney, colon, ileum, and synovium. b Density plot of cells with non-zero expression of marker genes in UMAP. c Proportion of inflammatory macrophages that express cytokines and inflammatory genes in severe COVID-19 compared to those in inflamed RA, CD, and UC. Orange represents CXCL10+ CCL2+ state-specific genes. d Previously defined inflammatory macrophages from diseased tissues are clustered with the majority of the macrophages from severe COVID-19. e Z-score of the pseudo-bulk expression of marker genes (AUC > 0.6 and Bonferroni-adjusted P < 10−5) for the CXCL10+ CCL2+ and FCN1+ macrophages. Columns show pseudo-bulk expression. f The proportions of CXCL10+ CCL2+ macrophages of total macrophages per donor sample are shown from healthy BALF (n = 3), mild (n = 3), and severe (n = 6) COVID-19, non-inflamed CD (n = 10) and inflamed CD (n = 12), OA (n = 2) and RA (n = 15), and healthy colon (n = 12), non-inflamed UC (n = 18), and inflamed UC (n = 18). Box plots summarize the median, interquartile, and 75% quantile range. P is calculated by Wilcoxon rank-sum test within each tissue. The association of each cluster with severe/inflamed compared to healthy control was tested. 95% CI for the odds ratio (OR) is given. MASC P is calculated using one-sided F tests conducted on nested models with MASC [36]. The clusters above the dashed line (Bonferroni correction) are statistically significant. Clusters that have fewer than 30 cells are removed. 
g GSEA analysis for each tissue revealed shared enriched pathways for CXCL10+ CCL2+ macrophages: TNF-α signaling via NF-kB (Hallmark gene set), response to interferon gamma (GO:0034341), COVID-19 SARS-CoV-2 infection of Calu-3 cells (GSE147507 [39]), positive regulation of cytokine production (GO:0001819), response to tumor necrosis factor (GO:0034612), regulation of innate immune response (GO:0045088), and defense response to virus (GO:0051607)
Liao et al. [4] previously identified CXCL10+ CCL2+ and FCN1+ populations as inflammatory states in the COVID-19 BALF samples used in this integrated analysis. In our multi-disease clustering, the inflammatory macrophages from inflamed RA synovium and from UC and CD intestinal tissue largely mapped to the same two inflammatory macrophage states seen in severe COVID-19 (Fig. 2d, Additional file 2: Figure S3d-e). We found all four states represented across the six tissues, and we quantified this overlap with LISI and estimated the variance explained in the PC space (Additional file 2: Figure S3f, g). Strikingly, we observed that the FCN1+ inflammatory macrophage state dominated in SLE kidney, with few cells in the CXCL10+ CCL2+ state (Fig. 2d), suggesting that our integrative analysis was effective in identifying shared inflammatory states while maintaining distinct patterns in a subset of tissues. To comprehensively define markers for the two inflammatory tissue macrophage states shared across COVID-19, RA, UC, and CD, we performed a pseudo-bulk differential expression analysis ("Methods," Additional file 5: Table S4, fold change > 2, AUC > 0.6, Bonferroni-adjusted P < 10−5). The CXCL10+ CCL2+ inflammatory macrophages displayed significantly higher expression of CXCL10, CXCL11, CCL2, CCL3, GBP1, and IDO1 in severe COVID-19, inflamed RA, and CD compared to the FCN1+ macrophages (Fig. 2e). In contrast, the FCN1+ macrophages displayed high expression of FCN1 (Ficolin-1) and a series of alarmins such as S100A8 and S100A9 in most of the inflamed tissues (Fig. 2e). Both inflammatory macrophage states showed high expression of STAT1 and IRF1, transcription factors that promote a pro-inflammatory macrophage phenotype, in inflamed RA, UC, CD, and COVID-19 BALF relative to healthy or non-inflamed tissues (Fig. 2e). Within the CXCL10+ CCL2+ state, there was notable heterogeneity across cells in IL1B expression, indicating that this macrophage state could be further delineated (Additional file 2: Figure S4a-b). Moreover, comparing the effect sizes of all genes in the CXCL10+ CCL2+ and FCN1+ subsets against the MRC1+ FABP4+ macrophages within each tissue further highlighted, for each subset, a similar set of inflammatory genes with the greatest fold changes across all diseases (Additional file 2: Figure S5). As validation, we assessed the macrophage phenotypes found in a recent analysis of single cells from severe COVID-19 BALF [29]. Notably, we observed a significant correlation between the cross-disease shared CXCL10+ CCL2+ macrophages and two monocyte-derived alveolar macrophage (MoAM) inflammatory phenotypes from this independent severe COVID-19 cohort (wherein they were referred to as MoAM1 and MoAM2) [29] (Additional file 2: Figure S6a-d).
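The per-cell signature score used for this comparison (see "Methods") reduces to a few lines of R; the following is a hypothetical sketch assuming a genes-by-cells UMI count matrix counts with gene names as row names and a character vector cxcl10_ccl2_genes of the state's marker genes.

```r
## Minimal sketch of the CXCL10+ CCL2+ gene score: summed counts of the signature genes
## as a percentage of each cell's total counts. Inputs are assumed, hypothetical objects.
library(Matrix)

score_signature <- function(counts, signature_genes) {
  genes <- intersect(signature_genes, rownames(counts))
  100 * Matrix::colSums(counts[genes, , drop = FALSE]) / Matrix::colSums(counts)
}

# e.g., score every BALF cell from the external cohort and compare annotated subsets
# cxcl10_ccl2_score <- score_signature(balf_counts, cxcl10_ccl2_genes)
```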
We further examined CXCL10+ CCL2+ macrophage-associated genes in CD14+ cells from inflamed (leukocyte-rich) RA, non-inflamed (leukocyte-poor) RA, and OA [13]; we observed significant enrichment of CXCL10+ CCL2+ state-specific genes (CXCL10, CXCL9, CCL3, GBP1, and IDO1), FCN1+ state-specific genes (FCN1, S100A9, CD300E, IFITM3, and CFP), and genes associated with both states (IRF1, BCL2A1, and STAT1) in the macrophages from inflamed RA compared to non-inflamed RA and OA (Additional file 2: Figure S6e). By integrating macrophages across multiple inflamed tissues, we show that inflammatory subsets identified in COVID-19 may share common phenotypes with macrophages from other inflammatory conditions. To determine which cell states were associated with disease, we tested the association of each state with severe COVID-19 compared to healthy BALF using a logistic regression model accounting for technical cell-level and donor-specific effects [36] ("Methods"). We observed that the CXCL10+ CCL2+ and FCN1+ states are more abundant in severe COVID-19 than in healthy BALF (Fig. 2f). The CXCL10+ CCL2+ inflammatory state was also expanded in inflamed CD compared to non-inflamed CD, in RA compared to non-inflammatory OA, and in inflamed UC compared to healthy colon (Fig. 2f). Indeed, we observed significant enrichment of the TNF-α signaling via nuclear factor-κB (NF-kB) pathway and the response to interferon gamma pathway in the CXCL10+ CCL2+ cells from the inflamed tissues we examined (Fig. 2g). Consistent with this result, we also observed reduced frequencies of MRC1+ FABP4+ macrophages in each inflamed tissue (Fig. 2f). Taken together, these results indicate that the shared CXCL10+ CCL2+ inflammatory macrophage phenotype is expanded in inflamed tissues and severe COVID-19 BALF.
Tissue inflammatory conditions that drive distinct macrophage phenotypes
To define the factors that shape disease-associated macrophage states in affected tissues, we generated human blood-derived macrophages from four donors and activated them with eight defined mixtures of inflammatory factors, focusing particularly on the effects of antiviral interferons (IFN-β and IFN-γ) and pro-inflammatory cytokines such as TNF that mediate CRS and tissue pathology in RA and IBD [40] (Fig. 3a, Additional file 2: Figure S7a, "Methods"). We included co-cultured fibroblasts in some conditions to provide factors produced by resident stroma. To reduce confounding batch effects during scRNA-seq barcode labeling, we used a single-cell antibody-based hashing strategy [41] to multiplex samples from different stimulatory conditions in one sequencing run (Additional file 6: Table S5, Additional file 7: Table S6). We obtained 25,823 post-QC cells after applying the 10X Genomics droplet-based single-cell assay (Additional file 2: Figure S7b-d, "Methods"). In the UMAP space, a strong response to IFN-γ drove much of the observed variation; cells treated with IFN-γ clustered well apart from all other conditions (Fig. 3b). All conditions containing IFN-γ (Type II interferon) resulted in macrophages with high expression levels of the transcription factor STAT1, the interferon-stimulated genes CXCL9 and CXCL10, and inflammatory receptors such as FCGR1A [42] (Fig. 3c). Consistent with well-established effects, macrophages stimulated by TNF induced MMP9, IL1B, and PLAUR expression, while IL-4 stimulation increased expression of CCL23, MRC1, and LIPA (Fig. 3c).
Human blood-derived macrophages stimulated by eight mixtures of inflammatory factors reveal heterogeneous macrophage phenotypes. a Schematic representation of the single-cell cell hashing experiment on human blood-derived macrophages stimulated by eight mixtures of inflammatory factors from 4 donors. A single-cell antibody-based hashing strategy was used to multiplex samples from different stimulatory conditions in one sequencing run. Here fibro denotes fibroblasts. b The 25,823 stimulated blood-derived macrophages from 4 donors are colored and labeled in UMAP space. c Log-normalized expression of genes that are specific to different conditions are displayed in violin plots. Mean of normalized gene expression is marked by a line and each condition by individual coloring. CPM denotes counts per million. d Stimulation effect estimates of genes that are most responsive to conditions with IFN-γ or TNF-α with fibroblasts comparing to untreated macrophages are obtained using linear modeling. Fold changes with 95% CI are shown. e Fold changes in gene expression after TNF-α and IFN-γ stimulation vs. TNF-α stimulation (left), and TNF-α and IFN-γ vs. IFN-γ stimulation (right) for each gene. Genes in red have fold change > 2, Bonferroni-adjusted P < 10−7, and a ratio of TNF-α and IFN-γ fold change to TNF-α fold change greater than 1 (left) or a ratio of TNF-α and IFN-γ fold change to IFN-γ fold change greater than 1 (right). Genes that are most responsive to either IFN-γ (left) or TNF-α (right) are labeled Using linear models, we identified the genes with the greatest changes in expression after each stimulation and estimated the effect sizes ("Methods"). We found that 403 genes (fold change > 2, FDR < 0.05) were significantly enriched in the TNF-α and IFN-γ stimulation compared to untreated macrophages. All conditions with IFN-γ resulted in similar effect sizes for induction of CCL2, CXCL9, CXCL10, SLAMF7, and STAT1 expression—indicating a robust IFN-γ driven macrophage signature (Fig. 3d left, Additional file 2: Figure S7e). This included robust induction by IFN-γ in macrophages co-treated with TNF (Fig. 3e left). Collectively, the TNF-driven gene expression patterns appeared more modifiable by co-stimulatory factors than IFN-γ. For example, co-cultured fibroblasts further increased TNF-induced MMP9, PLAUR, and VCAN expression, while co-stimulating with IFN-γ repressed TNF induction of these genes (Fig. 3d right). Nonetheless, a portion of the TNF effect was well preserved in TNF plus IFN-γ co-stimulated cells, including genes such as CCL2, CCL3, IL1B, and NFKBIA (Fig. 3e right). TNF-α and IFN-γ ultimately generated a macrophage phenotype with increased expression of NF-kB targets such as NFKBIA, IL1B, and HLA-DRA together with STAT1 targets such as CXCL9 and CXCL10, and GBP1 and GBP5 (Fig. 3d, e). Identification of an IFN-γ and TNF-α synergistically driven inflammatory macrophage phenotype expanded in severe COVID-19 lungs and other inflamed disease tissues Our cross-tissue integrative analysis revealed two shared inflammatory macrophage states (Fig. 2). To further understand these cell states and the in vivo inflammatory tissue factors driving them, we integrated the single-cell transcriptomes of both the tissue macrophages and our experimentally stimulated macrophages. After combining and correcting for tissue, technology, and donor effects, we identified 7 distinct macrophage clusters (Fig. 4a). 
We evaluated the robustness of the clustering and observed that our clusters were stable to the choice of the variable genes used in the analysis (Additional file 2: Figure S8a). The tissue CXCL10+ CCL2+ inflammatory macrophages from UC colon, CD ileum, RA synovium, and COVID-19 BALF were transcriptionally most similar to macrophages stimulated by the combination of TNF-α plus IFN-γ in cluster 1 (Fig. 4b, c, Additional file 2: Figure S8b-c). The blood-derived macrophages in cluster 1 included macrophages stimulated by four different conditions all including IFN-γ, of which the most abundant population (37.5%) were macrophages stimulated by TNF-α with IFN-γ (Fig. 4c, d). Comparing our results to a previously reported macrophage spectrum with 28 unique stimulatory conditions [11], we observed the highest expression of cluster 1-associated genes in their macrophages exposed to conditions including both TNF and IFN-γ (Additional file 2: Figure S9a). TNF-α and IFN-γ driven CXCL10+ CCL2+ macrophages are expanded in severe COVID-19 and other inflamed tissues. a Integrative clustering of stimulated blood-derived macrophages with tissue-level macrophages from COVID-19 BALF, UC colon, CD ileum, and RA synovium. b The previously identified tissue-level CXCL10+ CCL2+ state corresponds to cluster 1 (orange), and the FCN1+ inflammatory macrophage state corresponds to cluster 2 (yellow). Macrophages from each tissue source are displayed separately in the same UMAP coordinates as in a. c Heatmap indicates the concordance between stimulatory conditions and integrative cluster assignments. Z-score of the number of cells from each stimulatory condition to the integrative clusters is shown. d For the blood-derived stimulated macrophages, the proportions of CXCL10+ CCL2+ macrophages of total macrophages per stimulated donor are shown. e PCA analysis on the identified inflammatory macrophages. The first PC captures a gradient from the FCN1+ state to the CXCL10+ CCL2+ state. f Upon this, macrophages from severe COVID-19 mapped to PC1 present a shift in cell frequency between the FCN1+ and CXCL10+ CCL2+ (Wilcoxon rank-sum test P = 1.4e−07). The TNF-α stimulated macrophages (mean − 0.27) were projected to the left of the FCN1+ tissue macrophages (mean − 0.14), while the IFN-γ (mean 0.10), and TNF-α and IFN-γ (mean 0.23), stimulated macrophages were projected to the right of the CXCL10+ CCL2+ tissue macrophages (− 0.03). g Genes associated with CXCL10+ CCL2+ driven by PC1 show high expression levels on the severe COVID-19 macrophages and also TNF-α and IFN-γ stimulated blood-derived macrophages. We recapitulate the gradient observed in vivo across multiple diseases by stimulating macrophages ex vivo with synergistic combinations of TNF-α and IFN-γ We further identified a principal component (PC1) that captures a gradient from the FCN1+ state to the CXCL10+ CCL2+ state by applying PCA analysis to the tissue-level inflammatory macrophages (Fig. 4e), suggesting a potential continuum between the inflammatory FCN1+ and CXCL10+ CCL2+ states. Aligning cells from separate tissues along PC1, we found that the majority of inflammatory macrophages in RA, UC, and CD align more closely with the FCN1+ state (Additional file 2: Figure S9b). In severe COVID-19, we observed a shift in cell frequency between the FCN1+ and CXCL10+ CCL2+ macrophages (Wilcoxon rank-sum test P = 1.4e−07, Fig. 4f). 
Furthermore, we mapped the experimentally stimulated blood-derived macrophages to PC1 based on the top 50 genes with the largest and smallest PC1 gene loadings. Strikingly, the TNF-α stimulated macrophages (mean − 0.27) map to the left of the FCN1+ tissue macrophages (mean − 0.14), while the IFN-γ (mean 0.10), and TNF-α and IFN-γ (mean 0.23), stimulated macrophages map to the right of the CXCL10+ CCL2+ tissue macrophages (− 0.03) (Fig. 4f). This suggests the importance of IFN-γ stimulation in order to drive a phenotype most similar to the CXCL10+ CCL2+ state, with the addition of TNF stimulation resulting in further pushing of the macrophage phenotype along the PC1 trajectory. We observed higher expression levels of PC1-associated genes, for example CXCL10, STAT1, CCL2, CCL3, NFKBIA, and GBP1, in CXCL10+ CCL2+ severe COVID-19 compared to FCN1+ cells, and higher induced expression levels of these same genes in TNF-α and IFN-γ stimulation compared to TNF-α stimulation alone (Fig. 4g). Taken together, these results suggest we are able to recapitulate the gradient observed in vivo across multiple diseases by stimulating macrophages ex vivo with synergistic combinations of IFN-γ and TNF-α. Our study demonstrates the power of a multi-disease reference dataset to interpret cellular phenotypes and tissue states, while placing them into a broader context that may provide insights into disease etiology and rationale for repurposing medications. Such meta-datasets can increase the resolution of cell states and aid understanding of shared cellular states found in less well-understood diseases such as COVID-19. Amassing diverse tissues from > 120 donors with a wide range of diseases, we built a human tissue inflammation single-cell reference. Applying powerful computational strategies, we integrated > 300,000 single-cell transcriptomes and corrected for factors that interfere with resolving cell-intrinsic expression patterns. In particular, we have identified a CXCL10+ CCL2+ inflammatory macrophage phenotype shared between tissues affected in autoimmune disease (RA), inflammatory diseases (CD and UC), and infectious disease (COVID-19). We observed that the abundance of this population is associated with inflammation and disease severity. With integrated analysis of an ex vivo dataset, we elucidated its potential cytokine drivers: IFN-γ together with TNF-α. Macrophages are ideal biologic indicators for the in vivo state of a tissue due to their dynamic nature, robust responses to local factors, and widespread presence in most tissues. Through our cross-disease analysis, we defined two inflammatory macrophage states that can be found in selected groups of seemingly unrelated tissues and diseases. Most notably, the CXCL10+ CCL2+ inflammatory macrophages predominate in the bronchoalveolar lavage of patients with severe COVID-19, and are also detected in synovial tissue affected by RA and inflamed intestine from patients with IBD. These cells are distinguished by high levels of CXCL10 and CXCL11, STAT1, IFNGR1, and IFNGR2, as well as CCL2 and CCL3, NFKB1, TGFB1, and IL1B. This gene expression pattern of the JAK/STAT and NF-kB-dependent cytokines implicates induction by an intriguing combination of both the IFN-induced JAK/STAT and TNF-induced NF-kB pathways and, in conjunction, the overall transcriptome program most closely aligns with macrophages stimulated by IFN-γ plus TNF-α. 
As both JAK inhibitors and anti-TNF medications have outstanding efficacy in treating RA, and anti-TNFs are the most commonly used medications for inflammatory bowel disease, including Crohn's disease [2], these therapies may target the inflammatory macrophages in the severe COVID-19 lung during the phase involving cytokine release syndrome [43]. Infection with SARS-CoV-2 triggers a local immune response and inflammation in the lung compartment, recruiting macrophages that release and respond to inflammatory cytokines and chemokines [6]. This response may change with disease progression, in particular during the transition towards the cytokine storm associated with severe disease. Intriguingly, our cross-disease tissue study strongly suggests that IFN-γ is an essential component of the inflammatory macrophage phenotype in severe COVID-19. Most studies on interferons and coronaviruses have focused on Type I interferons, such as IFN-β, owing to their robust capacity to interfere with viral replication [44]. Indeed, ongoing research into the administration of recombinant IFN-β has shown promise in reducing the risk of severe COVID-19 disease [45]. However, other studies have indicated that targeting IFN-γ may be an effective treatment for cytokine storm, a driver of severe disease in COVID-19 patients [46, 47]. Additionally, several studies have indicated that targeting IFN-γ signaling with JAK inhibitors such as ruxolitinib, baricitinib, and tofacitinib may provide effective therapy in severe COVID-19 patients [43, 48,49,50,51]. Clinical trials of Type II interferon inhibitors in COVID-19 are under way (NCT04337359, NCT04359290, and NCT04348695) [43]. Recent research has also shown that the synergism of TNF-α and IFN-γ can trigger inflammatory cell death, tissue damage, and mortality in SARS-CoV-2 infection [52], and has documented increased levels of IFN-γ, TNF-α, CXCL10, and CCL2 in the serum of severe COVID-19 patients [53]. In agreement with these studies, our findings indicate that IFN-γ, together with TNF-α, is an important mediator of severe disease, in part through activation of the inflammatory CXCL10+ CCL2+ macrophage subset. We hypothesize that combinatorial treatment with anti-Type II interferon agents (such as JAK inhibitors) and anti-TNF therapies might prove effective at inhibiting the cytokine storm driving acute respiratory distress syndrome in patients with severe COVID-19. We are aware that, owing to the current crisis situation, only a limited number of longitudinal BALF samples from COVID-19 patients were available for our cross-tissue study, and we expect to replicate our findings in broader COVID-19 patient cohorts in the future. Of course, the presence of an IFN-γ and TNF phenotype is an association that may not be causal; whether targeting these cytokines is reasonable will depend on additional clinical investigation. In this study, we built a single-cell immune reference from multiple inflamed disease tissues and identified two inflammatory macrophage states, CXCL10+ CCL2+ and FCN1+ inflammatory macrophages, that were shared between COVID-19 and inflammatory diseases such as RA, CD, and UC. We demonstrated that the CXCL10+ CCL2+ macrophages are transcriptionally similar to human blood-derived macrophages stimulated by IFN-γ and TNF-α and were expanded in severe COVID-19 lungs and inflamed RA, CD, and UC tissues.
This finding indicates that Type II interferon and TNF responses may be involved in late-stage cytokine storm-driven severe COVID-19 and inhibiting these responses in the inflammatory macrophages may be a promising treatment. Our cross-tissue single-cell integrative strategy along with our disease association analysis provides a proof-of-principle that identifying shared pathogenic features across human inflamed tissues and COVID-19 lungs has the potential to guide drug repurposing. The single-cell RNA-seq data for blood-derived macrophages are available in the Gene Expression Omnibus database with accession number GSE168710, https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE168710 [54]. Source code repository to reproduce analyses is located at https://github.com/immunogenomics/inflamedtissue_covid19_reference [55]. The publicly available datasets analyzed during the study are available from the GEO repository: GSE134809 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE134809) [27] GSE145926 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE145926) [4] GSE47189 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE47189) [11] dbGap repository: phs001457.v1.p1 (https://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs001457.v1.p1) [13] Single Cell Portal: SCP259 (https://singlecell.broadinstitute.org/single_cell/study/SCP259/intra-and-inter-cellular-rewiring-of-the-human-colon-during-ulcerative-colitis) [15] McInnes IB, Schett G. The pathogenesis of rheumatoid arthritis. N Engl J Med. 2011;365(23):2205–19. https://doi.org/10.1056/NEJMra1004965. Neurath MF. Cytokines in inflammatory bowel disease. Nat Rev Immunol. 2014;14(5):329–42. https://doi.org/10.1038/nri3661. Liu J, Zheng X, Tong Q, Li W, Wang B, Sutter K, Trilling M, Lu M, Dittmer U, Yang D. Overlapping and discrete aspects of the pathology and pathogenesis of the emerging human pathogenic coronaviruses SARS-CoV, MERS-CoV, and 2019-nCoV. J Med Virol. 2020;92(5):491–4. https://doi.org/10.1002/jmv.25709. Liao M, Liu Y, Yuan J, Wen Y, Xu G, Zhao J, Cheng L, Li J, Wang X, Wang F, Liu L, Amit I, Zhang S, Zhang Z. Single-cell landscape of bronchoalveolar immune cells in patients with COVID-19. Nat Med. 2020;26(6):842–4. https://doi.org/10.1038/s41591-020-0901-9. Wen W, Su W, Tang H, le W, Zhang X, Zheng Y, Liu X, Xie L, Li J, Ye J, Dong L, Cui X, Miao Y, Wang D, Dong J, Xiao C, Chen W, Wang H. Immune cell profiling of COVID-19 patients in the recovery stage by single-cell sequencing. Cell Discov. 2020;6(1):31. https://doi.org/10.1038/s41421-020-0168-9. Huang C, Wang Y, Li X, Ren L, Zhao J, Hu Y, Zhang L, Fan G, Xu J, Gu X, Cheng Z, Yu T, Xia J, Wei Y, Wu W, Xie X, Yin W, Li H, Liu M, Xiao Y, Gao H, Guo L, Xie J, Wang G, Jiang R, Gao Z, Jin Q, Wang J, Cao B. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet. 2020;395(10223):497–506. https://doi.org/10.1016/S0140-6736(20)30183-5. Lucas C, et al. Longitudinal analyses reveal immunological misfiring in severe COVID-19. Nature. 2020;584(7821):463–9. https://doi.org/10.1038/s41586-020-2588-y. He W, Kapate N, Shields CW 4th, Mitragotri S. Drug delivery to macrophages: a review of targeting drugs and drug carriers to macrophages for inflammatory diseases. Adv Drug Deliv Rev. 2019;165-166:15–40. https://doi.org/10.1016/j.addr.2019.12.001. Kinne RW, Bräuer R, Stuhlmüller B, Palombo-Kinne E, Burmester GR. Macrophages in rheumatoid arthritis. Arthritis Res. 2000;2(3):189–202. https://doi.org/10.1186/ar86. 
Ma W-T, Gao F, Gu K, Chen D-K. The role of monocytes and macrophages in autoimmune diseases: a comprehensive review. Front Immunol. 2019;10:1140. https://doi.org/10.3389/fimmu.2019.01140. Xue J, Schmidt SV, Sander J, Draffehn A, Krebs W, Quester I, de Nardo D, Gohel TD, Emde M, Schmidleithner L, Ganesan H, Nino-Castro A, Mallmann MR, Labzin L, Theis H, Kraut M, Beyer M, Latz E, Freeman TC, Ulas T, Schultze JL. Transcriptome-based network analysis reveals a spectrum model of human macrophage activation. Immunity. 2014;40(2):274–88. https://doi.org/10.1016/j.immuni.2014.01.006. Papalexi E, Satija R. Single-cell RNA sequencing to explore immune cell heterogeneity. Nat Rev Immunol. 2018;18(1):35–45. https://doi.org/10.1038/nri.2017.76. Zhang F, et al. Defining inflammatory cell states in rheumatoid arthritis joint synovial tissues by integrating single-cell transcriptomics and mass cytometry. Nat Immunol. 2019;20(7):928–42. https://doi.org/10.1038/s41590-019-0378-1. Kuo D, Ding J, Cohn IS, Zhang F, Wei K, Rao DA, Rozo C, Sokhi UK, Shanaj S, Oliver DJ, Echeverria AP, DiCarlo EF, Brenner MB, Bykerk VP, Goodman SM, Raychaudhuri S, Rätsch G, Ivashkiv LB, Donlin LT. HBEGF+ macrophages in rheumatoid arthritis induce fibroblast invasiveness. Sci Transl Med. 2019;11(491):eaau8587. https://doi.org/10.1126/scitranslmed.aau8587. Smillie CS, et al. Intra- and inter-cellular rewiring of the human colon during ulcerative colitis. Cell. 2019;178:714–730.e22. Korsunsky I, Millard N, Fan J, Slowikowski K, Zhang F, Wei K, Baglaenko Y, Brenner M, Loh PR, Raychaudhuri S. Fast, sensitive and accurate integration of single-cell data with harmony. Nat Methods. 2019;16(12):1289–96. https://doi.org/10.1038/s41592-019-0619-0. Stuart T, Satija R. Integrative single-cell analysis. Nat Rev Genet. 2019;20(5):257–72. https://doi.org/10.1038/s41576-019-0093-7. Hie B, Bryson B, Berger B. Efficient integration of heterogeneous single-cell transcriptomes using Scanorama. Nat Biotechnol. 2019;37(6):685–91. https://doi.org/10.1038/s41587-019-0113-3. Haghverdi L, Lun ATL, Morgan MD, Marioni JC. Batch effects in single-cell RNA-sequencing data are corrected by matching mutual nearest neighbors. Nat Biotechnol. 2018;36(5):421–7. https://doi.org/10.1038/nbt.4091. Butler A, Hoffman P, Smibert P, Papalexi E, Satija R. Integrating single-cell transcriptomic data across different conditions, technologies, and species. Nat Biotechnol. 2018;36(5):411–20. https://doi.org/10.1038/nbt.4096. Polański K, Young MD, Miao Z, Meyer KB, Teichmann SA, Park JE. BBKNN: fast batch alignment of single cell transcriptomes. Bioinformatics. 2020;36(3):964–5. https://doi.org/10.1093/bioinformatics/btz625. Tran HTN, Ang KS, Chevrier M, Zhang X, Lee NYS, Goh M, Chen J. A benchmark of batch-effect correction methods for single-cell RNA sequencing data. Genome Biol. 2020;21(1):12. https://doi.org/10.1186/s13059-019-1850-9. Ivashkiv LB. IFNγ: signalling, epigenetics and roles in immunity, metabolism, disease and cancer immunotherapy. Nat Rev Immunol. 2018;18(9):545–58. https://doi.org/10.1038/s41577-018-0029-z. Barrat FJ, Crow MK, Ivashkiv LB. Interferon target-gene expression and epigenomic signatures in health and disease. Nat Immunol. 2019;20(12):1574–83. https://doi.org/10.1038/s41590-019-0466-2. Stephenson W, Donlin LT, Butler A, Rozo C, Bracken B, Rashidfarrokhi A, Goodman SM, Ivashkiv LB, Bykerk VP, Orange DE, Darnell RB, Swerdlow HP, Satija R. Single-cell RNA-seq of rheumatoid arthritis synovial tissue using low-cost microfluidic instrumentation. 
Nat Commun. 2018;9(1):791. https://doi.org/10.1038/s41467-017-02659-x. Arazi A, et al. The immune cell landscape in kidneys of patients with lupus nephritis. Nat Immunol. 2019;20(7):902–14. https://doi.org/10.1038/s41590-019-0398-x. Martin JC, et al. Single-cell analysis of Crohn's disease lesions identifies a pathogenic cellular module associated with resistance to anti-TNF therapy. Cell. 2019;178:1493–1508.e20. Reyfman PA, Walter JM, Joshi N, Anekalla KR, McQuattie-Pimentel AC, Chiu S, Fernandez R, Akbarpour M, Chen CI, Ren Z, Verma R, Abdala-Valencia H, Nam K, Chi M, Han SH, Gonzalez-Gonzalez FJ, Soberanes S, Watanabe S, Williams KJN, Flozak AS, Nicholson TT, Morgan VK, Winter DR, Hinchcliff M, Hrusch CL, Guzy RD, Bonham CA, Sperling AI, Bag R, Hamanaka RB, Mutlu GM, Yeldandi AV, Marshall SA, Shilatifard A, Amaral LAN, Perlman H, Sznajder JI, Argento AC, Gillespie CT, Dematte J, Jain M, Singer BD, Ridge KM, Lam AP, Bharat A, Bhorade SM, Gottardi CJ, Budinger GRS, Misharin AV. Single-cell transcriptomic analysis of human lung provides insights into the pathobiology of pulmonary fibrosis. Am J Respir Crit Care Med. 2018;199(12):1517–36. https://doi.org/10.1164/rccm.201712-2410OC. Grant RA, et al. Circuits between infected macrophages and T cells in SARS-CoV-2 pneumonia. Nature. 2021;590(7847):635–41. https://doi.org/10.1038/s41586-020-03148-w. Bray NL, Pimentel H, Melsted P, Pachter L. Near-optimal probabilistic RNA-seq quantification. Nat Biotechnol. 2016;34(5):525–7. https://doi.org/10.1038/nbt.3519. Ritchie ME, et al. limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 2015;43:e47. Blondel VD, Guillaume J-L, Lambiotte R, Lefebvre E. Fast unfolding of communities in large networks. arXiv [physics.soc-ph]. 2008;(10):P10008. https://arxiv.org/abs/0803.0476. McInnes L, Healy J, Melville J. Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426. 2018. https://arxiv.org/abs/1802.03426. Love MI, Huber W, Anders S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 2014;15(12):550. https://doi.org/10.1186/s13059-014-0550-8. Stoeckius M, Hafemeister C, Stephenson W, Houck-Loomis B, Chattopadhyay PK, Swerdlow H, Satija R, Smibert P. Simultaneous epitope and transcriptome measurement in single cells. Nat Methods. 2017;14(9):865–8. https://doi.org/10.1038/nmeth.4380. Fonseka CY, Rao DA, Teslovich NC, Korsunsky I, Hannes SK, Slowikowski K, Gurish MF, Donlin LT, Lederer JA, Weinblatt ME, Massarotti EM, Coblyn JS, Helfgott SM, Todd DJ, Bykerk VP, Karlson EW, Ermann J, Lee YC, Brenner MB, Raychaudhuri S. Mixed-effects association of single cells identifies an expanded effector CD4+ T cell subset in rheumatoid arthritis. Sci Transl Med. 2018;10(463):eaaq0305. https://doi.org/10.1126/scitranslmed.aaq0305. Liberzon A, Birger C, Thorvaldsdóttir H, Ghandi M, Mesirov JP, Tamayo P. The molecular signatures database (MSigDB) hallmark gene set collection. Cell Syst. 2015;1(6):417–25. https://doi.org/10.1016/j.cels.2015.12.004. Raychaudhuri S, Stuart JM, Altman RB. Principal components analysis to summarize microarray experiments: application to sporulation time series. Pac Symp Biocomput. 2000:455–66. https://pubmed.ncbi.nlm.nih.gov/10902193/. Blanco-Melo D, et al. Imbalanced Host Response to SARS-CoV-2 Drives Development of COVID-19. Cell. 2020;181:1036–1045.e9. Robinson PC, Liew DFL, Liew JW, Monaco C, Richards D, Shivakumar S, Tanner HL, Feldmann M. 
The Potential for Repurposing Anti-TNF as a Therapy for the Treatment of COVID-19. Med. 2020;1(1):90–102. https://doi.org/10.1016/j.medj.2020.11.005. Stoeckius M, Zheng S, Houck-Loomis B, Hao S, Yeung BZ, Mauck WM III, Smibert P, Satija R. Cell Hashing with barcoded antibodies enables multiplexing and doublet detection for single cell genomics. Genome Biol. 2018;19(1):224. https://doi.org/10.1186/s13059-018-1603-1. Dallagi A, Girouard J, Hamelin-Morrissette J, Dadzie R, Laurent L, Vaillancourt C, Lafond J, Carrier C, Reyes-Moreno C. The activating effect of IFN-γ on monocytes/macrophages is regulated by the LIF-trophoblast-IL-10 axis via Stat1 inhibition and Stat3 activation. Cell Mol Immunol. 2015;12(3):326–41. https://doi.org/10.1038/cmi.2014.50. Luo W, Li YX, Jiang LJ, Chen Q, Wang T, Ye DW. Targeting JAK-STAT signaling to control cytokine release syndrome in COVID-19. Trends Pharmacol Sci. 2020;41(8):531–43. https://doi.org/10.1016/j.tips.2020.06.007. Wang BX, Fish EN. Global virus outbreaks: Interferons as 1st responders. Semin Immunol. 2019;43:101300. https://doi.org/10.1016/j.smim.2019.101300. Davoudi-Monfared E, Rahmani H, Khalili H, Hajiabdolbaghi M, Salehi M, Abbasian L, Kazemzadeh H, Yekaninejad MS. Efficacy and safety of interferon β-1a in treatment of severe COVID-19: A randomized clinical trial. Antimicrobial Agents and Chemotherapy. 2020. https://aac.asm.org/content/64/9/e01061-20. Nile SH, Nile A, Qiu J, Li L, Jia X, Kai G. COVID-19: pathogenesis, cytokine storm and therapeutic potential of interferons. Cytokine Growth Factor Rev. 2020;53:66–70. https://doi.org/10.1016/j.cytogfr.2020.05.002. Ye Q, Wang B, Mao J. The pathogenesis and treatment of the `cytokine storm' in COVID-19. J Inf Secur. 2020;80:607–13. Cao Y, et al. Ruxolitinib in treatment of severe coronavirus disease 2019 (COVID-19): A multicenter, single-blind, randomized controlled trial. J Allergy Clin Immunol. 2020;146:137–146.e3. Ahmed A, Merrill SA, Alsawah F, Bockenstedt P, Campagnaro E, Devata S, Gitlin SD, Kaminski M, Cusick A, Phillips T, Sood S, Talpaz M, Quiery A, Boonstra PS, Wilcox RA. Ruxolitinib in adult patients with secondary haemophagocytic lymphohistiocytosis: an open-label, single-centre, pilot trial. Lancet Haematol. 2019;6(12):e630–7. https://doi.org/10.1016/S2352-3026(19)30156-5. Zizzo G, Cohen PL. Imperfect storm: is interleukin-33 the Achilles heel of COVID-19? Lancet Rheumatol. 2020;2(12):e779–90. https://doi.org/10.1016/S2665-9913(20)30340-4. Kalil AC, Patterson TF, Mehta AK. Baricitinib plus remdesivir for hospitalized adults with COVID-19. N Engl J Med. 2021;384(9):795–807. https://doi.org/10.1056/NEJMoa2031994. Karki R, Sharma BR, Tuladhar S, Williams EP, Zalduondo L, Samir P, Zheng M, Sundaram B, Banoth B, Malireddi RKS, Schreiner P, Neale G, Vogel P, Webby R, Jonsson CB, Kanneganti TD. Synergism of TNF-α and IFN-γ triggers inflammatory cell death, tissue damage, and mortality in SARS-CoV-2 infection and cytokine shock syndromes. Cell. 2021;184(1):149–68. Garcia-Beltran WF, et al. COVID-19-neutralizing antibodies predict disease severity and survival. Cell. 2021;184:476–488.e11. Zhang F, Mears JR, Shakib L, Beynor JI, Shanaj S, Korsunsky I, Nathan A, Accelerating Medicines Partnership Rheumatoid Arthritis and Systemic Lupus Erythematosus (AMP RA/SLE) Consortium, Donlin LT, Raychaudhuri S. IFN-γ and TNF-α drive a CXCL10+ CCL2+ macrophage phenotype expanded in severe COVID-19 lungs and inflammatory diseases with tissue inflammation. 
GSE168710, Gene Expression Omnibus, https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE168710 (2021). Zhang F, Mears JR, Shakib L, Beynor JI, Shanaj S, Korsunsky I, Nathan A, Accelerating Medicines Partnership Rheumatoid Arthritis and Systemic Lupus Erythematosus (AMP RA/SLE) Consortium, Donlin LT, Raychaudhuri S. IFN-γ and TNF-α drive a CXCL10+ CCL2+ macrophage phenotype expanded in severe COVID-19 lungs and inflammatory diseases with tissue inflammation. Github, https://github.com/immunogenomics/inflamedtissue_covid19_reference (2021). We thank the Brigham and Women's Hospital Single Cell Genomics Core for assistance in the single-cell hashing experiment. We thank members of the Raychaudhuri Laboratory for discussions. Accelerating Medicines Partnership Rheumatoid Arthritis & Systemic Lupus Erythematosus (AMP RA/SLE) Consortium: Jennifer Albrecht9, Jennifer H. Anolik9, William Apruzzese5, Brendan F. Boyce9, Christopher D. Buckley10, David L. Boyle11, Michael B. Brenner5, S. Louis Bridges Jr12, Jane H. Buckner13, Vivian P. Bykerk7, Edward DiCarlo14, James Dolan15, Andrew Filer10, Thomas M. Eisenhaure4, Gary S. Firestein10, Susan M. Goodman7, Ellen M. Gravallese5, Peter K. Gregersen16, Joel M. Guthridge17, Nir Hacohen4, V. Michael Holers18, Laura B. Hughes12, Lionel B. Ivashkiv19,20, Eddie A. James13, Judith A. James17, A. Helena Jonsson5, Josh Keegan15, Stephen Kelly21, Yvonne C. Lee22, James A. Lederer15, David J. Lieb4, Arthur M. Mandelin II22, Mandy J. McGeachy23, Michael A. McNamara7, Nida Meednu9, Larry Moreland23, Jennifer P. Nguyen15, Akiko Noma4, Dana E. Orange24, Harris Perlman22, Costantino Pitzalis25, Javier Rangel-Moreno9, Deepak A. Rao5, Mina Ohani-Pichavant26,27, Christopher Ritchlin9, William H. Robinson26,27, Karen Salomon-Escoto28, Anupamaa Seshadri15, Jennifer Seifert18, Darren Tabechian9, Jason D. Turner10, Paul J. Utz26,27, Kevin Wei5. 9Division of Allergy, Immunology and Rheumatology, Department of Medicine, University of Rochester Medical Center, Rochester, NY, USA. 10Rheumatology Research Group, Institute for Inflammation and Aging, NIHR Birmingham Biomedical Research Center and Clinical Research Facility, University of Birmingham, Queen Elizabeth Hospital, Birmingham, UK. 11Department of Medicine, Division of Rheumatology, Allergy and Immunology, University of California, San Diego, La Jolla, CA, USA. 12Division of Clinical Immunology and Rheumatology, Department of Medicine, Translational Research University of Alabama at Birmingham, Birmingham, AL, USA. 13Translational Research Program, Benaroya Research Institute at Virginia Mason, Seattle, WA, USA. 14Department of Pathology and Laboratory Medicine, Hospital for Special Surgery, New York, NY, USA. 15Department of Surgery, Brigham and Women's Hospital and Harvard Medical School, Feinstein Boston, MA, USA. 16Feinstein Institute for Medical Research, Northwell Health, Manhasset, NY, USA. 17Department of Arthritis & Clinical Immunology, Oklahoma Medical Research Foundation, Oklahoma City, OK, USA. 18Division of Rheumatology, University of Colorado School of Medicine, Aurora, CO, USA. 19Graduate Program in Immunology and Microbial Pathogenesis, Weill Cornell Graduate School of Medical Sciences, New York, NY, USA. 20David Z. Rosensweig Genomics Research Center, Hospital for Special Surgery, New York, NY, USA. 21Department of Rheumatology, Barts Health NHS Trust, London, UK. 22Division of Rheumatology, Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, IL, USA. 
23Division of Rheumatology and Clinical Immunology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA. 24The Rockefeller University, New York, NY, USA. 25Centre for Experimental Medicine & Rheumatology, William Harvey Research Institute, Queen Mary University of London, London, UK. 26Division of Immunology and Rheumatology, Department of Medicine, Stanford University School of Medicine, Palo Alto, CA, USA. 27Immunity, Transplantation, and Infection, Stanford University School of Medicine, Stanford, CA, USA. 28Division of Rheumatology, Department of Medicine, University of Massachusetts Medical School, Worcester, MA, USA. This work is supported in part by funding from the National Institutes of Health (NIH) Grants UH2AR067677, U01HG009379, and R01AR063759 (to S.R.) and NIH R01AI148435, UH2 AR067691, Carson Family Trust, and Leon Lowenstein Foundation (to L.T.D.). Laura T. Donlin and Soumya Raychaudhuri jointly supervised this work. Center for Data Sciences, Brigham and Women's Hospital, Boston, MA, 02115, USA Fan Zhang, Joseph R. Mears, Jessica I. Beynor, Ilya Korsunsky, Aparna Nathan & Soumya Raychaudhuri Division of Genetics, Department of Medicine, Brigham and Women's Hospital, Boston, MA, 02115, USA Department of Biomedical Informatics, Harvard Medical School, Boston, MA, 02115, USA Broad Institute of MIT and Harvard, Cambridge, MA, 02142, USA Division of Rheumatology, Inflammation, and Immunity, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, 02115, USA Graduate Program in Physiology, Biophysics and Systems Biology, Weill Cornell Graduate School of Medical Sciences, New York, NY, 10065, USA Lorien Shakib & Laura T. Donlin Arthritis and Tissue Degeneration, Hospital for Special Surgery, New York, NY, USA Sara Shanaj & Laura T. Donlin Arthritis Research UK Centre for Genetics and Genomics, Centre for Musculoskeletal Research, The University of Manchester, Manchester, UK Soumya Raychaudhuri Fan Zhang Joseph R. Mears Lorien Shakib Jessica I. Beynor Sara Shanaj Ilya Korsunsky Aparna Nathan Laura T. Donlin Accelerating Medicines Partnership Rheumatoid Arthritis and Systemic Lupus Erythematosus (AMP RA/SLE) Consortium F.Z. and S.R. conceptualized the study and designed the statistical strategy. F.Z. and J.R.M performed the analyses. J.R.M. collected public single-cell datasets. F.Z., J.R.M., and S.R. wrote the initial manuscript. L.T.D., A.N., I.K., J.I.B., L.S., and S.S. edited the draft. L.T.D obtained blood samples from human subjects. L.T.D, L.S., J.I.B., and S.S. organized processing, transportation, and experiment of the blood samples. S.R. and L.T.D. supervised the work. All authors read and approved the final manuscript. Correspondence to Laura T. Donlin or Soumya Raychaudhuri. Healthy blood samples were purchased from the New York Blood Center (NYBC), provided by volunteer donors who consented for the blood to be used in biomedical research and other uses at the discretion of NYBC. The samples are deidentified by the NYBC, the research study investigators had no access to identifiable private information. As per the NIH guidelines, this does not constitute Human Subjects research. For the stimulated blood-derived macrophage experiment, co-cultures with synovial fibroblast involved synovial fibroblast lines generated from patients with RA undergoing arthroplasty (HSS IRB 14-033). Patients provided informed consent and all appropriate measures were taken for compliance with the Helsinki Declaration. 
Supplementary information (additional file captions):
Basic information and demography of multiple single-cell datasets.
Overall integration of immune cells from multiple scRNA-seq datasets.
Figure S2. Quantification of the performance of all cell type multi-disease tissue integration.
Figure S3. Tissue-level macrophage integrative analysis of multiple scRNA-seq datasets.
Figure S4. Heterogeneity of shared inflammatory macrophages from multiple tissues.
Figure S5. Single-cell differential gene expression analysis of comparing inflammatory macrophages with non-inflammatory macrophages within each individual tissue source.
Figure S6. Examination of the CXCL10+ CCL2+ macrophage marker genes in additional diseased cohort studies.
Figure S7. Experimental design and quality control of human blood-derived macrophages stimulated by different conditions.
Figure S8. Integrative analysis of tissue-level macrophages and human blood-derived macrophages.
Figure S9. Assessment of previously reported stimulated macrophage spectrum analysis and alignment of macrophages from different disease tissues to a trajectory.
Cell type marker genes and statistics.
Number of cells per cluster, per disease and tissue for macrophage integration analysis.
Macrophage cluster marker genes and relative statistics.
Hashtag antibodies for the 10X single-cell cell hashing experiment.
Details for the 10X single-cell cell hashing experiment.
Cite this article: Zhang, F., Mears, J.R., Shakib, L. et al. IFN-γ and TNF-α drive a CXCL10+ CCL2+ macrophage phenotype expanded in severe COVID-19 lungs and inflammatory diseases with tissue inflammation. Genome Med 13, 64 (2021). https://doi.org/10.1186/s13073-021-00881-3
Keywords: Single-cell transcriptomics; Single-cell multi-disease tissue integration; Inflammatory diseases; Macrophage stimulation; Macrophage heterogeneity; Coronavirus
CommonCrawl
\begin{definition}[Definition:Pythagorean Equation] The '''Pythagorean equation''' is the Diophantine equation: :$x^2 + y^2 = z^2$ where $x, y, z$ are integers such that $x, y, z > 0$. Solutions of this equation are known as Pythagorean triples. If, in addition, $\left({x, y, z}\right)$ is a primitive Pythagorean triple, then $\left({x, y, z}\right)$ is known as a '''primitive solution''' of $x^2 + y^2 = z^2$. \end{definition}
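For example, $\left({3, 4, 5}\right)$ is a primitive solution of the Pythagorean equation, since:
:$3^2 + 4^2 = 9 + 16 = 25 = 5^2$
and $3$, $4$ and $5$ have no common divisor greater than $1$; by contrast, $\left({6, 8, 10}\right)$ is a solution which is not primitive.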
ProofWiki
Fontaine–Mazur conjecture
In mathematics, the Fontaine–Mazur conjectures are conjectures introduced by Fontaine and Mazur (1995) about when p-adic representations of Galois groups of number fields can be constructed from representations on étale cohomology groups of varieties.[1][2] Some cases of this conjecture in dimension 2 were already proved by Dieulefait (2004).
References
1. Koch, Helmut (2013). "Fontaine-Mazur Conjecture". Galois theory of p-extensions. Springer Science & Business Media. p. 180. ISBN 9783662049679.
2. Calegari, Frank (2011). "Even Galois representations and the Fontaine–Mazur conjecture" (PDF). Inventiones Mathematicae. 185 (1): 1–16. arXiv:1012.4819. Bibcode:2011InMat.185....1C. doi:10.1007/s00222-010-0297-0. S2CID 8937648. arXiv preprint
• Fontaine, Jean-Marc; Mazur, Barry (1995), "Geometric Galois representations", in Coates, John; Yau, S.-T. (eds.), Elliptic curves, modular forms, & Fermat's last theorem (Hong Kong, 1993), Series in Number Theory, vol. 1, Int. Press, Cambridge, MA, pp. 41–78, ISBN 978-1-57146-026-4, MR 1363495
• Dieulefait, Luis V. (2004). "Existence of families of Galois representations and new cases of the Fontaine-Mazur conjecture". Journal für die reine und angewandte Mathematik (Crelle's Journal). 2004 (577). arXiv:math/0304433. Bibcode:2003math......4433D. doi:10.1515/crll.2004.2004.577.147. S2CID 16949796.
External links
• Robert Coleman's lectures on the Fontaine–Mazur conjecture
Wikipedia
Mancala is a family of board games played around the world, sometimes called sowing games, or count-and-capture games, which describes the game play. One simple variant is a solitaire game called Tchoukaillon which was described by Véronique Gautheron. Tchoukaillon is played on a board with an arbitrary number of bins numbered $1, 2, \ldots$, containing $b[1], b[2], \ldots$ counters respectively and an extra empty bin called the Roumba on the left. A single play consists of choosing a bin, $n$, for which $b[n] = n$ (indicated by the darker circles in the diagram) and distributing the counters one per bin to the bins to the left including the Roumba (getting the next diagram below in the figure above). If there is no bin where $b[n] = n$, then the board is a losing board.

If there is a sequence of plays which takes the initial board distribution to one in which every counter is in the Roumba, the initial distribution is called a winnable board. In the example above, $0,1,3,\ldots$ is a winnable board (the "$\ldots$" indicates all the bins to the right of bin $3$ contain $0$). For each total number of counters, there is a unique distribution of the counters to bins to make a winnable board for that total count (so $0,1,3,\ldots$ is the only winnable board with $4$ counters). Write a program which finds the winnable board for a total count input.

The first line of input contains a single integer $P$, ($1 \le P \le 200$), which is the number of data sets that follow. Each data set should be processed identically and independently. Each data set consists of a single line of input. It contains the data set number, $K$, followed by a single space, followed by the total count $N$ ($1 \le N \le 2000$) of the winnable board to be found.

For each data set there will be multiple lines of output. The first line of output contains the data set number, $K$, followed by a single space, followed by the index of the last bin, $B$, with a non-zero count. Input will be chosen so that $B$ will be no more than $80$. The first line of output for each data set is followed by the bin counts $b[1], b[2], \ldots, b[B]$, 10 per line separated by single spaces.
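One way to construct the winnable board is to run the game backwards from the empty board: a reverse play puts $n$ counters into the smallest empty bin $n$, taking one counter from each bin to its left and one from the Roumba, so every reverse play adds exactly one counter to the board. Repeating this $N$ times yields the winnable board with $N$ counters; for $N = 4$ it produces $0, 1, 3$ as in the example above. The sketch below illustrates this greedy reverse-play rule; the function names and input handling are illustrative and not part of the problem statement.

```python
import sys

def winnable_board(n):
    """Build the winnable Tchoukaillon board holding n counters
    by reversing plays; each reverse play targets the smallest empty bin."""
    b = []  # b[i] holds the count of bin i + 1
    for _ in range(n):
        # find the smallest empty bin (1-indexed), extending the board if needed
        i = 0
        while i < len(b) and b[i] != 0:
            i += 1
        if i == len(b):
            b.append(0)
        # reverse a play at bin i + 1: it receives i + 1 counters,
        # one taken from each bin to its left and one from the Roumba,
        # so the board total grows by exactly one counter
        b[i] = i + 1
        for j in range(i):
            b[j] -= 1
    return b

def main():
    data = sys.stdin.read().split()
    p = int(data[0])
    pos = 1
    out = []
    for _ in range(p):
        k, n = int(data[pos]), int(data[pos + 1])
        pos += 2
        board = winnable_board(n)
        out.append(f"{k} {len(board)}")          # last bin is always non-empty
        for start in range(0, len(board), 10):   # 10 counts per line
            out.append(" ".join(map(str, board[start:start + 10])))
    print("\n".join(out))

if __name__ == "__main__":
    main()
```

With $N \le 2000$ and $B \le 80$, each board is built in well under a millisecond, so the straightforward loop above is more than fast enough for the given limits.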
CommonCrawl
Health literacy among different age groups in Germany: results of a cross-sectional survey Eva-Maria Berens ORCID: orcid.org/0000-0001-9181-27061, Dominique Vogt1, Melanie Messer1, Klaus Hurrelmann2 & Doris Schaeffer1 Health literacy is of increasing importance in public health research. It is a necessary pre-condition for the involvement in decisions about health and health care and related to health outcomes. Knowledge about limited health literacy in different age groups is crucial to better target public health interventions for subgroups of the population. However, little is known about health literacy in Germany. The study therefore assesses the prevalence of limited health literacy and associated factors among different age groups. The Health Literacy Survey Germany is a cross-sectional study with 2,000 participants aged 15 years or older in private households. Perceived health literacy was assessed via computer-assisted personal interviews using the HLS-EU-Q-47 questionnaire. Descriptive analyses, chi-square tests and odds ratios were performed stratified for different age groups. The population affected by limited perceived health literacy increases by age. Of the respondents aged 15–29 years, 47.3 % had limited perceived health literacy and 47.2 % of those aged 30–45 years, whereas 55.2 % of the respondents aged 46–64 years and 66.4 % aged 65 years and older showed limited perceived health literacy. In all age groups, limited perceived health literacy was associated with limited functional health literacy, low social status, and a high frequency of doctor visits. The results suggest a need to further investigate perceived health literacy in all phases of the life-course. Particular attention should be devoted to persons with lower social status, limited functional health literacy and/or a high number of doctor visits in all age groups. Health literacy is the competence to access, understand, appraise, and apply health information in order to take decisions in everyday life concerning healthcare, disease prevention, and health promotion [1]. This definition goes beyond functional literacy [2]. Health literacy is associated with the effectiveness of the use of preventive and other health services and has consequences for the subjective health status and the mortality of a population [3–6]. Socioeconomic factors such as a low educational level, low social status and migrant background are associated with limited health literacy [7, 8]. Internationally, health literacy has been an important topic in public health research in the past decades. However, in Germany, the biggest country of the European Union, data about health literacy in the general population are still scarce. International studies have shown that limited health literacy affects large parts of the population [9–11]. According to the European Health Literacy Survey (HLS-EU) almost every second EU citizen had limited health literacy and thus perceived difficulties accessing, understanding and using health information [8, 9]. In Great Britain, more than 50 % showed marginal or low health literacy-skills [7]. Results for health literacy in Germany are scarce. They focus on certain subgroups only [9, 12, 13] or use a short version to measure health literacy [14]. There is also evidence that health literacy declines with increasing age [8]. The decline in health literacy in older age groups is associated with decreasing cognitive functionality and potential health impairments [15, 16]. 
Nevertheless, adequate health literacy and knowledge about associated factors are relevant in all phases of the life course in order to maintain health and getting involved in decisions about health and health care. However, to date perceived health literacy has not been sufficiently addressed among different age groups and data about factors associated with limited health literacy stratified by different age groups is scarce, especially for middle-aged adults. The aim of this study is therefore to fill this gap and provide data on perceived health literacy stratified for different age groups and analysing its relation to possible determinants such as socio-economic factors or doctor visits in Germany for the first time. Study population and design For these analyses, data of the Health Literacy Survey Germany (HLS-GER) were used. In total, 2,000 respondents aged 15 years or older were included in the representative sample in July and August 2014. 258 sample points were randomly selected from a total of 53,000 across Germany, each containing about 700 households. For each sample point, a starting address was selected at random and every third household was selected by a random-walk procedure excluding the starting address. In each of these households, the person who had the most recent birthday was selected. Computer-assisted personal interviews (CAPI) were conducted in German language. The mean interview duration was 53 min. The response rate was 64.9 %. Health literacy is based on a multidimensional concept taking into account the self-perceived difficulty to perform health information tasks. It was assessed by the HLS-EU-Q-47 questionnaire [17]. Respondents were asked to rate the perceived difficulty of various aspects concerning accessing, understanding, appraising, and applying health information. Item examples are: on a scale from very easy to very difficult, how easy would you say it is to find information about symptoms of illnesses that concern you or to judge which health screenings you should have [8]. In total, the questionnaire comprises 47 items. The degree of difficulty was assessed on a four-point Likert scale from very easy to very difficult. The health literacy score was calculated for respondents with at least 80 % valid items of perceived health literacy. The index was transformed as recommended by the European Health Literacy Project using the following formula: $$ \operatorname{I}=\left(\operatorname{X}-1\right)\ast \frac{50}{3} $$ The health literacy index ranges from 1 to 50; higher values indicate better perceived health literacy. Internal consistency of the instrument has shown to be good (Cronbach's alpha: 0.97). For the analyses, different levels of perceived health literacy were defined as recommended by the HLS-EU-consortium [8]. A health literacy index of 0 to 25 was defined as 'inadequate' perceived health literacy, values from > 25 to 33 points as 'problematic'. Further, health literacy scores of > 33 to 42 were defined as 'sufficient', and the remaining interval (> 42 to 50) as 'excellent' perceived health literacy. Questionnaire development and criteria for thresholds are described in detail elsewhere [8, 17]. Social status, gender, education, migrant background, functional health literacy and frequency of doctor visits were included in the analyses as socio-demographic covariates. Following the HLS-EU [17] functional health literacy was defined as basic objective numeracy and literacy skills in a health-related context in this study. 
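To make the scoring concrete, the index transformation and level assignment described above can be sketched as follows. These helper functions are illustrative and not code used in the study; it is assumed that X in the formula is the mean of the valid item scores and that item responses are coded 1 = very difficult to 4 = very easy, with missing answers excluded and the 80 % completeness rule applied.

```python
def hls_index(responses, n_items=47):
    """Perceived health literacy index from HLS-EU-Q-47 item responses.

    responses: one entry per item, an int 1-4 (assumed coding:
    1 = very difficult ... 4 = very easy) or None if missing.
    Returns None when fewer than 80 % of the items have valid answers.
    """
    valid = [r for r in responses if r is not None]
    if len(valid) < 0.8 * n_items:
        return None
    mean = sum(valid) / len(valid)   # X in the formula above
    return (mean - 1) * 50 / 3       # I = (X - 1) * 50/3

def hls_level(index):
    """Level thresholds as recommended by the HLS-EU consortium."""
    if index is None:
        return None
    if index <= 25:
        return "inadequate"
    if index <= 33:
        return "problematic"
    if index <= 42:
        return "sufficient"
    return "excellent"

def is_limited(index):
    """'Limited' perceived health literacy = inadequate or problematic."""
    return index is not None and index <= 33
```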
It was operationalized by the Newest Vital Sign Test measuring the ability to read and apply information from an ice cream nutrition label. It comprises six questions testing numeracy and literacy skills [18]. The test was developed and validated in English and Spanish [18]. For this study the validated UK-Version was used [19]. A score of 0 to 3 was defined as limited functional health literacy level while a score of 4 to 6 was categorized as adequate functional health literacy. The UK-Version of the NVS was translated by two independent professional translators into German and then verified in a panel with the German speaking research team of the HLS-EU, the HLS-EU Survey Coordinator, the translators and other relevant health professionals [17]. Face validity and cognitive pre-tests for less educated young people, older people and migrants were conducted in a previous German study [20]. Internal consistency of the German Version of the NVS in this study was acceptable (Cronbach's alpha 0.73) and comparable to the original UK-Version (Cronbach's alpha 0.74) [19]. Social status was assessed using a 10-point scale ranging from 1 (lowest position in society) to 10 (highest position in society) [8]. An index of 1 to 4 was defined as low social status; values from 5 to 7 were categorized as medium social status; a score greater than 8 was defined as high social status. Gender was categorized as 'female' or 'male'. Educational level was assessed using the International Standard Classification of Education (ISCED-97) [21], which allows for cross-national comparisons of educational levels [22]. ISCED classifies seven levels of educational training, including vocational training. A detailed description of the levels is given by Schneider and Kogan [22]. For the present analysis, educational level was categorized into three groups. Low educational level comprises ISCED levels 0 to 2. This covers education up to lower secondary stage, which often coincides with the end of compulsory education. Medium educational level comprises ISCED levels 3 and 4, which covers upper secondary (level 3) and post-secondary (level 4) education. High educational level comprises ISCED levels 5 and 6, both of which describe tertiary education [22]. Migrant background was assessed by the respondents own and parental country of birth. Respondents born abroad (first generation) or with at least one parent born abroad (second generation) were categorized as having a migrant background. Respondents born in Germany, and whose parents both also were born in Germany, were categorized as not having a migrant background. This conceptualization of migrant background reflects to some extent a person's personal, cultural and language background regardless of their citizenship. Frequency of doctor visits was assessed as the number of contacts with a general practitioner during the last 12 months. The answers were categorized into 0 to 2, 3 to 5 and 6 or more contacts. Age was recorded in years and categorized into four age groups for the analyses. The youngest age group comprised individuals aged 15 to 29 years and is referred to as 'adolescents' given that they are in transition into consumer role, political role, labour force, and having an own family and thus gradually becoming financially and emotionally independent [23]. Individuals aged 30 to 45 years represent a population group with increasing obligations in family organisation, labour market and political and civil engagement. They are labelled as 'young adults'. 
Respondents between 46 and 64 years were grouped into 'middle-aged adults'. Their status can be defined by complex obligations, stabilisation of life plans and saturation of growth [24]. Individuals aged 65 years and older represent the seniors. Most of them are in retirement, face a stepwise reduction of opportunities and physical possibilities and experience at least some severe problems of health management. Data were analysed using SPSS 23.0. The data were weighted by using Iterative Proportional Fitting to be representative for age, sex, and federal state as compared to the German Microcensus [25]. Descriptive analyses were performed to characterize the study population (Table 1) and describe the distribution of levels of health literacy and scores stratified by age groups. One-way ANOVA was calculated for mean differences between age groups (Table 2). For further analyses, inadequate and problematic levels of health literacy were categorized as 'limited health literacy'. The association of the covariates with limited health literacy was assessed for each of the different age groups using chi-square tests (Table 3) and multivariate logistic regression (Table 4). There was no multicollinearity between covariates in any of the models. Table 1 Study population of the HLS-GER (n = 2,000) Table 2 Health literacy* scores and levels stratified by age groups Table 3 Factors associated with limited health literacy* stratified by age groups - results of bivariate analyses Table 4 Factors associated with limited health literacy* stratified by age groups – results of the multivariate logistic regression** The mean age of the respondents was 48.2 years. In terms of the distribution of age groups, 19.7 % were adolescents, 24.9 % were young adults, 31.6 % were middle-aged adults, and 23.8 % were seniors. All sample characteristics are displayed in Table 1. Health literacy scores could be calculated for 1,946 respondents. While adolescents have an average health literacy score of 33.8 and young adults of 34.0, middle-aged adults have a mean score of 32.8 and seniors of 30.7. Scores and levels of perceived health literacy decrease with increasing age (Table 2). There is great variation in health literacy levels between age groups. While 6.8 % of the adolescents and 7.0 % of the young adults were classified having inadequate perceived health literacy, 9.4 % of the middle-aged adults and 15.2 % of the seniors were placed in this category. Only 3.0 % of the seniors were classified as having excellent perceived health literacy, compared to 10.3 % of the adolescents (Table 2). Limited perceived health literacy was found among 47.3 % of the adolescents and 47.2 % of the young adults. Furthermore, 55.2 % of the middle-aged adults and 66.4 % of the seniors had limited perceived health literacy (Table 3). Among all age groups, limited functional health literacy, low social status, a migrant background and a high number of doctor visits were associated with limited perceived health literacy. For example, 67 % of the adolescents with limited functional health literacy had limited perceived health literacy compared to 44 % of those with adequate functional health literacy. Furthermore, 69 % of the adolescents with low social status and 37 % of those with high social status were of limited perceived health literacy. Among the subsample of adolescents with migrant background, 63 % were limited in their perceived health literacy, whereas this was the case for 46 % of those without migrant background. 
Similarly, more than 60 % of those with more than two doctor visits had limited perceived health literacy compared to 43% of the adolescents with a maximum of two doctor visits. Education was associated with limited perceived health literacy only among adults. No statistically significant relation of gender and limited perceived health literacy could be observed in the bivariate analyses (Table 3). More than two doctor visits and low social status were statistically significantly associated with limited perceived health literacy among all age groups in the multivariate model (Table 4). Limited functional health literacy was associated with limited perceived health literacy among adults and seniors in the adjusted model. Migrant background was associated with limited perceived health literacy only among adults. Taking other socio-demographic determinants, functional health literacy and doctor visits into account, there was no statistically significant effect of education on limited perceived health literacy in any of the age groups (Table 4). This study describes perceived health literacy and socio-demographic factors associated with it stratified by different age groups among a representative sample in Germany for the first time. The most important finding of this study is the different proportion of limited perceived health literacy among the four age groups representing groups of the population in various stages of their life course in Germany, which is in line with previous international studies [7, 15, 26]. Adolescents and young adults have higher levels of perceived health literacy compared to the two older age groups. Anyhow, almost half of the adolescents and young adults in our study – and thus in an early phase of their life – possess limited perceived health literacy. This is of special importance with regard to findings reporting an association between limited health literacy and lower use of preventive health services [27, 28]. Furthermore, our results show that there are subgroups among each age group having high proportions of limited perceived health literacy. For example, our study shows that, almost 70 % of adolescents with low social status have limited perceived health literacy. Among young adults, more than 70 % of those with a high number of doctor visits are also at special risk of having limited perceived health literacy. A high proportion of persons with migrant background also has limited perceived health literacy. Among middle-aged adults, for example, almost 75 % of those with migrant background are limited in their perceived health literacy. Functional health literacy skills play an important role especially among seniors. About 80 % of those with limited functional health literacy skills have a low perceived health literacy. Thus, our study indicates that levels of perceived health literacy are highly diverse among different subgroups in the same phase of the life course. The persistent relation of perceived health literacy with functional health literacy and social status has been indicated by previous research [8, 29] but not described in detail for different age groups. Our survey holds an interesting detail: Having more than two doctor visits in the last 12 months is statistically significantly associated with limited perceived health literacy among all age groups. 
A possible explanation for this might be that persons being confronted with decisions about health and health care perceive more difficulties in information tasks and thus show lower levels of perceived health literacy than persons hypothetically thinking about potential difficulties in health information processing but not having been confronted with such situations. Another reason for this could be that persons with limited perceived health literacy more often seek help from their doctor, as they are uncertain or have lower self-efficacy or an external locus of control. In addition, the result can be confounded by (perceived) health status, which could not be included in our analyses but is known to be related to health literacy [3–6]. Interestingly, the odds of having limited perceived health literacy when having more than two doctor visits were highest among young adults. This might be explained by an increasing number and complexity of health problems at this age [30], which is typically associated with many different life course decisions in the family, work and community area ('rush hour of life'). We did not find a statistically significant effect of education on limited perceived health literacy when also considering functional health literacy. This is not surprising, as functional health literacy subsumes current health-related numeracy and literacy skills instead of unspecific skills gained through formal (vocational) education possibly acquired decades ago. Therefore, the ability to use education, measured by functional health literacy, may overpower the effect of education alone. We found an association of migrant background with limited perceived health literacy, but only among adults. However, among adolescents and seniors we did not find an effect. An explanation could be that adolescents with migrant background are mostly second-generation and thus differ little from autochthonous young people concerning their general literacy and German language skills [25, 31]. Persons with migrant background in the other age groups may have lower proficiency in German, and the use of health information might therefore be perceived as more difficult. In addition, their expectations towards information and communication [31] and their skills in this regard might differ from those of the autochthonous population. An explanation for not showing a relation of migrant background and limited perceived health literacy among the subsample of seniors might be that respondents with migrant background in the oldest age group in our study may not have been representative with regard to their language skills. In particular, older persons with migrant background and poor German language skills may have been underrepresented, as they are difficult to recruit for research [32, 33]. Another reason could be that they have been living in Germany much longer and thus have better German language skills and have assimilated to the health care system. However, we could not include any variable verifying this, such as time since immigration, language proficiency or scientific or media literacy. The differences in perceived health literacy among the subgroups may also be explained by the use of different information resources and their reliability and clarity. Persons with migrant background, for example, are known to rely on doctors and family members when seeking health information and making decisions about health and health care [31].
Higher social class, not having a migrant background and high frequency of health care service use are for example associated with using the Internet for health-related matters [34]. The present study is the first representative survey in German households. Overall, our study shows a higher proportion of limited perceived health literacy compared to European average [8] which might be explained by different demands and complexity of the health care systems. Our study shows a higher proportion of limited perceived health literacy than other findings from Germany [8, 12, 14]. However, these studies were restricted to the German state of North Rhine-Westphalia, a single region that is not representative for Germany [8, 9], or only used a short version HLS-EU questionnaire to measure health literacy which comprises only a selection of items of the full version [12, 14]. Other studies were restricted to people with statutory health insurance [13], or older populations [12] only. A strength of this study is the use of the full version of the HLS-EU-Q-47 questionnaire and its performance by computer-assisted personal interviews, which might explain lower health literacy results compared to other studies using pen and paper survey methods [8]. The face-to-face interview situation used in the present study even enabled persons with inadequate reading abilities to take part. Furthermore, the HLS-EU-Q-47 instrument is a subjective self-report measurement tool reflecting perceived health literacy and designed to assess widespread competencies in dealing with health information. Thus, the results do not reflect functional health literacy. However, considering this important concept a measure of functional health literacy was included in the analyses. However, there are also certain limitations associated with the present study. The necessary dichotomization of health literacy certainly leads to loss of precision in calculations. The numbers in the subsamples among the different age groups (e.g. persons with migrant background) are small and our results do not allow drawing definite conclusions. However, this study gives interesting insights into subgroups and topic relevant for further investigation. The number of doctor visits and social status in our study are self-reported and therefore might be imprecise and underlie social desirability. However, perceived measures are important as they represent the user perspective on a topic. Due to the cross-sectional design, it is not possible to determine whether health literacy decreases with age or whether cohort effects such as the use of different information resources affect the health literacy level in different age groups. Our results suggest that there is a need to further investigate perceived health literacy in all phases of the life-course. In these age-specific studies, special attention needs to be devoted to subgroups such as people with low social status, limited functional health literacy and/or a high number of doctor visits. Further research is also needed taking into account the association of different information habits and resources as well as psychological concepts as health literacy was measured as a self-assessed measure. The relation of doctor visits and perceived health literacy should be further assessed considering objective measurements of the frequency of doctor visits and including different types of contacts to the health care system. 
There is also need for further investigation of factors relevant among persons with migrant background such as duration of stay, language proficiency and information culture. Our findings suggest that it is important to take into account age-specific differences in health literacy in future research. Instruments to measure perceived health literacy should be evaluated among different age groups and adapted accordingly. 95 % CI: 95 % Confidence interval CAPI: Computer-assisted personal interviews HLS-EU: European Health Literacy Survey HLS-EU-Q-47: Long version of the European Health Literacy Questionnaire HLS-GER: Health Literacy Survey Germany ISCED: International Standard Classification of Education Sørensen K, van den Broucke S, Fullam J, Doyle G, Pelikan J, Slonska Z, Brand H. Health literacy and public health: a systematic review and integration of definitions and models. BMC Public Health. 2012;12:80. doi:10.1186/1471-2458-12-80. Parker RM, Baker DW, Williams MV, Nurss JR. The test of functional health literacy in adults: a new instrument for measuring patients' literacy skills. J Gen Intern Med. 1995;10:537–41. Bostock S, Steptoe A. Association between low functional health literacy and mortality in older adults: longitudinal cohort study. Br Med J. 2012;344:e1602. doi:10.1136/bmj.e1602. Berkman ND, Sheridan SL, Donahue KE, Halpern DJ, Crotty K. Low health literacy and health outcomes: an updated systematic review. Ann Intern Med. 2011;155:97–107. doi:10.7326/0003-4819-155-2-201107190-00005. Advisory Council on the Assessment of Developments in the Healthcare System. Health literacy and functional health status among older adults. Arch Intern Med. 2005;165:1946–52. doi:10.1001/archinte.165.17.1946. DeWalt DA, Berkman ND, Sheridan S, Lohr K, Pignone M. Literacy and health outcomes: a systematic review of the literature. J Gen Intern Med. 2004;19:1228–39. Protheroe J, Whittle R, Bartlam B, Estacio EV, Clark L, Kurth J. Health literacy, associated lifestyle and demographic factors in adult population of an English city: a cross-sectional survey. 2016. doi:10.1111/hex.12440. Sørensen K, Pelikan JM, Röthlin F, Ganahl K, Slonska Z, Doyle G, et al. Health literacy in Europe: comparative results of the European health literacy survey (HLS-EU). Eur J Pub Health. 2015;25(6):1053–8. HLS-EU Consortium. Comparative Report of Health Literacy in eight EU member states. The European Health Literacy Survey HLS-EU (second revised and extended version). 2012. Rootman I, Gordon-El-Bihbety D. A Vision for a Health Literate Canada: Report of the Expert Panel on Health Literacy: Canadian Public Health Association. 2008. Kutner M, Greenberg E, Jin Y, Paulsen C. The health literacy of America's adults: results from the 2003 national assessment of adult literacy. Washington, DC: National Center for Education; 2006. Tiller D, Herzog B, Kluttig A, Haerting J. Health literacy in an urban elderly East-German population – results from the population-based CARLA study. BMC Public Health. 2015;15(1):259. Zok K. [Differences in health literacy: results of a representative survey among adults in the statutory health insurance system in Germany]. WIdO monitor. 2014. (article in German). Jordan S, Hoebel J. Health literacy of adults in Germany: findings from the German health update (GEDA) study. Bundesgesundheitsblatt, Gesundheitsforschung, Gesundheitsschutz. 2015;58(9):942–50. doi:10.1007/s00103-015-2200-z. (article in German). Kobayashi LC, Wardle J, Wolf MS, Wagner CV. 
Aging and functional health literacy: a systematic review and meta-analysis. J Gerontol B Psychol Sci Soc Sci. 2016;71:445–57. doi:10.1093/geronb/gbu161. Kobayashi LC, Smith SG, O'Conor R, Curtis LM, Park D, Wagner C, et al. The role of cognitive function in the relationship between age and health literacy: a cross-sectional analysis of older adults in Chicago, USA. BMJ Open. 2015;5:e007222. doi:10.1136/bmjopen-2014-007222. Sørensen K, van den Broucke S, Pelikan J, Fullam J, Doyle G, Slonska Z, et al. Measuring health literacy in populations: illuminating the design and development process of the European Health Literacy Survey Questionnaire (HLS-EU-Q). BMC Public Health. 2013;13:948. doi:10.1186/1471-2458-13-948. Weiss BD, Mays MZ, Merriam Castro K, Dewalt DA, Pignone MP, Mockbee J, Hale FA. Quick assessment of literacy in primary care: the newest vital sign. Ann Fam Med. 2005;3:514–22. doi:10.1370/afm.405. Rowlands G, Khazaezadeh N, Oteng-Ntim E, Seed P, Barr S, et al. Development and validation of a measure of health literacy in the UK: the newest vital sign. BMC Public Health. 2013;13:116. doi:10.1186/1471-2458-13-116. Messer M, Vogt D, Quenzel G, Schaeffer D. Health literacy among vulnerable target groups. Development and design of the HLS-NRW-Q questionnaire. Präv Gesundheitsf. 2016;11(2):110–6. doi:10.1007/s11553-016-0532-7 (article in German). OECD. Classifying educational programmes. 1999. http://www.oecd.org/edu/1841854.pdf. Accessed 03 May 2016. Schneider S, Kogan I. The International Standard Classification of Education 1997: challenges in the application to national data and the implementation in cross‐national surveys. Mannheim: MZES; 2008. Hurrelmann K, Quenzel G. Lebensphase Jugend: Eine Einführung in die sozialwissenschaftliche Jugendforschung. 11th ed. Beltz Juventa: Weinheim; 2012. Faltermaier T, Mayring P, Saup W, Strehmel P, Leplow B, Salisch MV. Entwicklungspsychologie des Erwachsenenalters. 3rd ed. Stuttgart: Kohlhammer Verlag; 2013. Statistisches Bundesamt. Bevölkerung und Erwerbstätigkeit. Bevölkerung mit Migrationshintergrund – Ergebnisse des Mikrozensus 2013 – Fachserie 1, Reihe 2.2. Wiesbaden: Statistisches Bundesamt; 2014. Kaphingst KA, Goodman MS, MacMillan WD, Carpenter CR, Griffey RT. Effect of cognitive dysfunction on the relationship between age and health literacy. Patient Educ Couns. 2014;95:218–25. doi:10.1016/j.pec.2014.02.005. Berkman ND, Sheridan SL, Donahue KE, Halpern DJ, Viera A, Crotty K, et al. Health literacy interventions and outcomes: an updated systematic review. Evid Rep Technol Assess (Full Rep). 2011;199:1–941. Scott TL, Gazmararian JA, Williams MV, Baker DW. Health literacy and preventive health care use among medicare enrolees in a managed care organization. Med Care. 2002;40:395–404. van der Heide I, Rademakers J, Schipper M, Droomers M, Sorensen K, Uiters E. Health literacy of Dutch adults: a cross sectional survey. BMC public health. 2013;13. doi:10.1186/1471-2458-13-179. World Health Organization (WHO). Summary - world report on ageing and health. World Health Organization (WHO)th ed. 2015. http://apps.who.int/iris/bitstream/10665/186468/1/WHO_FWC_ALC_15.01_eng.pdf. Accessed 23 May 2016. Berens E, Yilmaz-Aslan Y, Spallek J, Razum O. Determinants of mammography screening participation among Turkish immigrant women in Germany—a qualitative study reflecting key informants' and women's perspectives. Eur J Cancer Care. 2016;25:38–48. doi:10.1111/ecc.12334. Yilmaz-Aslan Y, Glodny S, Razum O. 
Soziale Netzwerkarbeit als alternatives Konzept für die Rekrutierung türkischer Migranten zu wissenschaftlichen Studien am Beispiel des Projektes saba. Hallesche Beiträge zur Gesundheits- und Pflegewissenschaft. 2009;8:636–53. Shaghaghi A, Bhopal RS, Sheikh A. Approaches to recruiting 'Hard-To-Reach' populations into re-search: a review of the literature. Health Promot Perspect. 2011;2:86–94. doi:10.5681/hpp.2011.009. Nolke L, Mensing M, Kramer A, Hornberg C. Sociodemographic and health-(care-)related characteristics of online health information seekers: a cross-sectional German study. BMC Public Health. 2015;15:31. doi:10.1186/s12889-015-1423-0. We acknowledge support for the Article Processing Charge by the Deutsche Forschungsgemeinschaft and the Open Access Publication Funds of Bielefeld University Library. We kindly thank the team at the Ludwig Boltzmann Institute of Health Promotion Research for providing the German version of the HLS-EU Questionnaire and for their support in data preparation. This research project was supported by the German Federal Ministry of Justice and Consumer Protection, Berlin, Germany. The funder had no role in study design, data collection, analysis, decision to publish, interpretation of data or preparation of the manuscript. The data supporting the conclusions of this article is included within the article. EMB cleaned the data, completed the statistical analyses, interpreted the data, and wrote the manuscript. MM and DV organised data collection, cleaned the data, contributed to the statistical analyses, interpreted the data, and helped to draft the manuscript. KH interpreted the data and helped to draft the manuscript. DS conceived the idea for the study, interpreted the data, and helped to draft the manuscript. All authors read and approved the final manuscript. The study was approved by the Ethics Committee of the University of Bielefeld (reference number 066). All potential respondents were contacted personally and thoroughly informed about the aim of the study, data processing, and the use of the data. Participation was voluntary and participants could refuse to participate. Respondents gave informed verbal consent to participate in the study which is in accordance with §4a paragraph 1 of the German Federal Data Protection Act. Adolescent participants gave consent to participate on their own. This is permitted as not the legal age is relevant in this context but the adolescent's capacity of discernment which is generally assumed for adolescents aged 15 years and above in accordance with established case-law. Department of Health Services Research and Nursing Science, School of Public Health, Bielefeld University, Universitaetsstrasse 25, 33615, Bielefeld, North-Rhine Westphalia, Germany Eva-Maria Berens, Dominique Vogt, Melanie Messer & Doris Schaeffer Hertie School of Governance, Friedrichstraße 180, 10117, Berlin, Germany Klaus Hurrelmann Eva-Maria Berens Dominique Vogt Melanie Messer Doris Schaeffer Correspondence to Eva-Maria Berens. Berens, EM., Vogt, D., Messer, M. et al. Health literacy among different age groups in Germany: results of a cross-sectional survey. BMC Public Health 16, 1151 (2016). https://doi.org/10.1186/s12889-016-3810-6 HLS-EU-Q
CommonCrawl
\begin{document} \authorrunninghead{Lai and Roach} \titlerunninghead{Construction of Bivariate Symmetric Wavelets} \title{Construction of bivariate symmetric orthonormal wavelets with short support} \author{Ming-Jun Lai\thanks{Supported by the National Science Foundation under grant DMS-9870187}} \affil{University of Georgia} \and \author{David W. Roach} \affil{Murray State University} \email{[email protected] and [email protected]} \abstract{ In this paper, we give a parameterization of the class of bivariate symmetric orthonormal scaling functions with filter size $6\times 6$ using the standard dilation matrix $2I$. In addition, we give two families of refinable functions which are not orthonormal but have associated tight frames. Finally, we show that the class of bivariate symmetric scaling functions with filter size $8\times 8$ can not have two or more vanishing moments.} \keywords{bivariate, nonseparable, symmetric, wavelets, vanishing moments} \begin{article} \input epsf.tex \def\psone#1#2{\centerline{ \epsfxsize #2in \epsfbox{#1} }} \def\pstwo#1#2#3#4#5#6{ \centerline{ #5\epsfxsize=#2in \epsfbox{#1} #6\epsfxsize=#4in \epsfbox{#3} }} \def3{3} \section{Introduction} \label{Section 1} The most common wavelets used for image processing are the tensor-product of univariate compactly supported orthonormal wavelets. Of this class of wavelets, only the Haar wavelet is symmetric which gives its associated filter the property of linear phase. Since Daubechies' work\cite{D}, numerous generalizations of wavelets have been developed including biorthogonal wavelets, multiwavelets, and bivariate wavelets. Since 1992, several examples of bivariate compactly supported orthonormal and biorthogonal wavelets have been constructed. See Cohen and Daubechies'93 \cite{CD} for nonseparable bidimensional wavelets, J.~Kova\v{c}evi\'c and M.~Vetterli'92\cite{KV} for nonseparable filters and wavelets based on a generalized dilation matrix, He and Lai'97\cite{HL97} for the complete solution of bivariate compactly supported wavelets with filter size up to $4\times 4$, Belogay and Wang'99\cite{BW} for a special construction of bivariate nonseparable wavelets for any given regularity, and Ayache'99 \cite{A} for nonseparable dyadic compactly supported wavelets with arbitrary regularity. See also Cohen and Schlenker'93\cite{CS}, Riemenschneider and Shen'97\cite{RS}, and He and Lai'98\cite{HL98} for bivariate biorthogonal box spline wavelets. It is well-known that in the univariate setting, there does not exist symmetric compactly supported orthonormal wavelets except Haar for dilation factor $2$. We are interested in the construction of symmetric wavelets in the bivariate setting with dilation matrix $2I$ which have compact support and vanishing moments. We start with a scaling funtion $\phi$. Let $$ \hat{\phi}(\omega_1, \omega_2)= \prod_{k=1}^\infty m(e^{\omega_1/2^k}, e^{\omega_2/2^k}) $$ be the Fourier transform of $\phi$, where $$ m(x,y)= \sum_{j=0}^N\sum_{k=0}^N c_{jk} x^j y^k $$ is a trigonometric polynomial satisfying $m(1,1)=1$. 
In addition, the trigonometric polynomial $m(x,y)$ satisfies the orthonormality condition $$|m(x,y)|^2+|m(-x,y)|^2+|m(x,-y)|^2+|m(-x,-y)|^2=1.$$ Let $\psi_i(x,y)$ be the corresponding wavelet satisfying $$ \hat{\psi}_i(\omega_1,\omega_2)= m_i(e^{\omega_1/2},e^{\omega_2/2}) \hat{\phi}(\omega_1/2, \omega_2/2), i=1,2,3, $$ where the $m_i$ are trigonometric polynomials such that the following matrix $$ \left[\begin{array}{cccc} m(x,y) & m(-x,y) & m(x,-y) & m(-x,-y) \cr m_1(x,y) & m_1(-x,y) & m_1(x,-y) & m_1(-x,-y) \cr m_2(x,y) & m_2(-x,y) & m_2(x,-y) & m_2(-x,-y) \cr m_3(x,y) & m_3(-x,y) & m_3(x,-y) & m_3(-x,-y) \cr \end{array}\right] $$ is unitary. Moreover, we are interested in symmetric scaling functions $\phi$ with a certain number of vanishing moments in the sense that their associated trigonometric polynomial $m(x,y)$ satisfies $$ m(1/x,1/y)= x^{-N}y^{-N} m(x,y) $$ as well as $$ \left.{\partial^k \over \partial x^k} m(x,y)\right|_{x=-1}= \left.{\partial^k \over \partial y^k}m(x,y)\right|_{y=-1}=0, \hspace{.25in} 0\le k\le M-1. $$ The symmetry condition provides $\phi$ with the property of linear phase and the vanishing moment conditions provide $\phi$ with polynomial reproduction up to degree $M-1$. If $m(x,y)$ satisfies the symmetric property, then the associated wavelets can easily be found by using $\hat{\phi}$ and \begin{eqnarray*} m_1(x,y)&=&m(-x,y)\\ m_2(x,y)&=&x\cdot m(x,-y)\\ m_3(x,y)&=&x\cdot m(-x,-y). \end{eqnarray*} In summary, we are looking for trigonometric polynomials $m(x,y)= \displaystyle \sum_{j=0}^N\sum_{k=0}^N c_{jk} x^j y^k$ which satisfy the following properties: \begin{enumerate} \begin{enumerate} \item Existence: $m(1,1)=1$ \label{(i)}. \item Orthogonality: $\displaystyle |m(x,y)|^2+|m(-x,y)|^2+|m(x,-y)|^2+ |m(-x,-y)|^2=1$ \label{(ii)}. \item Symmetry: $m(1/x,1/y)=x^{-N}y^{-N}m(x,y)$\label{(iii)}. \item $M$ vanishing moments: $m(x,y)=(x+1)^M(y+1)^M\tilde{m}(x,y)$ where $\tilde{m}(x,y)$ is another trigonometric polynomial.\label{(iv)} \end{enumerate} \end{enumerate} In this paper, we construct a complete parameterization of all trigonometric polynomials $m(x,y)$ which satisfy the symmetry condition, the vanishing moment condition, and the orthonormality condition for $N=5$ and $M=1$. Within this class, we identify a two-parameter family which contains the trigonometric polynomials associated with scaling functions. Outside of this two-parameter family, we show that the remaining trigonometric functions are not associated with scaling functions but instead determine families of tight frames. Finally, we show that there are no trigonometric polynomials for $N=7$ and $M=2$, and consequently no symmetric bivariate scaling functions with two vanishing moments for the support size we are considering. The paper is organized as follows. Section 2 gives the parameterized solution when $N=5$ and $M=1$. The problem is broken down into four cases which are dealt with in turn. Section 3 discusses the orthonormality of the solutions from Section 2 and concludes with a numerical experiment comparing Haar, D4, and one solution from Section 2. The last two sections show that these trigonometric polynomials cannot have higher vanishing moments(i.e $M\geq 2$) even for $N=7$. \section{The $6\times 6$ Case} \label{Section 2} Our goal is to parameterize the coefficients of the trigonometric polynomials which satisfy properties (i)-(iv). We begin our investigation with trigonometric polynomials whose filter size is $6\times 6$ with one vanishing moment, i.e. $N=5$ and $M=1$. 
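As a point of reference, all four properties can be verified by hand in the smallest case $N=1$, namely for the tensor-product Haar mask
$$
m(x,y)=\frac{(1+x)(1+y)}{4}.
$$
Indeed, $m(1,1)=1$; since $|1+x|^2+|1-x|^2=4$ for $|x|=1$, we have
$$
|m(x,y)|^2+|m(-x,y)|^2+|m(x,-y)|^2+|m(-x,-y)|^2
=\frac{\bigl(|1+x|^2+|1-x|^2\bigr)\bigl(|1+y|^2+|1-y|^2\bigr)}{16}=1;
$$
moreover, $m(1/x,1/y)=x^{-1}y^{-1}m(x,y)$ and $m(x,y)=(x+1)(y+1)\tilde{m}(x,y)$ with $\tilde{m}\equiv \frac14$, so (i)--(iv) hold with $N=1$ and $M=1$.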
The case $N=1$ is trivially the tensor product Haar function, and the case $N=3$ has eight singleton solutions given by He and Lai'97 \cite{HL97}. Let us express $m(x,y)$ in its polyphase form, i.e., \begin{eqnarray*} m(x,y)&=&f_a(x^2,y^2)+xf_b(x^2,y^2)+ yf_c(x^2,y^2)+ xyf_d(x^2,y^2)\\ &=&\left[\begin{array}{c}1\\y\\y^2\\y^3\\y^4\\y^5\end{array}\right]^T \left[\matrix{ a_0 & b_0 & a_1 & b_1 & a_2 & b_2\cr c_0 & d_0 & c_1 & d_1 & c_2 & d_2\cr a_3 & b_3 & a_4 & b_4 & a_5 & b_5\cr c_3 & d_3 & c_4 & d_4 & c_5 & d_5\cr a_6 & b_6 & a_7 & b_7 & a_8 & b_8\cr c_6 & d_6 & c_7 & d_7 & c_8 & d_8 }\right]\left[\begin{array}{c}1\\x\\x^2\\x^3\\x^4\\x^5\end{array}\right] \end{eqnarray*} where \begin{eqnarray*} f_\nu(x,y) &=&\nu_0+\nu_1x+\nu_2x^2+\nu_3y+ \nu_4xy+\nu_5x^2y+\nu_6y^2+\nu_7xy^2+\nu_8x^2y^2, \end{eqnarray*} for $\nu=a,b,c,d$. The symmetry condition (iii) reduces the number of unknowns by half since $m(x,y)$ becomes $$ m(x,y)=\left[\begin{array}{c}1\\y\\y^2\\y^3\\y^4\\y^5\end{array}\right]^T \left[\matrix{ a_0 & b_0 & a_1 & b_1 & a_2 & b_2\cr b_8 & a_8 & b_7 & a_7 & b_6 & a_6\cr a_3 & b_3 & a_4 & b_4 & a_5 & b_5\cr b_5 & a_5 & b_4 & a_4 & b_3 & a_3\cr a_6 & b_6 & a_7 & b_7 & a_8 & b_8\cr b_2 & a_2 & b_1 & a_1 & b_0 & a_0 }\right]\left[\begin{array}{c}1\\x\\x^2\\x^3\\x^4\\x^5\end{array}\right]. $$ For convenience, we denote $\displaystyle \sum_{\nu=a,b} \nu = a + b$. Thus, by (i), we have \begin{equation} m(1,1)=2\sum^8_{i=0} \sum_{\nu=a,b} \nu_i = 2\sum^8_{i=0}(a_i + b_i) = 1. \label{(2.1)} \end{equation} By (ii), we have the following 13 nonlinear equations \begin{eqnarray} &\displaystyle\sum_{\nu=a,b}& \nu_0 \nu_8 = 0 \label{(2.2)}\\ &\displaystyle\sum_{\nu=a,b}& \nu_2 \nu_6 = 0 \label{(2.3)}\\ &\displaystyle\sum_{\nu=a,b}& (\nu_1\nu_6+\nu_2\nu_7) = 0 \label{(2.4)}\\ &\displaystyle\sum_{\nu=a,b}& (\nu_0\nu_7+\nu_1\nu_8) = 0 \label{(2.5)}\\ &\displaystyle\sum_{\nu=a,b}& (\nu_2\nu_3+\nu_5\nu_6) = 0 \label{(2.6)}\\ &\displaystyle\sum_{\nu=a,b}& (\nu_0\nu_5+\nu_3\nu_8) = 0 \label{(2.7)}\\ &\displaystyle\sum_{\nu=a,b}& (\nu_0\nu_6+\nu_1\nu_7+\nu_2\nu_8) = 0 \label{(2.8)}\\ &\displaystyle\sum_{\nu=a,b}& (\nu_0 \nu_2 + \nu_3\nu_5 + \nu_6\nu_8) = 0 \label{(2.9)}\\ &\displaystyle\sum_{\nu=a,b}& (\nu_1\nu_3 + \nu_2\nu_4 + \nu_4\nu_6 + \nu_5\nu_7) = 0 \label{(2.10)}\\ &\displaystyle\sum_{\nu=a,b}& (\nu_0\nu_4+\nu_1 \nu_5 + \nu_3 \nu_7+ \nu_4\nu_8) = 0 \label{(2.11)}\\ &\displaystyle\sum_{\nu=a,b}& (\nu_0\nu_3+\nu_1\nu_4 +\nu_2\nu_5+ \nu_3\nu_6+\nu_4\nu_7+\nu_5\nu_8)=0 \label{(2.12)}\\ &\displaystyle\sum_{\nu=a,b}& (\nu_0\nu_1+\nu_1\nu_2+\nu_3\nu_4 + \nu_4\nu_5+\nu_6\nu_7+\nu_7\nu_8) = 0 \label{(2.13)}\\ &\displaystyle\sum^8_{i=0}& \displaystyle\sum_{\nu=a,b} \nu^2_i = {1\over 8}. \label{(2.14)} \end{eqnarray} The first moment condition (iv) for $M=1$ yields the following six linear equations: \begin{eqnarray} &a_0 + a_1 + a_2 = b_0 + b_1 + b_2 \label{(2.15)}\\ &a_3 + a_4 + a_5 = b_3 + b_4 + b_5 \label{(2.16)}\\ &a_6 + a_7 + a_8 = b_6 + b_7 + b_8 \label{(2.17)}\\ &a_0 + a_3 + a_6 = b_2 + b_5 + b_8 \label{(2.18)}\\ &a_1 + a_4 + a_7 = b_1 + b_4 + b_7 \label{(2.19)}\\ &a_2 + a_5 + a_8 = b_0 + b_3 + b_6. \label{(2.20)} \end{eqnarray} We need to find the $a_i$'s and $b_i$'s which satisfy the equations (\ref{(2.1)})-(\ref{(2.20)}) simultaneously. We proceed by specifying necessary conditions derived from these equations. \begin{lemma}\label{Lemma 2.2} $\displaystyle \sum^8_{i=0} a_i \ =\ \displaystyle \sum^8_{i=0} b_i\ =\ {1\over 4}$. 
\end{lemma} \begin{proof} Adding (\ref{(2.15)})- (\ref{(2.17)}) together, we have $\displaystyle \sum^8_{i=0} a_i = \sum^8_{i=0} b_i$. By equation (\ref{(2.1)}), the result follows. \end{proof} Next, equations (\ref{(2.2)}), (\ref{(2.3)}), (\ref{(2.8)}), (\ref{(2.9)}), and (\ref{(2.14)}) imply \begin{equation} \sum_{\nu=a,b}(\nu_4^2 + (\nu_1 + \nu_7)^2 + (\nu_3 + \nu_5)^2 + (\nu_0 + \nu_2 + \nu_6 + \nu_8)^2) = {1\over 8}. \label{(2.21)} \end{equation} We use equations (\ref{(2.10)}) and (\ref{(2.11)}) to get \begin{equation} \sum_{\nu=a,b}(\nu_4(\nu_0 + \nu_2 + \nu_6 + \nu_8) + (\nu_1 + \nu_7)(\nu_3 + \nu_5)) = 0.\label{(2.22)} \end{equation} From equations (\ref{(2.4)}), (\ref{(2.5)}), and (\ref{(2.13)}), we have \begin{equation} \sum_{\nu=a,b}((\nu_1+\nu_7)(\nu_0 + \nu_2 + \nu_6 + \nu_8) + \nu_4(\nu_3 + \nu_5)) = 0.\label{(2.23)} \end{equation} Also, we have, using equations (\ref{(2.6)}), (\ref{(2.7)}), and (\ref{(2.12)}), \begin{equation} \sum_{\nu=a,b}( \nu_4(\nu_1 + \nu_7)+ (\nu_0 + \nu_2 + \nu_6 + \nu_8)(\nu_3 + \nu_5)) = 0.\label{(2.24)} \end{equation} \begin{lemma}\label{Lemma 2.3} \begin{eqnarray*} a_0 + a_2 + a_4 + a_6 + a_8 &=& {1\over 8} + {1\over 4\sqrt{2}} \cos \alpha,\\ b_0 + b_2 + b_4 + b_6 + b_8 &=& {1\over 8} + {1\over 4\sqrt{2}} \sin \alpha. \end{eqnarray*} \end{lemma} \begin{proof} By equation (\ref{(2.21)}) and (\ref{(2.22)}), we have $$ \sum_{\nu=a,b}\left((\nu_4+(\nu_0 + \nu_2 + \nu_6 + \nu_8))^2 + ((\nu_1 + \nu_7) + (\nu_3 + \nu_5))^2\right) = {1\over 8}. $$ By Lemma \ref{Lemma 2.2} and letting $a^* = a_0 + a_2 + a_4 + a_6 + a_8$ and $b^* = b_0 + b_2 + b_4 + b_6 + b_8$, we have $$ (a^*)^2 + ({1\over 4} - a^*)^2 + (b^*)^2 + ({1\over 4} - b^*)^2 = {1\over 8} $$ or $$ (a^* - {1\over 8})^2 + (b^* - {1\over 8})^2 = {1\over 32}. $$ Thus, we are able to conclude the proof. \end{proof} Let $\hat{a} = a_1+ a_4 + a_7$. Observe that equation (\ref{(2.19)}) implies $\hat{a}= b_1 + b_4 + b_7$. It follows from (\ref{(2.21)}) and (\ref{(2.24)}) that $$ \sum_{\nu=a,b} \left((\nu_4+(\nu_1 + \nu_7))^2 + ((\nu_0 + \nu_2 + \nu_6 + \nu_8)+ (\nu_3 + \nu_5))^2\right) = {1\over 8}. $$ Thus, $2(\hat{a})^2 + 2({1\over 4} - \hat{a})^2 = {1\over 8}$. That is, we have $\hat{a} = 0\ \ {\rm or }\ \ {1\over 4}. $ Similarly, it follows from (\ref{(2.21)}) and (\ref{(2.23)}) that $$ \sum_{\nu=a,b}\left((\nu_3 + \nu_4 + \nu_5)^2 + ((\nu_0 + \nu_2 + \nu_6 + \nu_8) +(\nu_1 + \nu_7))^2\right) = {1\over 8}. $$ Let $\tilde{a}=a_3+a_4+a_5=b_3+b_4+b_5$ where the second equality comes from equation (\ref{(2.16)}). Thus, $2(\tilde{a})^2 + 2({1\over 4} - \tilde{a})^2 = {1\over 8}$. That is, $ \tilde{a} = 0\ \ {\rm or } \ \ \tilde{a} = {1\over 4}. $ Therefore, we have the following four cases to consider: \begin{itemize} \item Case 1: \ \ $a_1+a_4+a_7=1/4$\ \ and\ \ $a_3+a_4+a_5=1/4$ \item Case 2: \ \ $a_1+a_4+a_7=1/4$\ \ and\ \ $a_3+a_4+a_5= 0$ \item Case 3: \ \ $a_1+a_4+a_7= 0$\ \ \hskip .135in and\ \ $a_3+a_4+a_5=1/4$ \item Case 4: \ \ $a_1+a_4+a_7= 0$\ \ \hskip .135in and\ \ $a_3+a_4+a_5= 0$ \end{itemize} Since Case 3 is a rotation of Case 2, we will only study Cases 1,2, and 4. As will be discussed, the different cases are actually conditions associated with the zeros of $m(x,y)$ along the axes. \subsection{Complete Solution of Case 1} \label{section 2.1} \noindent Let us first consider Case 1 where both $a_1+a_4+ a_7 = {1\over 4}$ and $a_3 + a_4 + a_5 = {1\over 4}$. 
By Lemmas \ref{Lemma 2.2} and \ref{Lemma 2.3}, we have
$$
a_1 + a_3 + a_5 + a_7\ =\ {1\over 8} - {1\over 4\sqrt{2}}\cos \alpha,
\ \ \ b_1 + b_3 + b_5 + b_7\ =\ {1\over 8} - {1\over 4\sqrt{2}}\sin \alpha.
$$
Moreover, $(a_1 +a_4 +a_7)+(a_3 +a_4+a_5) = {1\over 2}$. It then
follows that
$$
2a_4 = {1\over 2} - {1\over 8} + {1\over 4\sqrt{2}}\cos \alpha \ \ \
{\rm or }\ \ \ a_4 = {3\over 16} + {1\over 8\sqrt{2}} \cos \alpha.
$$
Similarly, $b_4 = {3\over 16} + {1\over 8\sqrt{2}} \sin \alpha$. Let us
now summarize the above discussion combined with Lemma \ref{Lemma 2.3}
in the following lemma.
\begin{lemma}\label{Lemma 3.1}
\begin{eqnarray*}
a_4 = {3\over 16} + {1\over 8\sqrt{2}} \cos \alpha, &&\hspace{.85in}b_4
= {3\over 16} + {1\over 8\sqrt{2}} \sin\alpha,\\
a_1 + a_7 = {1\over 16} - {1\over 8\sqrt{2}} \cos \alpha,
&&\hspace{.55in} b_1 + b_7 = {1\over 16} - {1 \over 8\sqrt{2}} \sin \alpha,\\
a_3 + a_5 = {1\over 16} - {1\over 8\sqrt{2}} \cos \alpha,
&&\hspace{.55in} b_3 + b_5 = {1\over 16} - {1\over 8\sqrt{2}} \sin\alpha,\\
a_0 + a_2 + a_6 + a_8 = -{1\over 16} + {1\over 8\sqrt{2}} \cos \alpha, &&
b_0 + b_2 + b_6 + b_8 = -{1\over 16} + {1\over 8\sqrt{2}} \sin \alpha.
\end{eqnarray*}
\end{lemma}
We are now ready to solve for the 18 unknowns. The complete
parameterization of Case 1 is given in Theorem \ref{Theorem 3.2}.
\begin{theorem}\label{Theorem 3.2}
For any $\beta, \gamma \in [0, 2\pi]$, let
\begin{eqnarray*}
\alpha=2(\beta-\gamma)+\frac{\pi}{4},\hspace{.2in}
p={1\over 16} - {1\over 8\sqrt{2}}\cos \alpha, \ &{\rm and}&\
q={1\over 16} - {1\over 8\sqrt{2}} \sin \alpha.
\end{eqnarray*}
If
\begin{eqnarray*}
a_0 &=& (-p(1 + \cos(\beta - \gamma)) - q \sin (\beta - \gamma) -
\sqrt{p^2 + q^2} (\cos \beta + \cos \gamma))/4\\
a_2 &=&(-p(1 - \cos(\beta - \gamma)) + q \sin (\beta - \gamma) -
\sqrt{p^2 + q^2}(\cos \beta - \cos \gamma))/4\\
a_6 &=&(-p(1 - \cos(\beta - \gamma)) + q\sin(\beta - \gamma) +
\sqrt{p^2 + q^2}(\cos \beta - \cos \gamma))/4\\
a_8 &=&(-p(1 + \cos(\beta - \gamma)) - q\sin(\beta - \gamma) +
\sqrt{p^2 + q^2} (\cos \beta + \cos \gamma))/4\\
\\
b_0 &=&(-q(1 + \cos(\beta - \gamma)) + p\sin(\beta - \gamma) -
\sqrt{p^2 + q^2}(\sin \beta + \sin \gamma))/4\\
b_2 &=&(-q(1 - \cos(\beta - \gamma)) - p\sin(\beta - \gamma) -
\sqrt{p^2 + q^2}(\sin \beta - \sin \gamma))/4\\
b_6 &=&(-q(1 - \cos (\beta - \gamma)) - p\sin(\beta - \gamma) +
\sqrt{p^2 + q^2}(\sin \beta - \sin \gamma))/4\\
b_8 &=&(-q(1 + \cos (\beta - \gamma)) + p\sin(\beta - \gamma) +
\sqrt{p^2 + q^2}(\sin \beta + \sin \gamma))/4\\
\\
a_1 &=& {p\over 2} + {1\over 2}\sqrt{p^2 + q^2} \cos \beta, \hspace{.2in}
b_1 \ =\ {q\over 2} + {1\over 2}\sqrt{p^2 + q^2} \sin\beta \\
a_3 &=& {p\over 2} + {1\over 2}\sqrt{p^2 + q^2} \cos\gamma, \hspace{.2in}
b_3\ =\ {q\over 2} + {1\over 2}\sqrt{p^2 + q^2} \sin\gamma \\
a_5 &=& {p\over 2} - {1\over 2}\sqrt{p^2 + q^2}\cos\gamma, \hspace{.2in}
b_5\ =\ {q\over 2} - {1\over 2}\sqrt{p^2 + q^2}\sin\gamma\\
a_7 &=& {p\over 2} - {1\over 2}\sqrt{p^2 + q^2}\cos\beta, \hspace{.2in}
b_7\ =\ {q\over 2} - {1\over 2}\sqrt{p^2 + q^2 }\sin\beta \\
\\
a_4 &=& {1\over 4} - p, \hspace{.2in} b_4 \ =\ {1\over 4} -q,
\end{eqnarray*}
\noindent
then $m(x,y)$ with these coefficients $a_i$ and $b_i$ satisfies the
properties (i), (ii), (iii), and (iv). On the other hand, if $m(x,y)$
satisfies the properties (i), (ii), (iii), (iv), and the conditions of
Case 1, then the coefficients $a_i$ and $b_i$ can be expressed in the
above format.
\end{theorem} \begin{proof} The proof consists of deriving necessary conditions from various combinations of the nonlinear equations which eventually leads to a complete parameterization that satisfies each nonlinear equation individually. Assume the conditions of Case 1, i.e. $a_1+a_4+a_7=1/4$ and $a_3+a_4+a_5=1/4$. Using equations (\ref{(2.9)}) and (\ref{(2.14)}), we have $$ \sum_{\nu=a,b}((\nu_0 + \nu_2)^2 + (\nu_3 + \nu_5)^2 + (\nu_6 + \nu_8)^2 + \nu_1^2 + \nu_4^2 + \nu_7^2) = {1\over 8}. $$ Using (\ref{(2.13)}), the above equation becomes $$ \sum_{\nu=a,b}((\nu_0 + \nu_1 + \nu_2)^2 + (\nu_3 + \nu_4 + \nu_5)^2 + (\nu_6 + \nu_7 + \nu_8)^2) = {1\over 8}. $$ Since $a_3 + a_4 + a_5 = b_3 + b_4 + b_5 = {1\over 4}$, we have \begin{equation} a_0 + a_1 + a_2 = 0,\ \ a_6 + a_7 + a_8 = 0, \ \ b_0 + b_1 + b_2 = 0, \ \ b_6 + b_7 + b_8 = 0. \label{(3.1)} \end{equation} Similarly, by equations (\ref{(2.8)}) and (\ref{(2.14)}), we have $$ \sum_{\nu=a,b}((\nu_0 + \nu_6)^2 + (\nu_1 + \nu_7)^2 + (\nu_2 + \nu_8)^2 + \nu^2_3 + \nu^2_4 + \nu^2_5) = {1\over 8}. $$ Using (\ref{(2.12)}), the above equation becomes $$ \sum_{\nu=a,b}((\nu_0 + \nu_3 + \nu_6)^2 + (\nu_1 + \nu_4 + \nu_7)^2 + (\nu_2 + \nu_5 + \nu_8)^2) = {1\over 8}. $$ Since $a_1 + a_4+ a_7 = b_1 + b_4 + b_7 = {1\over 4}$, we have \begin{equation} a_0 + a_3 + a_6 = 0, \ \ a_2 + a_5 + a_8 = 0, \ \ b_0 + b_3 + b_6 = 0, \ \ b_2 + b_5 + b_8 = 0. \label{(3.2)} \end{equation} We now use (\ref{(3.1)}), (\ref{(3.2)}), and the equations in Lemma \ref{Lemma 3.1} to simplify the equations (\ref{(2.2)})-(\ref{(2.14)}). By (\ref{(3.1)}), equation (\ref{(2.13)}) becomes \begin{eqnarray*} 0 &=& \sum_{\nu=a,b} (-\nu_1^2 +\nu_4(\nu_3+\nu_5)-\nu_7^2)\\ &=&\sum_{\nu=a,b} -(\nu_1+\nu_7)^2+\nu_4({1\over 4}-\nu_4)+\nu_1\nu_7 \\ &=&\sum_{\nu=a,b}(-({1\over 4}- \nu_4)^2+\nu_4({1\over 4}-\nu_4)+\nu_1\nu_7)\\ &=& \sum_{\nu=a,b} \nu_1 \nu_7. \end{eqnarray*} since $\sum_{\nu=a,b} (-({1\over 4}- \nu_4)^2+\nu_4({1\over 4}-\nu_4))=0$ by Lemma \ref{Lemma 3.1}. A similar situation exists for equation (\ref{(2.12)}), thus equations (\ref{(2.12)}) and (\ref{(2.13)}) simplify to \begin{equation} a_3 a_5 + b_3 b_5 = 0, \ \ a_1 a_7 + b_1 b_7 = 0.\label{(3.3)} \end{equation} We are now able to solve for $a_1, a_3, a_5, a_7, b_1, b_3, b_5,$ and $b_7$. For simplicity, we denote $a_1+a_7=p$ and $b_1+b_7=q$. By (\ref{(3.3)}) we have $$ a^2_1 - pa_1+b^2_1 - b_1 q = 0 \ \ \ \hbox{or}\ \ \ (a_1-{p\over 2})^2+(b_1-{q\over 2})^2={1\over 4}(p^2 + q^2). $$ Thus, we have \begin{eqnarray*} a_1 &=& {p\over 2} + {1\over 2}\sqrt{p^2+q^2} \cos \beta, \ \ \ b_1 \ =\ {q\over 2} + {1\over 2}\sqrt{p^2+q^2} \sin \beta, \\ a_7 &=& {p\over 2} - {1\over 2}\sqrt{p^2+q^2} \cos \beta, \ \ \ b_7 \ =\ {q\over 2} - {1\over 2}\sqrt{p^2+q^2} \sin \beta. \end{eqnarray*} Similarly, we have \begin{eqnarray*} a_3 &=& {p\over 2} + {1\over 2}\sqrt{p^2+q^2} \cos \gamma, \ \ \ b_3 \ =\ {q\over 2} + {1\over 2}\sqrt{p^2+q^2} \sin \gamma, \\ a_5 &=& {p\over 2} - {1\over 2}\sqrt{p^2+q^2} \cos \gamma, \ \ \ b_5 \ =\ {q\over 2} - {1\over 2}\sqrt{p^2+q^2} \sin \gamma. \end{eqnarray*} Next we simplify the equations (\ref{(2.2)})-(\ref{(2.11)}). 
Note that the addition of equation (\ref{(2.10)}) and (\ref{(2.11)}) is \begin{equation} \sum_{\nu=a,b}(\nu_4(\nu_0 + \nu_2 + \nu_6 + \nu_8) + (\nu_1 + \nu_7)(\nu_3 +\nu_5)) = 0.\label{(3.4)} \end{equation} After substitution of the equations in Lemma \ref{Lemma 3.1}, the left hand side of equation (\ref{(3.4)}) becomes \begin{eqnarray*} &&2({1\over 4}-p)(-p) + 2(-p)(-p) + 2({1\over 4}-q)(-q) + 2(-q)(-q)\\ &=& 4[p^2 - {1\over 8} p + ({1\over 16})^2 + q^2 - {1\over 8} q + ({1\over 16})^2] - 8({1\over 16})^2\\ &=& 4[(p - {1\over 16})^2 + (q - {1\over 16})^2] - {1\over 32}\\ &=& 4[{1\over 128}\cos^2\alpha+{1\over 128}\sin^2\alpha]- {1\over 32} = 0. \end{eqnarray*} That is, the equations (\ref{(2.10)}) and (\ref{(2.11)}) are linearly dependent and consequently only one needs to be considered. We will deal with equation (\ref{(2.10)}) later. Furthermore, for equations (\ref{(2.4)})-(\ref{(2.9)}), we have the following \begin{splitmath} \sum_{\nu=a,b} ( \nu_1 \nu_6+\nu_2 \nu_7) = \sum_{\nu=a,b} \nu_1 \nu_6 + \nu_1 \nu_7 + \nu_2 \nu_6+ \nu_2 \nu_7\\ = \sum_{\nu=a,b} (\nu_1 + \nu_2)(\nu_6 + \nu_7) = \sum_{\nu=a,b}(-\nu_0)(-\nu_8) = \sum_{\nu=a,b} \nu_0 \nu_8, \end{splitmath} \begin{splitmath} \sum_{\nu=a,b} ( \nu_0 \nu_7+\nu_1 \nu_8 ) = \sum_{\nu=a,b} \nu_0 \nu_7+ \nu_0 \nu_8+\nu_1 \nu_7+\nu_1\nu_8\\ = \sum_{\nu=a,b}(\nu_0 + \nu_1)(\nu_7 + \nu_8) = \sum_{\nu=a,b}(-\nu_2)(-\nu_6) = \sum_{\nu=a,b} \nu_2 \nu_6, \end{splitmath} \begin{splitmath} \sum_{\nu=a,b} (\nu_2 \nu_3+\nu_5 \nu_6) = \sum_{\nu=a,b} \nu_2\nu_3 + \nu_2\nu_6 + \nu_3\nu_5+ \nu_5\nu_6\\ =\sum_{\nu=a,b} (\nu_2 + \nu_5)(\nu_3 + \nu_6) = \sum_{\nu=a,b}(-\nu_8)(-\nu_0) = \sum_{\nu=a,b} \nu_0 \nu_8, \end{splitmath} \begin{splitmath} \sum_{\nu=a,b}(\nu_0 \nu_5 + \nu_3 \nu_8) = \sum_{\nu=a,b} \nu_0 \nu_5 + \nu_0 \nu_8+ \nu_3\nu_5+ \nu_3 \nu_8\\ = \sum_{\nu=a,b}(\nu_0 + \nu_3)(\nu_5 + \nu_8) = \sum_{\nu=a,b}(-\nu_6)(-\nu_2) = \sum_{\nu=a,b} \nu_2\nu_6, \end{splitmath} \begin{splitmath} \sum_{\nu=a,b}(\nu_0 \nu_6 +\nu_1 \nu_7 + \nu_2 \nu_8) = \sum_{\nu=a,b} \nu_0 \nu_6 + \nu_2 \nu_8 = \sum_{\nu=a,b} \nu_0 \nu_6 + \nu_0 \nu_8+ \nu_2 \nu_6 + \nu_2 \nu_8\\ = \sum_{\nu=a,b}(\nu_0 + \nu_2)(\nu_6 + \nu_8) = \sum_{\nu=a,b} (-\nu_1)(-\nu_7) = \sum_{\nu=a,b} \nu_1\nu_7, \end{splitmath} \begin{splitmath} \sum_{\nu=a,b} (\nu_0 \nu_2 + \nu_3 \nu_5 + \nu_6 \nu_8) = \sum_{\nu=a,b} \nu_0 \nu_2 + \nu_0 \nu_8 + \nu_2 \nu_6+ \nu_6 \nu_8\\ = \sum_{\nu=a,b} (\nu_0 + \nu_6)(\nu_2 + \nu_8) = \sum_{\nu=a,b} (-\nu_3)(-\nu_5) =\sum_{\nu=a,b} \nu_3 \nu_5. \end{splitmath} Therefore, only $\sum_{\nu=a,b} \nu_0 \nu_8 = 0$ and $\sum_{\nu=a,b} \nu_2 \nu_6 = 0$ remain to be solved in addition to (\ref{(2.10)}) since equation (\ref{(2.14)}) follows from (\ref{(2.1)})-(\ref{(2.13)}). In order to solve equations (\ref{(2.2)}) and (\ref{(2.3)}), note that (\ref{(3.1)}) and (\ref{(3.2)}) imply $a_0 = a_5 + a_8 - a_1$ and $b_0 = b_5 + b_8 -b_1$. Thus, (\ref{(2.2)}) becomes \begin{equation} a^2_8 + a_8(a_5 - a_1) + b^2_8 + b_8(b_5 - b_1) =0\label{(3.5)} \end{equation} while (\ref{(2.3)}) becomes $$ (a_5 + a_8)(a_7 + a_8) + (b_5 + b_8)(b_7 + b_8) = 0 $$ or \begin{equation} a^2_8 + a_8(a_5 + a_7) + b^2_8 + b_8(b_5 + b_7) = -a_5 a_7 - b_5 b_7. \label{(3.6)} \end{equation} The subtraction of (\ref{(3.5)}) from (\ref{(3.6)}) yields $$ a_8(a_1 + a_7) + b_8(b_1 + b_7) = -a_5 a_7 - b_5 b_7. 
$$ So, we have \begin{equation} p a_8 + q b_8 = -a_5 a_7 - b_5 b_7.\label{(3.7)} \end{equation} The addition of (\ref{(3.5)}) and (\ref{(3.6)}) yields $$ a^2_8 + (a_5 + {a_7 - a_1\over 2}) a_8 + b^2_8 + (b_5 + {b_7 - b_1\over 2}) b_8 = -{1\over 2} (a_5 a_7 + b_5b_7) $$ which is \begin{equation} (a_8 + {1\over 2}(a_5 + {a_7 - a_1 \over 2}))^2 + (b_8 + {1\over 2}(b_5 + {b_7 - b_1\over 2}))^2 = R_1\label{R1} \end{equation} for some known value $R_1$. In addition, equation (\ref{(3.7)}) can be rewritten as \begin{equation} p(a_8 + {1\over 2}(a_5 + {a_7 - a_1\over 2})) + q(b_8 + {1\over 2}(b_5 + {b_7 - b_1\over 2})) = R_2\label{R2} \end{equation} for some known value $R_2$. Equations (\ref{R1}) and (\ref{R2}) can be solved simultaneously. Thus, we obtain the expression for the $a_i$'s and $b_i$'s given in Theorem \ref{Theorem 3.2}. Finally, to satisfy (\ref{(2.10)}), we put these $a_i$ and $b_i$ into (\ref{(2.10)}) and simplify the equation yielding $$ \cos(\beta-\gamma)={1\over \sqrt{2}}\cos(\alpha-\beta+\gamma) +{1\over \sqrt{2}} \sin(\alpha-\beta+\gamma). $$ Solving, we find that $\alpha=2(\beta-\gamma)+\pi/4$. The above discussion shows that if $m(x,y)$ satisfies the properties (i), (ii), (iii), (iv), and the conditions of Case 1, then its coefficients $a_i$ and $b_i$ can be expressed as the two-parameter family given in the statement of Theorem \ref{Theorem 3.2}. On the other hand, to verify that $m(x,y)$ with the coefficients $a_i$ and $b_i$ satisfies the properties (i), (ii), (iii), (iv), and the conditions of Case 1, we just substitute the solutions back into equations (\ref{(2.1)})-(\ref{(2.20)}). This completes the proof. \end{proof} \subsection{Solution of Case 2} In this section, we consider Case $2$ where $a_1+a_4+a_7=1/4$\ \ and\ \ $a_3+a_4+a_5= 0$. By Lemmas \ref{Lemma 2.2} and \ref{Lemma 2.3}, we have $a_1 + a_3 + a_5 + a_7 = {1 \over 8} - {1 \over4\sqrt{2}} \cos \alpha$. So, we have $$ a_4 = {1 \over 16} + { 1 \over 8 \sqrt{2}} \cos \alpha\ \ \hbox{and} \ \ b_4 = {1 \over 16} + {1 \over 8 \sqrt{2}}\sin \alpha. $$ By (\ref{(2.9)}),(\ref{(2.13)}), and (\ref{(2.14)}), we have $$ \sum_{\nu=a,b}\left( (\nu_0 + \nu_1 + \nu_2)^2 + (\nu_3 + \nu_4 + \nu_5)^2 + (\nu_6 + \nu_7 +\nu_8)^2 \right)= {1 \over 8}. $$ The assumptions of Case $2$ imply $$ \sum_{\nu=a,b} \left((\nu_0 + \nu_1 + \nu_2)^2 + (\nu_6 + \nu_7 + \nu_8)^2\right) = {1 \over 8}, $$ i.e. $$ (a_0 + a_1 + a_2)^2 + (a_6 + a_7 + a_8)^2 + (b_0 + b_1 + b_2)^2 + (b_6 + b_7 + b_8)^2 = {1 \over 8}. $$ By (\ref{(2.15)}) and (\ref{(2.17)}), we have $$ (a_0 + a_1 + a_2) ^2 + (a_6 + a_7 + a_8)^2= {1 \over 16}. $$ By Lemma \ref{Lemma 2.2}, the above equation becomes $$ ( {1 \over 4} - (a_6 + a_7 + a_8))^2 + (a_6 + a_7 + a_8)^2 = {1 \over 16}. $$ It follows that $a_6 + a_7 + a_8 = 0$ or $a_6 + a_7 + a_8 = {1 \over 4}$. Thus, Case $2$ branches out into two subcases. \begin{itemize} \item Subcase 2a: $\displaystyle a_1 + a_4 + a_7 = {1 \over 4}, \ \ a_3 + a_4 + a_5 = 0, \ \ a_6 + a_7 + a_8 = 0, \ \ a_0 + a_1 + a_2 = {1\over 4}. $ \item Subcase 2b: $\displaystyle a_1 + a_4 + a_7 = {1 \over 4}, \ \ a_3 + a_4 +a_5 = 0, \ \ a_6 + a_7 + a_8 = {1 \over 4}, \ \ a_0 + a_1 +a_2 = 0. $ \end{itemize} We only consider subcase $2a$ here. The subcase $2b$ can be treated similarly and is left to the interested reader. Theorem \ref{Theorem 4.1} gives the complete solution of Subcase 2a. 
\begin{theorem} \label{Theorem 4.1} 1) For any $\gamma \in [0,2\pi]$,\ \ \ $a_3=b_3=a_4=b_4=a_5=b_5=0$, \begin{eqnarray*} a_1 &=& {3\over 16} - {1 \over 8\sqrt{2}}\cos \gamma,\ \ \ \ a_7 = {1\over 16} + {1 \over 8\sqrt{2}}\cos \gamma,\\ a_0 &=& {1\over 32}\left(1 +\sqrt{2} \cos \gamma \pm \sqrt{2 + \sqrt{2} (\cos \gamma + \sin \gamma)}\right),\\ a_2 &=& {1\over 32}\left(1 + \sqrt{2} \cos \gamma \mp \sqrt{2 + \sqrt{2}(\cos \gamma + \sin \gamma)}\right), \\ a_6 &=& {1\over 32}\left(-1 - \sqrt{2} \cos \gamma \mp \sqrt{2 + \sqrt{2}(\cos \gamma + \sin \gamma)}\right),\\ a_8 &=& {1\over 32}\left(-1 -\sqrt{2} \cos \gamma \pm \sqrt{2 + \sqrt{2} (\cos \gamma + \sin \gamma)}\right), \\ b_1 &=& {3\over 16} - {1 \over 8\sqrt{2}}\sin \gamma, \ \ \ \ b_7 = {1\over 16} +{1 \over 8\sqrt{2}}\sin \gamma,\\ b_0 &=& {1\over 32}\left(1 +\sqrt{2} \sin \gamma \mp \sqrt{2 + \sqrt{2} (\cos \gamma + \sin \gamma)}\right),\\ b_2 &=& {1\over 32}\left(1 + \sqrt{2} \sin \gamma \pm \sqrt{2 + \sqrt{2}(\cos \gamma + \sin \gamma)}\right),\\ b_6 &=& {1\over 32}\left(-1 - \sqrt{2} \sin \gamma \pm \sqrt{2 + \sqrt{2}(\cos \gamma + \sin \gamma)}\right),\\ b_8 &=& {1\over 32}\left(-1 -\sqrt{2} \sin \gamma \mp \sqrt{2 + \sqrt{2} (\cos \gamma + \sin \gamma)}\right). \end{eqnarray*} 2) For any $\alpha \in [0,{\pi \over 4})\cup({\pi \over 4},2\pi]$, let \begin{eqnarray*} \gamma&=&\left\{\begin{array}{ccccc} -{1\over 2}\alpha+{7\pi \over 8}& {\rm or} & {1\over 2}\alpha-{3\pi \over 8}& {\rm if} & 0\leq \alpha <{ \pi \over 4}\\ {1\over 2}\alpha+{5\pi \over 8}& {\rm or} & -{1\over 2}\alpha-{ \pi \over 8}& {\rm if} & { \pi \over 4}< \alpha \leq 2\pi \end{array}\right.\\ p&=&-{1\over 16}-{1\over 8\sqrt{2}}\cos \alpha, \ \ \ \ q=-{1\over 16}-{1\over 8\sqrt{2}}\sin \alpha, \\ s&=&\sqrt{32(p^2+q^2)}, \ \ \ \ t=\sqrt{(p+{1\over 8})^2+(q+{1\over 8})^2}, \end{eqnarray*} \begin{eqnarray*} &&a_0=-{1\over 32}\left(-1+8p+s+{t\over p-q}\left[ (-s-8p+8q)\cos \gamma+s\sin\gamma\right]\right)\\ &&a_2={1\over 32}\left(1-8p+s+{t\over p-q}\left[ (-s+8p-8q)\cos \gamma+s\sin\gamma\right]\right)\\ &&a_6=-{1\over 32}\left(1+8p+s+{t\over p-q}\left[ (s+8p-8q)\cos \gamma-s\sin\gamma\right]\right)\\ &&a_8={1\over 32}\left(-1-8p+s+{t\over p-q}\left[ (s-8p+8q)\cos \gamma-s\sin\gamma\right]\right)\\ &&b_0={1\over 32}\left(1-8q+s+{t\over p-q}\left[ (s+8p-8q)\sin \gamma-s\cos\gamma\right]\right)\\ &&b_2=-{1\over 32}\left(-1+8q+s+{t\over p-q}\left[ (s-8p+8q)\sin \gamma-s\cos\gamma\right]\right)\\ &&b_6={1\over 32}\left(-1-8q+s+{t\over p-q}\left[ (-s-8p+8q)\sin \gamma+s\cos\gamma\right]\right)\\ &&b_8=-{1\over 32}\left(1+8q+s+{t\over p-q}\left[ (-s+8p-8q)\sin \gamma+s\cos\gamma\right]\right) \end{eqnarray*} \begin{eqnarray*} &&a_1={1\over 16}(3+8p-8t\cos\gamma), \ \ \ \ b_1\ =\ {1\over 16}(3+8q-8t\sin\gamma)\\ &&a_7={1\over 16}(1+8p+8t\cos\gamma), \ \ \ \ b_7\ =\ {1\over 16}(1+8q+8t\sin\gamma)\\ &&a_3={1\over 16}(8p+s), \hspace{.75in} b_3\ =\ {1\over 16}(8q-s),\\ &&a_5\ =\ {1\over 16}(8p-s), \hspace{.7in} b_5\ =\ {1\over 16}(8q+s)\\ &&a_4=-p, \hspace{1.25in} b_4\ =\ -q. 
\end{eqnarray*} 3) For $\alpha={\pi \over 4}$, $$ (c_{jk})_{j,k}={1\over 8}\left[\begin{array}{cccccc} 0 & \ \ 1 & 1 & 1 & \ \ 1 & 0\\ 0 &\ \ 0 &0 &0 &\ \ 0& 0\\ 0& -1 &1 &1& -1& 0\\ 0& -1& 1& 1& -1& 0\\ 0 &\ \ 0& 0& 0& \ \ 0 &0\\ 0 &\ \ 1 & 1 & 1 & \ \ 1 & 0\\ \end{array}\right] \ \ \ \hbox{or}\ \ \ {1\over 16}\left[\begin{array}{cccccc} \ \ 1 & \ \ 1 & 2 & 2 & \ \ 1 & \ \ 1\\ -1 &\ \ 1 &0 &0 &\ \ 1& -1\\ \ \ 0& -2 &2 &2& -2& \ \ 0\\ \ \ 0& -2& 2& 2& -2& \ \ 0\\ -1 & \ \ 1& 0& 0& \ \ 1 &-1\\ \ \ 1 & \ \ 1 & 2 & 2 & \ \ 1 & \ \ 1\\ \end{array}\right]. $$ If $m(x,y)$ has the coefficients given by 1), 2), or 3), then $m(x,y)$ satisfies the properties of (i), (ii), (iii), and (iv). Conversely, if $m(x,y)$ satisfies the properties of (i), (ii), (iii), (iv), and the conditions of Subcase 2a, then the coefficients of $m(x,y)$ can be expressed as 1), 2), or 3). \end{theorem} \begin{proof} Assume the conditions of Subcase 2a, i.e. $$\displaystyle a_1 + a_4 + a_7 = {1 \over 4}, \ \ a_3 + a_4 + a_5 = 0, \ \ a_6 + a_7 + a_8 = 0, \ \ a_0 + a_1 + a_2 = {1\over 4}. $$ We immediately have from Lemma \ref{Lemma 2.3} $$a_4=\frac{1}{16}+\frac{1}{8\sqrt{2}}\cos \alpha, \ \ \ \ b_4=\frac{1}{16}+\frac{1}{8\sqrt{2}}\sin \alpha.$$ By equations (\ref{(2.8)}), (\ref{(2.12)}) and (\ref{(2.14)}), we have $$ \sum_{\nu=a,b} (\nu_0 + \nu_3 + \nu_6)^2 + (\nu_1 + \nu_4 + \nu_7)^2 + (\nu_2 + \nu_5 + \nu_8)^2 = {1 \over 8}. $$ It follows that \begin{eqnarray} a_0 + a_3 + a_6 &=& 0, \ \ \ \ b_0 + b_3 + b_6 = 0,\nonumber\\ a_2 + a_5 + a_8 &=& 0, \ \ \ \ b_2 + b_5 + b_8 = 0.\label{(4.1)} \end{eqnarray} We now simplify the 13 nonlinear equations (\ref{(2.2)}) -(\ref{(2.14)}). With (\ref{(2.2)}), (\ref{(2.3)}), and (\ref{(4.1)}), we may simplify (\ref{(2.9)}) as follows: \begin{eqnarray} 0 &=& \sum_{\nu=a,b}(\nu_0 \nu_2 + \nu_3 \nu_5 + \nu_6 \nu_8) \ =\ \sum_{\nu=a,b} (\nu_0 + \nu_6) (\nu_2 + \nu_8) + \nu_3 \nu_5 \nonumber\\ &=& \sum_{\nu=a,b} (-\nu_3) (-\nu_5) + \nu_3 \nu_5 \ =\ 2 \sum_{\nu=a,b} \nu_3 \nu_5. \label{(4.2)} \end{eqnarray} Recall $a_3 + a_4 + a_5 = 0$ and $ b_3 + b_4 + b_5 = 0$. We can now solve for $a_3, a_5, b_3, b_5$. Indeed, (\ref{(4.2)}) can be rewritten as $(a_4 +a_5) a_5 + (b_4 + b_5) b_5 = 0$. After simplifying, we have $$ (a_5 + {a_4 \over 2})^2 + (b_5 + {b_4 \over 2})^2 = {1\over 4} (a_4^2 + b_4^2). $$ Thus, we have \begin{eqnarray*} a_3 &=& {p \over 2} + {1 \over 2} \sqrt{p^2 + q^2} \cos \beta, \ \ b_3 \ =\ {q \over 2} + {1 \over 2} \sqrt{p^2 + q^2} \sin \beta\\ a_5&=& {p \over 2} - {1 \over 2} \sqrt{p^2 + q^2} \cos \beta, \ \ b_5 \ =\ {q \over 2} - {1 \over 2} \sqrt{p^2 + q^2} \sin \beta, \end{eqnarray*} where $p = -a_4$ and $q = -b_4$ (different $p$ and $q$ from section \ref{section 2.1}). We note that \begin{equation} p^2 + q^2 + {1 \over 8} ( p + q) = 0. \label{(4.3)} \end{equation} Similarly, we may simplify (\ref{(2.8)}) to be \begin{eqnarray} 0&=& \sum_{\nu=a,b} (\nu_1 \nu_7 + \nu_0 \nu_6 + \nu_2 \nu_8) \ =\ \sum_{\nu=a,b} \nu_1 \nu_7 + (\nu_0 + \nu_2) (\nu_6 + \nu_8)\nonumber\\ &=& \sum_{\nu=a,b} \nu_1 \nu_7 + ( {1 \over 4} - \nu_1) (-\nu_7) \ =\ 2 \sum_{\nu=a,b} (\nu_1 \nu_7 - {1 \over 8} \nu_7).\label{(4.4)} \end{eqnarray} Recall $a_1 + a_7 = {1 \over 4} -a_4$ and $b_1 + b_7= {1 \over 4} - b_4.$ We may solve for $a_1$ and $b_1$ to get $a_1 = {1 \over 4} - a_4 - a_7$ and $b_1 = { 1 \over 4} - b_4 - b_7$. 
Putting them into (\ref{(4.4)}), we have $$ \left({1 \over 4} - a_4 - a_7 \right) a_7 - {a_7 \over 8} + \left({1 \over 4} - b_4 - b_7 \right) b_7 - { b_7 \over 8} = 0 $$ or $$ \left(a_7 - {{1 \over 8} - a_4 \over 2}\right)^2 + \left(b_7 - {{1 \over 8} - b_4 \over 2}\right)^2 = {\left(a_4 - {1 \over 8} \right)^2 + \left( b_4-{1 \over 8}\right)^2 \over 4}. $$ It follows that \begin{eqnarray*} a_7 &=& {p \over 2} + {1 \over 16} + {1 \over 2} \sqrt{\left( p+{1 \over 8} \right)^2 + \left(q+ {1 \over 8} \right)^2} \ \ \cos \gamma, \\ b_7 &=& {q \over 2} + {1 \over 16} + { 1 \over 2} \sqrt{ \left( p+ { 1 \over 8} \right)^2 + \left( q+ { 1 \over 8} \right)^2} \ \ \sin \gamma,\\ a_1 &=& {p\over 2} + { 3 \over 16} - {1 \over 2} \sqrt{ \left( p+ {1 \over 8} \right)^2 + \left( q+ { 1 \over 8} \right)^2} \ \ \cos \gamma,\\ b_1 &=& {q \over 2} + { 3 \over 16} - { 1 \over 2} \sqrt{\left( p+ { 1 \over 8} \right)^2 + \left( q+ { 1 \over 8} \right)^2} \ \ \sin \gamma.\\ \end{eqnarray*} We add the left-hand side of (\ref{(2.10)}) and (\ref{(2.11)}) together to get \begin{eqnarray*} \sum_{\nu=a,b}&&\hspace{-.2in}\bigg(\nu_0 + \nu_2\bigg) \nu_4 + \nu_1 \bigg(\nu_3 + \nu_5 \bigg) + \nu_4 \bigg(\nu_6 + \nu_8 \bigg) + \nu_7 \bigg(\nu_3 + \nu_5\bigg)\\ &=&\sum_{\nu=a,b} \nu_4 \bigg( {1 \over 4} - \nu_1 \bigg) + \nu_1 \bigg(-\nu_4 \bigg) + \nu_4 \bigg( -\nu_7 \bigg) + \nu_7 \bigg( -\nu_4 \bigg)\\ &=&\sum_{\nu=a,b} \nu_4 \left[ { 1 \over 4} - 2 \bigg( \nu_1 + \nu_7 \bigg) \right] \ =\ \sum_{\nu=a,b} \nu_4 \bigg(2\nu_4 - { 1 \over 4} \bigg)\\ &=&2 \bigg( 2a_4^2 - { 1 \over 4} a_4 \bigg) + 2 \bigg( 2 b_4^2 - { 1 \over 8} b_4 \bigg)\\ &=& 4 \left[ p^2 + q^2 + { 1 \over 8} \bigg( p + q \bigg) \right] \ =\ 0. \end{eqnarray*} That is, only one of (\ref{(2.10)}) and (\ref{(2.11)}) needs to be solved. Thus, we deal with (\ref{(2.10)}) later. Turning our attention to equation (\ref{(2.13)}). We have by (\ref{(4.2)}) and (\ref{(4.3)}) \begin{eqnarray*} \sum_{\nu=a,b}&& \hspace{-.2in}\nu_0 \nu_1 + \nu_1 \nu_2 + \nu_3 \nu_4 + \nu_4 \nu_5 + \nu_6 \nu_7 + \nu_7 \nu_8\\ &=& \sum_{\nu=a,b} \bigg(\nu_0 + \nu_2 \bigg) \nu_1 + \bigg(\nu_3 + \nu_5 \bigg) \nu_4 + \nu_7 \bigg(\nu_6 + \nu_8 \bigg)\\ &=& \sum_{\nu=a,b} \bigg( { 1 \over 4} - \nu_1 \bigg) \nu_1 - \nu_4^2 - \nu_7^2\\ &=& \sum_{\nu=a,b} -2 \nu_1 \nu_7 - \nu_1^2 - \nu_7^2 - \nu_4^2+{1\over 4}(\nu_1+\nu_7)\\ &=& \sum_{\nu=a,b} \bigg( - \bigg( \nu_1 + \nu_7 \bigg)^2 - \nu_4^2+ {1\over 4}(\nu_1+\nu_7) \bigg)\\ &=& - \sum_{\nu=a,b} \bigg( { 1 \over 4} - \nu_4 \bigg)^2 + \nu_4^2- {1\over 4}\bigg({1\over 4}-\nu_4\bigg)\ =\ 0. \end{eqnarray*} That is, (\ref{(2.13)}) holds for these $a_4, a_1, a_7, b_1, b_7$. Similarly, (\ref{(2.12)}) is satisfied in that \begin{eqnarray*} \sum_{\nu=a,b} &&\hspace{-.2in}\bigg( \nu_0 \nu_3 + \nu_1 \nu_4 + \nu_2 \nu_5 + \nu_3 \nu_6 +\nu_4 \nu_7 + \nu_5 \nu_8 \bigg) \\ &=& \sum_{\nu=a,b} -\nu_3 \bigg(\nu_0 + \nu_6 \bigg) + \nu_4 \bigg( \nu_1 + \nu_7\bigg) + \nu_5 \bigg( \nu_2 + \nu_8 \bigg)\\ &=& \sum_{\nu=a,b} \bigg(-\nu_3^2 + \nu_4 \bigg( { 1 \over 4} - \nu_4 \bigg) -\nu_5^2 \bigg) \\ &=& \sum_{\nu=a,b} - \bigg( \nu_3 + \nu_5 \bigg)^2 - a^2_4 + {\nu_4\over 4}\\ &=& -\sum_{\nu=a,b} \bigg( 2 \nu_4^2 - {\nu_4\over 4 }\bigg) = 0. \end{eqnarray*} We now show that equations (\ref{(2.3)}), (\ref{(2.9)}), (\ref{(4.1)}) and (\ref{(4.2)}) imply (\ref{(2.6)}). The addition of (\ref{(2.6)}) and (\ref{(2.9)}) yields $$ \sum_{\nu=a,b} \nu_6 \bigg(\nu_5 + \nu_8 \bigg) + \nu_2 \bigg( \nu_0 + \nu_3 \bigg) + \nu_3 \nu_5 = \sum_{\nu=a,b} -2\nu_2 \nu_6 + \nu_3\nu_5 =0. 
$$
Similarly, the equations (\ref{(2.2)}), (\ref{(2.9)}) and
(\ref{(4.2)}) imply (\ref{(2.7)}).
Since (\ref{(2.8)}) holds for these $a_1, b_1, a_7, b_7$ under the
assumptions of (\ref{(2.2)}) and (\ref{(2.3)}), we further simplify
(\ref{(2.4)}) by adding (\ref{(2.4)}) and (\ref{(2.8)}) together. That is,
\begin{eqnarray}
0 &=& \sum_{\nu=a,b} \nu_2 \nu_7 + \nu_1 \nu_6 + \nu_1 \nu_7 + \nu_0
\nu_6 + \nu_2 \nu_8\nonumber\\
&=& \sum_{\nu=a,b} \nu_6 \bigg( \nu_0 + \nu_1 \bigg) + \nu_2 \bigg(
\nu_7 + \nu_8\bigg) + \nu_1 \nu_7 \nonumber\\
&=& \sum_{\nu=a,b} \nu_6 \bigg( { 1 \over 4} - \nu_2 \bigg) + \nu_2
\bigg(-\nu_6\bigg) + \nu_1 \nu_7\nonumber\\
&=& \sum_{\nu=a,b} \bigg( { 1 \over 4 } \nu_6 + \nu_1 \nu_7 \bigg) =
\sum_{\nu=a,b}\left( {1\over 4} \nu_6+ {1\over 8} \nu_7\right).\label{(4.5)}
\end{eqnarray}
Similarly, the sum of equations (\ref{(2.5)}) and (\ref{(2.8)}) is
equivalent to
\begin{equation}
\sum_{\nu=a,b} \bigg(\nu_1 \nu_7 + {1 \over 4} \nu_8 \bigg)
 = \sum_{\nu=a,b} \left({1\over 8}\nu_7 +{1\over 4}\nu_8\right)=0.
\label{(4.6)}
\end{equation}
However, equations (\ref{(4.5)}) and (\ref{(4.6)}) are equivalent by
using $a_6+a_7+a_8=0$ and $b_6+b_7+b_8=0$. Thus, we only need to
consider one of them.

In summary, we only need to solve the following equations
$$
a_0 a_8 + b_0 b_8 = 0, \ \ \ \ \ a_2 a_6 + b_2 b_6 = 0,
$$
$$
\sum_{\nu=a,b}(\nu_7 + 2\nu_8) = 0, \ \ \
\sum_{\nu=a,b}(\nu_0 \nu_4 + \nu_4 \nu_8 + \nu_1\nu_5 + \nu_3 \nu_7) = 0.
$$
Using the linear relationships, we have
\begin{eqnarray*}
a_0 &=& {1\over 4} - a_1 + a_5 + a_8, \ \ b_0 = {1\over 4} - b_1 + b_5 + b_8\\
a_2 &=& -a_5 - a_8, \ \ \ b_2 = -b_5 - b_8, \ \ \ a_6 = -a_7 - a_8, \ \ \
b_6 = -b_7 - b_8.
\end{eqnarray*}
Putting these linear equations in (\ref{(2.2)}) and (\ref{(2.3)}), we get
\begin{eqnarray*}
a^2_8 + (a_5 - a_1 + {1\over 4}) a_8 + b^2_8 + (b_5 - b_1 + {1\over 4})b_8
&=& 0\\
a^2_8 + (a_5 + a_7) a_8 + a_5 a_7 + b^2_8 +(b_5 + b_7)b_8 + b_5b_7 &=& 0.
\end{eqnarray*}
Subtracting the first one of the above two equations from the second
one, and using $a_1 + a_7 = p + {1\over 4}$, $b_1 + b_7 = q + {1\over 4}$,
we get
$$
pa_8 + qb_8 = -a_5 a_7 - b_5b_7.
$$
Using these linear relationships and (\ref{(2.11)}), we get
$$
\displaystyle pa_8 + qb_8 = {1\over 2} \sum_{\nu=a,b} (\nu_1 \nu_5 +
\nu_3 \nu_7 + (\nu_5 - \nu_1 + {1\over 4})\nu_4).
$$
It follows that
$$
\displaystyle \sum_{\nu=a,b}(2\nu_5\nu_7 + \nu_1 \nu_5 + \nu_3 \nu_7 +
(\nu_5 - \nu_1 +{1\over 4})\nu_4) = 0
$$
which can be simplified to be
$$
\sum_{\nu=a,b}({1\over 4} \nu_5 + \nu_4^2) = 0.
$$
That is, using (\ref{(4.3)}), we have
\begin{eqnarray*}
0 &=& a^2_4 + b^2_4 + {1\over 4}(a_5 + b_5)\\
&=& -{1\over 8}(p + q) + {1\over 8}(p + q) - {1\over 8}\sqrt{p^2 + q^2}
(\cos \beta + \sin \beta)\\
&=& -{\sqrt{2}\over 8} \sqrt{p^2 + q^2} \sin({\pi\over 4} + \beta).
\end{eqnarray*}
It follows that either $\beta = -{\pi \over 4}$ or ${3\pi \over 4}$, or
$p^2 + q^2 = 0$, where the latter occurs when $\alpha = -{3\pi \over 4}$.

We first consider $\alpha = -{3\pi \over 4}$. In this situation, we
have $p = 0$ and $q = 0$. It follows that $a_3 = b_3 = a_4 = b_4 = a_5
= b_5 = 0$. It is clear that $p a_8 + qb_8 = -\sum_{\nu=a,b} \nu_5
\nu_7$ holds. Thus, we only have two equations to solve
$$
a_0 a_8 + b_0 b_8 = 0, \ \ \ a_8 + b_8 =-{1\over 2}(a_7 + b_7).
$$ Using $a_0 = {1\over 4} - a_1 + a_8$ and $b_0 = {1\over 4} - b_1 + b_8$, we solve the above equations and get \begin{eqnarray*} a_8 &=& {1\over 32}\bigg(-1 -\sqrt{2} \cos \gamma \pm \sqrt{2 + \sqrt{2} (\cos \gamma + \sin \gamma)}\bigg)\\ b_8 &=& {1\over 32}\bigg(-1 -\sqrt{2} \sin \gamma \mp \sqrt{2 + \sqrt{2}(\cos \gamma + \sin \gamma)}\bigg)\\ a_0 &=& {1\over 32}\bigg(1 +\sqrt{2} \cos \gamma \pm \sqrt{2 + \sqrt{2} (\cos \gamma + \sin \gamma)}\bigg)\\ b_0 &=& {1\over 32}\bigg(1 +\sqrt{2} \sin \gamma \mp \sqrt{2 + \sqrt{2} (\cos \gamma + \sin \gamma)}\bigg). \end{eqnarray*} Using the linear relationships, we have \begin{eqnarray*} a_2 &=& {1\over 32}\bigg(1 + \sqrt{2} \cos \gamma \mp \sqrt{2 + \sqrt{2}(\cos \gamma + \sin \gamma)}\bigg)\\ b_2 &=& {1\over 32}\bigg(1 + \sqrt{2} \sin \gamma \pm \sqrt{2 + \sqrt{2}(\cos \gamma + \sin \gamma)}\bigg)\\ a_6 &=& {1\over 32}\bigg(-1 - \sqrt{2} \cos \gamma \mp \sqrt{2 + \sqrt{2}(\cos \gamma + \sin \gamma)}\bigg)\\ b_6 &=& {1\over 32}\bigg(-1 - \sqrt{2} \sin \gamma \pm \sqrt{2 + \sqrt{2}(\cos \gamma + \sin \gamma)}\bigg)\\ a_1 &=& {3\over 16} - {\cos \gamma \over 8\sqrt{2}}, \ \ b_1 = {3\over 16} - {\sin \gamma \over 8\sqrt{2}}, \ \ a_7 = {1\over 16} + {\cos \gamma \over 8\sqrt{2}}, \ \ b_7 = {1\over 16} +{\sin \gamma \over 8\sqrt{2}}. \end{eqnarray*} Next we consider $\beta = -{\pi \over 4}$ or $\beta = {3\pi \over 4}$. We only need to consider $\beta = -{\pi \over 4}$ while $p^2 + q^2 \ne 0$ because the other case is a rotation of this one. For $\beta = -{\pi\over 4}$, we have three equations to solve: \begin{equation} a_0 a_8 + b_0 b_8 = 0, \ \ \ pa_8 + qb_8 = -\sum_{\nu=a,b} a_5 a_7, \ \ \ a_8 + b_8 = -{1\over 2} (a_7 + b_7).\label{(4.7)} \end{equation} Assuming $p \ne q$, i.e. $\alpha \ne {\pi \over 4}$, we can solve the second and third ones in the above three equations for $a_8$ and $b_8$: The solutions for $a_8$ and $b_8$ are \begin{eqnarray*} b_8 &=& {1\over q - p}\bigg(-{p\over 2}(a_7 + b_7) + \sum_{\nu=a,b} a_5 a_7\bigg)\\ a_8 &=& -{1\over 2}(a_7 + b_7) +{1\over p - q}\bigg(-{p\over 2}(a_7 + b_7) + \sum_{\nu=a,b} a_5 a_7\bigg) \end{eqnarray*} leaving the relationship between $\alpha$ and $\gamma$ as $a_0a_8 + b_0 b_8 = 0$. Upon substitution and simplification, we have \begin{equation} \sin(\alpha + {\pi \over 4}) - 2\sin(\gamma + {\pi \over 4}) \sqrt{2(1 - \sin(\alpha + {\pi \over 4})} + \sin(2\gamma)-2 \ =\ 0. \label{(4.8)} \end{equation} This equation (\ref{(4.8)}) has a one parameter family of solutions given by $$ \gamma=\left\{\begin{array}{ccccc} -{1\over 2}\alpha+{7\pi \over 8}& {\rm or} & {1\over 2}\alpha-{3\pi \over 8}& {\rm if} & 0< \alpha <{ \pi \over 4}\\ {1\over 2}\alpha+{5\pi \over 8}& {\rm or} & -{1\over 2}\alpha-{ \pi \over 8}& {\rm if} & { \pi \over 4}< \alpha < 2\pi. \end{array}\right. $$ The final case is when $\beta=-{\pi\over 4}$ and $p=q$ (i.e. $\alpha={\pi\over 4}$) which implies that $$a_1=b_1=a_4=b_4={1\over 8}, a_3=a_7=b_5=b_7=0, a_5=b_3=-{1\over8}.$$ This reduces (\ref{(4.7)}) to $$ a_0a_8+b_0b_8=0,\ \ \ a_8+b_8=0, $$ which has two solutions: $a_8=b_8=0$ or $a_8={1\over 16}$ and $b_8=-{1\over 16}$ yielding two rational solutions: $$ {1\over 8}\left[\begin{array}{cccccc} 0 & 1 & 1 & 1 & 1 & 0\\ 0 &0 &0 &0 &0& 0\\ 0& -1 &1 &1& -1& 0\\ 0& -1& 1& 1& -1& 0\\ 0 & 0& 0& 0& 0 &0\\ 0 & 1 & 1 & 1 & 1 & 0\end{array}\right], \ \ \ \ {1\over 16}\left[\begin{array}{cccccc} 1 & 1 & 2 & 2 & 1 & 1\\ -1 &1 &0 &0 &1& -1\\ 0& -2 &2 &2& -2& 0\\ 0& -2& 2& 2& -2& 0\\ -1 & 1& 0& 0& 1 &-1\\ 1 & 1 & 2 & 2 & 1 & 1\end{array}\right]. 
$$ \end{proof} \subsection{Solution of Case 4} Finally, we consider Case 4 where both $a_1 + a_4 + a_7 = 0$ and $a_3 + a_4 + a_5 = 0$. From equations (\ref{(2.16)}) and (\ref{(2.19)}) we have $b_3 + b_4 +b_5 = 0$ and $b_1 + b_4 + b_7 = 0$. As before, using equations (\ref{(2.9)}), (\ref{(2.13)}) and (\ref{(2.14)}), we have $$ \sum_{\nu=a,b}((\nu_0 + \nu_1 + \nu_2)^2 + (\nu_3 + \nu_4 + \nu_5)^2 + (\nu_6 + \nu_7 + \nu_8)^2) = {1\over 8}. $$ It follows that $\sum_{\nu=a,b}((\nu_0 + \nu_1 + \nu_2)^2 + (\nu_6 + \nu_7 +\nu_8)^2) = {1\over 8}$. Using equations (\ref{(2.15)}) and (\ref{(2.17)}), we have $$ (a_0 + a_1 + a_2)^2 + (a_6 + a_7 + a_8)^2 ={1\over 16}. $$ By Lemma \ref{Lemma 2.2}, the above equation is $$ ({1\over 4} - (a_6 + a_7 + a_8))^2 + (a_6 +a_7 + a_8)^2 = {1\over 16}. $$ Thus, $a_6 + a_7 + a_8 = {1\over 4}$ or $a_6 + a_7 + a_8 = 0$. Also, using equations (\ref{(2.8)}), (\ref{(2.12)}) and (\ref{(2.14)}), we have $$ \sum_{\nu=a,b}\left((\nu_0 + \nu_3 + \nu_6)^2 + (\nu_1 + \nu_4 + \nu_7)^2 + (\nu_2 + \nu_5 + \nu_8)^2 \right)= {1\over 8}. $$ It follows that $\sum_{\nu=a,b}\left((\nu_0 + \nu_3 + \nu_6)^2 + (\nu_2 + \nu_5 + \nu_8)^2 \right)= {1\over 8}.$ So, we have $$ (a_0 + a_3 + a_6)^2 + (a_2 + a_5 + a_8)^2 = {1\over 16}. $$ Again by Lemma \ref{Lemma 2.2}, $\left({1\over 4} - (a_0 + a_3 + a_6)\right)^2 + (a_0 + a_3 + a_6)^2 = {1\over 16}.$ Thus, $a_0 + a_3 + a_6 = 0$ or $a_0 + a_3 + a_6 = {1\over 4}$. Therefore, we have four subcases to consider. In addition to $$a_1 + a_4 + a_7 =0, a_3 + a_4 + a_5 = 0, b_1 + b_4 + b_7 =0, b_3 + b_4 + b_5 = 0, $$ we have \begin{itemize} \item Subcase 4a: \begin{eqnarray*} &&a_0 + a_1 + a_2 = {1\over 4},\ \ \ \ a_0 + a_3 + a_6 = {1\over 4}, \ \ \ \ a_6 + a_7 + a_8 = 0, \ \ \ \ a_2 + a_5 + a_8 = 0\\ &&b_0 + b_1 + b_2 ={1\over 4},\ \ \ \ b_0 + b_3 + b_6 = 0,\ \ \ \ b_6 + b_7 + b_8 =0, \ \ \ \ b_2 + b_5 + b_8 = {1\over 4} \end{eqnarray*} \item Subcase 4b: \begin{eqnarray*} &&a_0 + a_1 + a_2 = 0, a_0 + a_3 + a_6 = {1\over 4},\ \ \ \ a_6 + a_7 + a_8 = {1\over 4},\ \ \ \ a_2 + a_5 + a_8 = 0,\\ &&b_0 + b_1 + b_2 =0,\ \ \ \ b_0 + b_3 + b_6 =0 ,\ \ \ \ b_6 + b_7 + b_8 ={1\over 4},\ \ \ \ b_2 + b_5 + b_8 = {1\over 4}\\ \end{eqnarray*} \item Subcase 4c: \begin{eqnarray*} &&a_0 + a_1 + a_2 ={1\over 4} ,\ \ \ \ a_0 + a_3 + a_6 = 0,\ \ \ \ a_6 + a_7 + a_8 = 0,\ \ \ \ a_2 + a_5 + a_8 = {1\over 4}\\ &&b_0 + b_1 + b_2 ={1\over 4},\ \ \ \ b_0 + b_3 + b_6 = {1\over 4},\ \ \ \ b_6 + b_7 + b_8 =0,\ \ \ \ b_2 + b_5 + b_8 = 0, \end{eqnarray*} \item Subcase 4d: \begin{eqnarray*} &&a_0 + a_1 + a_2 =0 ,\ \ \ \ a_0 + a_3 + a_6 = 0,\ \ \ \ a_6 + a_7 + a_8 = {1\over 4},\ \ \ \ a_2 + a_5 + a_8 = {1\over 4}\\ &&b_0 + b_1 + b_2 =0 ,\ \ \ \ b_0 + b_3 + b_6 ={1\over 4},\ \ \ \ b_6 + b_7 + b_8 ={1\over 4},\ \ \ \ b_2 + b_5 + b_8 = 0. \end{eqnarray*} \end{itemize} We only study the Subcase 4a and leave the other three subcases to the interested reader. With the linear constraints, we tackle the nonlinear conditions (\ref{(2.2)})-(\ref{(2.14)}). 
We use (\ref{(2.2)}) and (\ref{(2.3)}) to simplify (\ref{(2.8)}) and (\ref{(2.9)}) as follows: \begin{eqnarray} 0 &=& \sum_{\nu=a,b} \nu_0 \nu_6 + \nu_1 \nu_7 + \nu_2 \nu_8 \ =\ \sum_{\nu=a,b}(\nu_0 + \nu_2)(\nu_6 + \nu_8) + \nu_1 \nu_7\nonumber\\ &=& \sum_{\nu=a,b}\left({1\over 4} - \nu_1\right)(-\nu_7) + \nu_1 \nu_7 = 2\sum_{\nu=a,b}(\nu_1 \nu_7 - {1\over 8} \nu_7), \label{(5.1)} \end{eqnarray} and similarly, \begin{eqnarray} 0 &=& \sum_{\nu=a,b} (\nu_0 \nu_2 + \nu_3 \nu_5 + \nu_6 \nu_8) \ =\ \sum_{\nu=a,b}(\nu_0 + \nu_6)(\nu_2 + \nu_8) + \nu_3 \nu_5\nonumber\\ &=& 2\left(({1\over 4} - a_3)(-a_5) + a_3 a_5 + (-b_3)({1\over 4} - b_5) + b_3b_5\right)\\ &=& 4\left(-{a_5\over 8} - {b_3\over 8} + a_3 a_5 + b_3 b_5\right). \label{(5.2)} \end{eqnarray} As we did before, we have $a_4 = -{1\over 16} + {1\over 8\sqrt{2}} \cos \alpha$ and $b_4 = -{1\over 16} + {1\over 8\sqrt{2}} \sin \alpha$. It follows that \begin{equation} a^2_4 + b^2_4 + {1\over 8}(a_4 + b_4) = 0. \label{(5.3)} \end{equation} This fact will be used later. With (\ref{(2.8)}), i.e., (\ref{(5.1)}), we can see that (\ref{(2.12)}) holds. Indeed, the left-hand side of (\ref{(2.12)}) is, by using (\ref{(5.3)}), \begin{eqnarray*} \sum_{\nu=a,b} &&\hspace{-.2in}(\nu_1(\nu_0 + \nu_2) + \nu_4(\nu_3 + \nu_5) + \nu_7(\nu_6 + \nu_8)) \ =\ \sum_{\nu=a,b}(\nu_1({1\over 4} - \nu_1) - a^2_4 - a^2_7)\\ &=& \sum_{\nu=a,b}({1\over 4}(-\nu_4 - \nu_7) - a^2_1 - a^2_7 - a^2_4) \ =\ -\sum_{\nu=a,b}({1\over 4} \nu_4 + (\nu_1 + \nu_7)^2 + a^2_4)\\ &=& -\sum_{\nu=a,b}({1\over 4} \nu_4 + 2\nu_4^2)\ =\ 0. \end{eqnarray*} Similarly, we can show that with (\ref{(2.9)}), i.e. (\ref{(5.2)}), equation (\ref{(2.13)}) holds. If we add equations (\ref{(2.10)}) and (\ref{(2.11)}) together, we have by (\ref{(5.3)}) \begin{eqnarray*} \sum_{\nu=a,b} &&\hspace{-.2in}\nu_4 (\nu_0 + \nu_2) + \nu_1(\nu_3 + \nu_5) + \nu_7(\nu_3 + \nu_5) + \nu_4(\nu_6 + \nu_8)\\ &=&\sum_{\nu=a,b}(\nu_4({1\over 4} -\nu_1) - \nu_1\nu_4 - \nu_7 \nu_4 - \nu_4 \nu_7)\\ &=& \sum_{\nu=a,b} {1\over 4} \nu_4 - 2\nu_4(\nu_1+ \nu_7) \ =\ \sum_{\nu=a,b}{1\over 4} \nu_4 + 2\nu^2_4 \ =\ 0. \end{eqnarray*} That is, the addition of (\ref{(2.10)}) and (\ref{(2.11)}) is always true. We only need to consider one of these two equations. Next we simplify equations (\ref{(2.4)}) - (\ref{(2.7)}). Adding (\ref{(2.4)}) and (\ref{(2.8)}) with (\ref{(2.3)}), we have \begin{eqnarray} 0 &=& \sum_{\nu=a,b}(\nu_2(\nu_7 + \nu_8) + (\nu_0 + \nu_1) \nu_6 + \nu_1 \nu_7)\nonumber \\ &=& \sum_{\nu=a,b}({1\over 4}\nu_6 + \nu_1 \nu_7) \ =\ \sum_{\nu=a,b}\left({1\over 4}\nu_6 + {\nu_7\over 8} \right). \label{(5.4)} \end{eqnarray} Adding (\ref{(2.5)}) and (\ref{(2.8)}) together, we have \begin{eqnarray} 0 &=& \sum_{\nu=a,b}(\nu_1 + \nu_2) \nu_8 + \nu_0(\nu_7 + \nu_6) + \nu_1 \nu_7\nonumber\\ &=& \sum_{\nu=a,b}({1\over 4} \nu_8 + \nu_1 \nu_7) \ =\ \sum_{\nu=a,b} \left({1\over 4} \nu_8 +{1\over 8} \nu_8\right). \label{(5.5)} \end{eqnarray} It is easy to see that equations (\ref{(5.4)}) and (\ref{(5.5)}) are equivalent by using $a_6+a_7+a_8=0$ and $b_0+b_1+b_2=0$. Adding (\ref{(2.6)}) and (\ref{(2.9)}) together, we have \begin{eqnarray} 0 &=& \sum_{\nu=a,b}(\nu_6(\nu_5 + \nu_8) + \nu_2(\nu_0 + \nu_3) + \nu_3 \nu_5)\nonumber\\ &=& {1\over 4} a_2 + a_3 a_5 + {1\over 4} b_6 + b_3b_5 \ =\ {1\over 4}(a_2+b_6)+{1\over 8}(a_5+b_3). 
\label{(5.6)}
\end{eqnarray}
Adding (\ref{(2.7)}) and (\ref{(2.9)}) together with (\ref{(2.2)}), we have
\begin{eqnarray}
0 &=& \sum_{\nu=a,b}(\nu_8(\nu_3 + \nu_6) + \nu_0(\nu_2 + \nu_5) +
\nu_3 \nu_5)\nonumber\\
&=& {1\over 4} a_8 + a_3 a_5 + b_3 b_5 + {1\over 4} b_0 \ =\
{1\over 4}(a_8+b_0)+{1\over 8}(a_5+b_3).
\label{(5.7)}
\end{eqnarray}
In fact, equations (\ref{(5.6)}) and (\ref{(5.7)}) are equivalent by
using $a_2+a_5+a_8=0$ and $b_0+b_3+b_6=0$.

Next we solve (\ref{(5.1)}) using $a_1 + a_4 + a_7 = 0$ and
$b_1 + b_4 + b_7 = 0$. Letting $p = -a_4$ and $q = -b_4$, we have
$pa_7 - a^2_7 - {a_7 \over 8} + qb_7 -b^2_7 - {b_7 \over 8} = 0$. That is,
$$
(a_7 - {p\over 2} + {1\over 16})^2 + (b_7 - {q\over 2} + {1\over 16})^2
= {1\over 4} \left((p - {1\over 8})^2 + (q - {1\over 8})^2\right).
$$
Recalling (\ref{(5.3)}), i.e.,
$p^2 - {1\over 8} p + q^2 - {1\over 8}q = 0$, we have
\begin{eqnarray*}
(p - {1\over 8})^2 + (q - {1\over 8})^2 &=& {1\over 8}({1\over 4} - p - q)
= {1\over 8}\left({1\over 8} + {1\over 8\sqrt{2}}(\cos \alpha + \sin
\alpha)\right)\\
&=& {1\over 64}\left(1 + {1\over \sqrt{2}}(\cos \alpha + \sin
\alpha)\right) \ =\ {1\over 64}\left(1 + \sin(\alpha + {\pi \over 4})\right).
\end{eqnarray*}
Thus, we have
$$a_7 = {p\over 2} - {1\over 16} + {1\over 16}
\sqrt{1 + \sin(\alpha + {\pi \over 4})} \cos \beta, \ \ \
b_7 = {q\over 2} - {1\over 16} + {1\over 16}
\sqrt{1 + \sin(\alpha + {\pi \over 4})} \sin \beta.
$$
Similarly, we can solve (\ref{(5.2)}) using $a_3 + a_4 + a_5 = 0$ and
$b_3 + b_4 + b_5 =0$ giving us
$$
a_5 = {p\over 2} - {1\over 16} + {1\over 16}
\sqrt{1 + \sin(\alpha + {\pi \over 4})} \cos \gamma, \ \ \
b_3 = {q\over 2} - {1\over 16} + {1\over 16}
\sqrt{1 + \sin(\alpha + {\pi \over 4})} \sin \gamma.
$$
It follows that
\begin{eqnarray*}
a_1 &=& {p\over 2} + {1\over 16} - {1\over 16}
\sqrt{1 + \sin(\alpha + {\pi \over 4})} \cos \beta, \ \ \
b_1 = {q\over 2} + {1\over 16} - {1\over 16}
\sqrt{1 + \sin(\alpha + {\pi \over 4})} \sin \beta\\
a_3 &=& {p\over 2} + {1\over 16} - {1\over 16}
\sqrt{1 + \sin(\alpha + {\pi \over 4})} \cos \gamma, \ \ \
b_5 = {q\over 2} + {1\over 16} - {1\over 16}
\sqrt{1 + \sin(\alpha + {\pi \over 4})} \sin \gamma.
\end{eqnarray*}
So, we still need to satisfy (\ref{(2.2)}), (\ref{(2.3)}),
(\ref{(5.5)}), (\ref{(5.7)}), and (\ref{(2.11)}).
Using the linear relationships for Subcase 4a, (\ref{(2.2)}) and
(\ref{(2.3)}) become
\begin{eqnarray}
a_8^2+({1\over 4}-a_1+a_5)a_8+b_8^2+(-b_1+b_5)b_8 &=&0\label{(5.8)}\\
a_8^2+(a_5+a_7)a_8+b_8^2+(b_5+b_7-{1\over 4})b_8
&=&-a_5a_7-b_5b_7+{b_7\over 4}.\label{(5.9)}
\end{eqnarray}
After subtracting (\ref{(5.8)}) from (\ref{(5.9)}) and using
$a_1+a_7=p$ and $b_1+b_7=q$, we can replace one of these equations with
\begin{eqnarray}
(p-{1\over 4})a_8+(q-{1\over 4})b_8&=&{1\over 4}b_7-a_5a_7-b_5b_7.
\label{(5.10)}
\end{eqnarray}
Moreover, the linear relationships combined with (\ref{(2.11)}) yield
\begin{eqnarray*}
pa_8+qb_8&=&{1\over 8}a_4+{1\over 2}\sum_{\nu=a,b} \nu_1\nu_5+
\nu_3\nu_7+(\nu_5-\nu_1)\nu_4.
\end{eqnarray*}
Thus, (\ref{(5.5)}), (\ref{(5.7)}), and (\ref{(2.11)}) can be replaced by
\begin{eqnarray}
a_8+b_8 &=&-{1\over 2}(a_7+b_7)\label{(5.11)}\\
a_8+b_8 &=&-{1\over 2}(a_5-2b_1+b_3+2b_5)\label{(5.12)}\\
a_8+b_8&=&-b_7+{1\over 2}a_4+2\sum_{\nu=a,b}
2\nu_5\nu_7+\nu_1\nu_5+\nu_3\nu_7+(\nu_5-\nu_1)\nu_4.\label{(5.13)}
\end{eqnarray}
Now, we only need to satisfy (\ref{(5.8)}), (\ref{(5.10)}), and
(\ref{(5.11)})-(\ref{(5.13)}).
Equating the right-hand sides of (\ref{(5.11)}) and (\ref{(5.12)}) gives $$ \sqrt{1 + \sin(\alpha + {\pi \over 4})} (\cos \gamma-\sin \gamma-\cos \beta + \sin\beta)=0. $$ This constraint is satisfied whenever $\alpha=-{3\pi \over 4},\gamma=\beta,$ or $\gamma=-\beta-{\pi \over 2}$. Equating the right hand sides of (\ref{(5.11)}) and (\ref{(5.13)}) gives \begin{eqnarray*} -{1\over 2}(a_7+b_7)&=&-b_7+{1\over 2}a_4+ 2\sum_{\nu=a,b}(\nu_5(\nu_1+\nu_7)+(\nu_3+\nu_5)\nu_7+\nu_4\nu_5-\nu_1\nu_4\\ &=&-b_7+{1\over 2}a_4-2\sum_{\nu=a,b} \nu_4(\nu_1+\nu_7). \end{eqnarray*} Thus, \begin{eqnarray*} 0&=&{1\over 2}a_4+2\sum_{\nu=a,b} \nu_4^2+{\nu_4\over 2}+{\nu_7\over 2}\\ &=&2(p^2+q^2)-{p\over 4}-{q\over 4}+ {1\over 32}\sqrt{1+\sin(\alpha+{\pi\over 4})}(\cos\beta-\sin\beta)\\ &=&{1\over 32}\sqrt{1+\sin(\alpha+{\pi\over 4})}(\cos\beta-\sin\beta). \end{eqnarray*} Therefore, we only need to solve (\ref{(5.8)}), (\ref{(5.10)}), and (\ref{(5.11)}) when $\alpha=-{3\pi \over 4}, \beta={\pi\over 4},$ or $\beta=-{3\pi \over 4}$ and $\gamma=\beta$ or $\gamma=-\beta-{\pi \over 2}$. We begin with $\alpha=-{3\pi \over 4}$, then $$ a_4=b_4=-{1 \over 8}, \ \ \ a_5=a_7=b_3=b_7=0,\ \ \ a_1=a_3=b_1=b_5={1 \over 8} $$ which reduces (\ref{(5.8)}), (\ref{(5.10)}), and (\ref{(5.11)}) to $$ (a_8+{1\over 8})a_8+b_8^2=0, \ \ \ a_8+b_8=0. $$ These equations yield two solutions: $a_8=b_8=0$ or $a_8=-{1\over 16}$ and $b_8={1\over 16}.$ Now, with $\beta=\gamma={\pi\over 4}$, we solve for $a_8$ and $b_8$ using the linear equations (\ref{(5.10)}) and (\ref{(5.11)}), we have $$ a_8=-{p\over 4}-{\sqrt{2}\over 8}\sqrt{1+\sin(\alpha+\pi/4)}), \ \ \ b_8=-{q\over 4}+{1\over 16}. $$ Plugging these solutions into (\ref{(5.8)}) and simplifying we have $$ \cos(\alpha-{\pi\over 4})-1=4\sqrt{2+2\cos(\alpha-{\pi\over 4})} $$ which has no real solution. Similarly for $\beta=\gamma=-{3\pi\over 4}$, (\ref{(5.10)}) and (\ref{(5.11)}) yield $$ a_8=-{p\over 4}+{\sqrt{2}\over 8}\sqrt{1+\sin(\alpha+\pi/4)}), \ \ \ b_8=-{q\over 4}+{1\over 16}, $$ but now (\ref{(5.8)}) reduces to $$ \cos(\alpha-{\pi\over 4})-1=-4\sqrt{2+2\cos(\alpha-{\pi\over 4})} $$ which has two solutions: $\alpha={\pi\over 4}\pm\cos^{-1}(17-8\sqrt{5})$. Finally for $\beta={\pi\over 4}$ and $\gamma=-{3\pi\over 4}$, the linear equations produce $$ a_8=-{p\over 4}, \ \ \ b_8=-{q\over 4}+{1\over 16}- {\sqrt{2}\over 32}\sqrt{1+\sin(\alpha-{\pi \over 4})}, $$ and similarly for $\beta=-{3\pi\over 4}$ and $\gamma={\pi\over 4}$ $$ a_8=-{p\over 4}, \ \ \ b_8=-{q\over 4}+{1\over 16}+ {\sqrt{2}\over 32}\sqrt{1+\sin(\alpha-{\pi \over 4})}. $$ Both of these choices for $a_8$ and $b_8$ after being substituted into (\ref{(5.8)}) require $$ p^2+q^2=0 $$ which is satisfied by $\alpha={\pi \over 4}$. Therefore, the complete solution for Subcase 4a has 6 solitary solutions. 
The four rational solutions are given below:
\begin{eqnarray*}
&&{1\over 8}\left[\begin{array}{cccccc}
1&0&\ \ 1& \ \ 1 & 0 & 1\\
0&0&\ \ 0& \ \ 0 &0& 0\\
1&0& -1&-1& 0& 1\\
1&0& -1& -1& 0& 1\\
0&0&\ \ 0& \ \ 0& 0 &0\\
1&0&\ \ 1& \ \ 1 & 0 & 1\end{array}\right], \ \ \ \
{1\over 16}\left[\begin{array}{cccccc}
1 & \ \ 1 & \ \ 2 & \ \ 2 & \ \ 1 & 1\\
1 &-1 & \ \ 0 & \ \ 0 &-1& 1\\
2& \ \ 0 &-2 &-2& \ \ 0& 2\\
2& \ \ 0& -2& -2& \ \ 0& 2\\
1 & -1& \ \ 0& \ \ 0& -1 &1\\
1 & \ \ 1 & \ \ 2 & \ \ 2 & \ \ 1 & 1\end{array}\right],\\
&&{1\over 8}\left[\begin{array}{cccccc}
1 & 0 & \ \ 1 & \ \ 1 & 0 & 1\\
1 &0 &-1 &-1 &0& 1\\
0& 0 & \ \ 0 & \ \ 0& 0& 0\\
0& 0& \ \ 0& \ \ 0& 0& 0\\
1 & 0& -1& -1& 0 &1\\
1 & 0 & \ \ 1 & \ \ 1 & 0 & 1\end{array}\right], \ \ \ \
{1\over 8}\left[\begin{array}{cccccc}
1 & \ \ 1 & 0 & 0 & \ \ 1 & 1\\
0 & \ \ 0 &0 &0 & \ \ 0& 0\\
1& -1 &0 &0& -1& 1\\
1& -1& 0& 0& -1& 1\\
0 & \ \ 0& 0& 0& \ \ 0 &0\\
1 & \ \ 1 & 0 & 0 & \ \ 1 & 1\end{array}\right].
\end{eqnarray*}

\section{Orthogonality}

In this section, we discuss the orthogonality of the solutions from
Cases 1-4. We begin with a review of the Lawton condition as well as a
well-known necessary and sufficient condition for orthogonality. We
conclude this section with a numerical experiment consisting of a
one-level decomposition of two gray-scale images using a nonseparable
filter from Subcase 4a, and compare its performance with Haar and D4.

Let
$$m(e^{i\omega_1},e^{i\omega_2})=\sum_{k,\ell}
h_{k,\ell}e^{ik\omega_1}e^{i\ell\omega_2}, $$
where the $h_{k,\ell}$'s are the $a_i$'s and $b_i$'s as discussed
previously in Section \ref{Section 2}. Define
\begin{eqnarray}
\hat{\phi}(\omega_1,\omega_2)&=& \prod_{k=1}^\infty
m\left(\displaystyle e^{{i\omega_1} \over 2^k},
\displaystyle e^{{i\omega_2} \over 2^k}\right). \label{(6.1)}
\end{eqnarray}
For the coefficients as defined in Section \ref{Section 2}, we know
that $\phi$ is well defined and is in $L_2({\bf R}^2)$. Define
\begin{equation}
\alpha_{\ell_1,\ell_2}\ =\ \int_{{\bf R}^2}
\phi(x,y)\overline{\phi(x-\ell_1,y-\ell_2)}{\rm d}x{\rm d}y.\label{(6.2)}
\end{equation}
Thus, if $\alpha_{\ell_1,\ell_2}=\delta_{\ell_1,0}\delta_{\ell_2,0}$,
then the integer translates of $\phi$ are orthonormal. The function
$\phi$ satisfies the refinement equation
\begin{equation}
\phi(x,y)=4\sum_{k_1,k_2=0}^5 h_{k_1,k_2}\phi(2x-k_1,2y-k_2).
\label{(6.3)}
\end{equation}
Using (\ref{(6.3)}) in (\ref{(6.2)}), we have
\begin{eqnarray}
\alpha_{\ell_1,\ell_2}
&=&4\sum_{n_1,n_2}\left(\sum_{k_1,k_2}h_{k_1,k_2}h_{k_1+n_1-2\ell_1,
k_2+n_2-2\ell_2} \right)\alpha_{n_1,n_2}.\label{(6.4)}
\end{eqnarray}
Because ${\rm supp}(\phi)\subset [0,5]^2$, the only possible nonzero
$\alpha_{\ell_1,\ell_2}$ are for $-4\leq\ell_1,\ell_2\leq 4$. Let
$\alpha$ be the vector of length 81 consisting of the
$\alpha_{\ell_1,\ell_2}$'s for some fixed ordering of the indices in
the range $-4\leq\ell_1,\ell_2\leq 4$ and define the matrix
\begin{equation}
A_{(\ell_1,\ell_2),(n_1,n_2)}=4\sum_{k_1,k_2}h_{k_1,k_2}h_{k_1+n_1-2\ell_1,
k_2+n_2-2\ell_2}\label{(6.5)}
\end{equation}
for this same ordering. Then equation (\ref{(6.4)}) says that $\alpha$
is an eigenvector of $A$ with eigenvalue $\lambda=1$, i.e.,
$\alpha=A\alpha.$ Now, condition (ii) of Section \ref{Section 1}
implies that
$$
4\sum_{j_1,j_2}\sum_{k_1,k_2}h_{k_1,k_2}h_{k_1-2j_1,k_2-2j_2}
x^{2j_1}y^{2j_2}=1,
$$
i.e.
$$
4\sum_{k_1,k_2}h_{k_1,k_2}h_{k_1-2j_1,k_2-2j_2}=
\delta_{j_1,0}\delta_{j_2,0}.
$$
Thus the vector $\delta$ of length 81 consisting of the entries
$\delta_{\ell_1,0}\delta_{\ell_2,0}$ for the same ordering as before is
also an eigenvector for $A$ with eigenvalue $\lambda=1$.
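In practice, the non-degeneracy of $\lambda=1$ can be checked
numerically. The following Python sketch is again only an illustration
(the ordering of the index pairs and the tolerance are arbitrary
choices); it assembles the $81\times 81$ matrix $A$ of (\ref{(6.5)})
from a $6\times 6$ coefficient array and tests whether $\lambda=1$ is a
simple eigenvalue. The Matlab and Mathematica computations reported for
Case 1 below follow the same pattern.
\begin{verbatim}
import numpy as np

def lawton_matrix(h):
    # h[k1, k2], 0 <= k1, k2 <= 5, are the filter coefficients of m(x, y)
    pairs = [(l1, l2) for l1 in range(-4, 5) for l2 in range(-4, 5)]
    A = np.zeros((len(pairs), len(pairs)))          # 81 x 81
    for r, (l1, l2) in enumerate(pairs):
        for c, (n1, n2) in enumerate(pairs):
            s = 0.0
            for k1 in range(6):
                for k2 in range(6):
                    i1, i2 = k1 + n1 - 2 * l1, k2 + n2 - 2 * l2
                    if 0 <= i1 < 6 and 0 <= i2 < 6:
                        s += h[k1, k2] * h[i1, i2]
            A[r, c] = 4.0 * s
    return A

def eigenvalue_one_is_simple(h, tol=1e-8):
    eigenvalues = np.linalg.eigvals(lawton_matrix(h))
    return np.count_nonzero(np.abs(eigenvalues - 1.0) < tol) == 1
\end{verbatim}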
For completeness, we state the generalization of Lawton's condition
(cf. \cite{L}) in ${\bf R}^2$.
\begin{theorem} \label{Theorem 6.1}
Let $m(x,y)$ be a given polynomial satisfying (i) and (ii), and let $A$
be the matrix defined as in equation (\ref{(6.5)}) from the
coefficients of $m(x,y)$. Let $\phi$ be the function generated by
equation (\ref{(6.1)}). If $\lambda=1$ is a non-degenerate eigenvalue
of $A$, then
$\{\phi(\cdot-\ell_1,\cdot-\ell_2)|(\ell_1,\ell_2)\in{\bf Z}^2\}$ is an
orthonormal set.
\end{theorem}
We also need the following well-known necessary and sufficient
condition for orthonormality.
\begin{theorem}\label{Theorem 6.2}
Let $m(x,y)$ be a given polynomial satisfying (i) and (ii). Let $\phi$
be the function generated by equation (\ref{(6.1)}). Then
$\{\phi(\cdot-\ell_1,\cdot-\ell_2)|(\ell_1,\ell_2)\in{\bf Z}^2\}$ is an
orthonormal set if and only if
$$ \sum_{(k,\ell)\in {\bf Z}^2} |\widehat{\phi}((\omega_1,\omega_2)+
2\pi (k,\ell))|^2 = 1, \quad \forall (\omega_1,\omega_2)\in
[-\pi,\pi]^2.$$
\end{theorem}

{\bf Case 1:} We have used Matlab and Mathematica to check the
eigenvalues of the Lawton matrix associated with this two-parameter
family for a large sample of parameters. The eigenvalue $\lambda=1$ was
non-degenerate for every sample we tested.

{\bf Case 2:} These solutions are not associated with scaling
functions. The conditions for Subcase 2a immediately imply
$\displaystyle m(e^{i\omega_1},1)=(1+e^{i5\omega_1})/2.$ If we consider
the one-dimensional restriction $\bar{m}(\omega):=m(e^{i\omega},1)$, we
see that $\{\pm{\pi \over 5}, \pm{3\pi \over 5},\pm\pi\}$ are the zeros
of $\bar{m}(\omega)$. Because $m(e^{i\omega},-1)=0$, condition (ii)
implies that $|\bar{m}(\omega)|^2+ |\bar{m}(\omega+\pi)|^2=1$.
Moreover,
$$
\left|\bar{m}\left(-{3\pi \over 5}+\pi\right)\right|=
\left|\bar{m}\left(-{\pi \over 5}+\pi\right)\right|=
\left|\bar{m}\left({3\pi \over 5}-\pi\right)\right|=
\left|\bar{m}\left({\pi \over 5}-\pi\right)\right|=1.
$$
Because $\left\{\xi_1={2\pi \over 5},\xi_2={4\pi \over 5},\xi_3
=-{2\pi \over 5},\xi_4=-{4\pi \over 5} \right\}$ is a nontrivial cycle
in $[-\pi,\pi]$ for the operation $\xi\rightarrow 2\xi$ mod $2\pi$ such
that $|\bar{m}(\xi_i)|=1$, the set of functions
$\{\bar{\phi}(\cdot-n)\}_{n\in{\bf Z}}$ associated with
$\bar{m}(\omega)$ is not orthonormal and
$$\sum_k |\hat{\bar{\phi}}({2\pi \over 5}+2\pi k)|^2=0$$
(see \cite{D}). So, for the unrestricted function, we have
$$\sum_{k,\ell} \left|\hat{\phi}\left({2\pi \over 5}+2\pi k,
2\pi\ell\right) \right|^2=0$$
which contradicts Theorem \ref{Theorem 6.2}. The other Subcase 2b has
the property that
$\displaystyle m(e^{i\omega_1},1)=(1+e^{i3\omega_1})/2$ which similarly
excludes it from being associated with scaling functions.

{\bf Case 3:} This case has the same problematic factors as Case 2 but
with respect to the other component
$\displaystyle m(1,e^{i\omega_2})=(1+e^{i5\omega_2})/2$ or
$\displaystyle m(1,e^{i\omega_2})=(1+e^{i3\omega_2})/2$.

{\bf Case 4:} The solutions for this final case have the same factors
as in Cases 2 and 3.

So, Case 1 is the only case whose solutions are associated with scaling
functions, since the other cases fail to satisfy the necessary and
sufficient condition for orthogonality. Although the refinable
functions for Cases 2-4 are not orthogonal to their shifts, they still
have associated tight frames since they satisfy condition (ii). These
cases are analogous to the univariate Haar function with support $[0,3]$.
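For completeness, we also indicate how the one-level decomposition
described at the beginning of this section can be carried out. The
sketch below is not the code used for the reported comparison; the
boundary handling, the downsampling phase, and the rescaling of the
masks by $2$ (condition (ii) implies that the coefficients of $m$ have
squared sum ${1\over 4}$, so $2m$ has unit energy) are our own choices.
The wavelet masks are built from the relations $m_1(x,y)=m(-x,y)$,
$m_2(x,y)=x\, m(x,-y)$, $m_3(x,y)=x\, m(-x,-y)$ stated in Section
\ref{Section 1}.
\begin{verbatim}
import numpy as np
from scipy.signal import correlate2d

def wavelet_masks(h):
    # h[j, k] is the coefficient of x**j * y**k in the low-pass mask m(x, y)
    n = h.shape[0]
    j = np.arange(n).reshape(-1, 1)                 # powers of x
    k = np.arange(n).reshape(1, -1)                 # powers of y
    g1 = ((-1.0) ** j) * h                                  # m(-x, y)
    g2 = np.zeros((n + 1, n)); g2[1:, :] = ((-1.0) ** k) * h        # x m(x,-y)
    g3 = np.zeros((n + 1, n)); g3[1:, :] = ((-1.0) ** (j + k)) * h  # x m(-x,-y)
    return g1, g2, g3

def one_level_decomposition(image, h):
    # correlate with each mask (scaled to unit energy) and downsample by 2
    subbands = []
    for g in (h,) + wavelet_masks(h):
        band = correlate2d(image, 2.0 * g, mode='same', boundary='symm')
        subbands.append(band[::2, ::2])
    return subbands        # [approximation, three detail subbands]
\end{verbatim}
Applying this step once to a gray-scale image produces the four
subbands used in the comparison with Haar and D4 mentioned above.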
\section{The $8\times 8$ Case} In this section, We derive several necessary conditions from the properties (i)-(iv) with $N=7$ and $M=1$. We will use these necessary conditions to show in the next section that the second order vanishing moment $M=2$ is not possible for this support size. Let $N=7$ and consider $m(x,y)=\displaystyle \sum_{i=0}^7\sum_{j=0}^7 c_{ij} x^i y^j$ which satisfies properties (i)-(iv) with $M=1$. The symmetry property (iii) implies that \begin{equation} m(x,y)= \left[\begin{array}{c} 1\\ y\\ y^2\\ y^3\\ y^4\\ y^5\\ y^6\\ y^7 \end{array}\right]^T \left[\begin{array}{cccccccc} a_0 & b_0 & a_1 & b_1 & a_2 & b_2 & a_3& b_3\\ b_{15} & a_{15} & b_{14} & a_{14} & b_{13} & a_{13} & b_{12}& a_{12}\\ a_4 & b_4 & a_5 & b_5 & a_6 & b_6 & a_7& b_7\\ b_{11} & a_{11} & b_{10} & a_{10} & b_{9} & a_{9} & b_{8}& a_{8}\\ a_8 & b_8 & a_9 & b_9 & a_{10} & b_{10} & a_{11}& b_{11}\\ b_{7} & a_{7} & b_{6} & a_{6} & b_{5} & a_{5} & b_{4}& a_{4}\\ a_{12} & b_{12} & a_{13} & b_{13} & a_{14} & b_{14} & a_{15}& b_{15}\\ b_{3} & a_{3} & b_{2} & a_{2} & b_{1} & a_{1} & b_{0}& a_{0} \end{array}\right] \left[\begin{array}{c} 1\\ x\\ x^2\\ x^3\\ x^4\\ x^5\\ x^6\\ x^7 \end{array}\right].\label{eq1} \end{equation} Properties (i) and (iii) imply that \begin{equation} \sum_{i=0}^{15} \sum_{\nu=a,b} \nu_i=\frac{1}{2}. \label{(8.1)} \end{equation} Property (ii) implies the following 25 nonlinear equations: \begin{eqnarray} &&\sum_{\nu=a,b}{} \nu_0\nu_{15} =0 \label{(8.2)}\\ &&\sum_{\nu=a,b}{} \nu_3 \nu_{12} =0 \label{(8.3)}\\ &&\sum_{\nu=a,b}{} \nu_0\nu_{11}+\nu_4\nu_{15}= 0 \label{(8.4)}\\ &&\sum_{\nu=a,b}{} \nu_0\nu_{14}+\nu_1\nu_{15}=0 \label{(8.5)}\\ &&\sum_{\nu=a,b}{} \nu_2\nu_{12}+\nu_3\nu_{13}=0 \label{(8.6)}\\ &&\sum_{\nu=a,b}{} \nu_3\nu_8+\nu_7\nu_{12}=0 \label{(8.7)}\\ &&\sum_{\nu=a,b}{} \nu_0\nu_7+\nu_4\nu_{11}+\nu_8\nu_{15}=0 \label{(8.8)}\\ &&\sum_{\nu=a,b}{} \nu_0\nu_{13}+\nu_1\nu_{14}+\nu_2\nu_{15} =0 \label{(8.9)}\\ &&\sum_{\nu=a,b}{} \nu_1\nu_{12}+\nu_2\nu_{13}+\nu_3\nu_{14}=0 \label{(8.10)}\\ &&\sum_{\nu=a,b}{} \nu_3\nu_4+\nu_7\nu_8+\nu_{11}\nu_{12}=0 \label{(8.11)}\\ &&\sum_{\nu=a,b}{} \nu_0\nu_3+\nu_4\nu_7+\nu_8\nu_{11}+\nu_{12}\nu_{15}=0 \label{(8.12)}\\ &&\sum_{\nu=a,b}{} \nu_0\nu_{10}+\nu_1\nu_{11}+\nu_4\nu_{14}+\nu_5\nu_{15}=0 \label{(8.13)}\\ &&\sum_{\nu=a,b}{} \nu_0\nu_{12}+\nu_1\nu_{13}+\nu_2\nu_{14}+\nu_3\nu_{15}=0 \label{(8.14)}\\ &&\sum_{\nu=a,b}{} \nu_2\nu_8+\nu_3\nu_9+\nu_6\nu_{12}+\nu_7\nu_{13}=0 \label{(8.15)}\\ &&\sum_{\nu=a,b}{} \nu_0\nu_6+\nu_1\nu_7+\nu_4\nu_{10}+\nu_5\nu_{11}+\nu_8\nu_{14}+\nu_9\nu_{15}=0 \label{(8.16)}\\ &&\sum_{\nu=a,b}{} \nu_0\nu_9+\nu_1\nu_{10}+\nu_2\nu_{11}+\nu_4\nu_{13}+\nu_5\nu_{14}+\nu_6\nu_{15}=0 \label{(8.17)}\\ &&\sum_{\nu=a,b}{} \nu_1\nu_8+\nu_2\nu_9+\nu_3\nu_{10}+\nu_5\nu_{12}+\nu_6\nu_{13}+\nu_7\nu_{14}=0 \label{(8.18)}\\ &&\sum_{\nu=a,b}{} \nu_2\nu_4+\nu_3\nu_5+\nu_6\nu_8+\nu_7\nu_9+\nu_{10}\nu_{12}+\nu_{11}\nu_{13}=0 \label{(8.19)}\\ &&\sum_{\nu=a,b}{} \nu_0\nu_8+\nu_1\nu_9+\nu_2\nu_{10}+\nu_3\nu_{11}+\nu_4\nu_{12}+\nu_5\nu_{13}+\nu_6\nu_{14} \hspace{.25in}\nonumber\\ &&\hspace{3in}+\nu_7\nu_{15}=0 \label{(8.20)}\\ \nonumber\\ &&\sum_{\nu=a,b}{} \nu_0\nu_2+\nu_1\nu_3+\nu_4\nu_6+\nu_5\nu_7+\nu_8\nu_{10}+\nu_9\nu_{11}+\nu_{12}\nu_{14} \hspace{.25in}\nonumber\\ &&\hspace{3in}+\nu_{13}\nu_{15}=0 \label{(8.21)}\\ \nonumber\\ &&\sum_{\nu=a,b}{} \nu_0\nu_5+\nu_1\nu_6+\nu_2\nu_7+\nu_4\nu_9+\nu_5\nu_{10}+ \nu_6\nu_{11}+\nu_8\nu_{13}+\nu_9\nu_{14} \hspace{.25in}\nonumber\\ &&\hspace{3in}+\nu_{10}\nu_{15}=0 \label{(8.22)}\\ \nonumber\\ 
&&\sum_{\nu=a,b}{} \nu_1\nu_4+\nu_2\nu_5+\nu_3\nu_6+\nu_5\nu_8+\nu_6\nu_9+ \nu_7\nu_{10}+\nu_9\nu_{12}+\nu_{10}\nu_{13} \hspace{.25in}\nonumber\\ &&\hspace{3in}+\nu_{11}\nu_{14} =0 \label{(8.23)}\\ \nonumber\\ &&\sum_{\nu=a,b}{} \nu_0\nu_1+\nu_1\nu_2+\nu_2\nu_3+\nu_4\nu_5+\nu_5\nu_6+ \nu_6\nu_7+\nu_8\nu_9+\nu_9\nu_{10}+\nu_{10}\nu_{11} \hspace{.25in}\nonumber\\ &&\hspace{2in}+\nu_{12}\nu_{13}+\nu_{13}\nu_{14}+\nu_{14}\nu_{15} =0 \label{(8.24)}\\ \nonumber\\ &&\sum_{\nu=a,b}{} \nu_0\nu_4+\nu_1\nu_5+\nu_2\nu_6+\nu_3\nu_7+ \nu_4\nu_8+\nu_5\nu_9 +\nu_6\nu_{10}+\nu_7\nu_{11}+\nu_8\nu_{12} \hspace{.25in}\nonumber\\ &&\hspace{2in}+\nu_9\nu_{13}+\nu_{10}\nu_{14}+\nu_{11}\nu_{15}=0 \label{(8.25)}\\ \nonumber\\ \displaystyle &&\sum_{i=0}^{15}{} \sum_{\nu=a,b} \nu_i^2 =\frac{1}{8}. \label{(8.26)} \end{eqnarray} Property (iv) with $M=1$ implies \begin{eqnarray} a_0+a_1+a_2+a_3&=&b_0+b_1+b_2+b_3 \label{(8.27)}\\ a_4+a_5+a_6+a_7&=&b_4+b_5+b_6+b_7 \label{(8.28)}\\ a_8+a_9+a_{10}+a_{11}&=&b_8+b_9+b_{10}+b_{11} \label{(8.29)}\\ a_{12}+a_{13}+a_{14}+a_{15} &=& b_{12}+b_{13}+b_{14}+b_{15} \label{(8.30)}\\ a_0+a_4+a_8+a_{12}&=&b_3+b_7+b_{11}+b_{15} \label{(8.31)} \\ a_1+a_5+a_9+a_{13}&=&b_2+b_6+b_{10}+b_{14} \label{(8.32)} \\ a_2+a_6+a_{10}+a_{14}&=&b_1+b_5+b_9+b_{13} \label{(8.33)} \\ a_3+a_7+a_{11}+a_{15}&=&b_0+b_4+b_8+b_{12}. \label{(8.34)} \end{eqnarray} Using (\ref{(8.27)})-(\ref{(8.30)}) and (\ref{(8.1)}), we immediately have \begin{equation} \sum_{i=0}^{15}a_i=\sum_{i=0}^{15} b_i=\frac{1}{4}. \label{(8.35)} \end{equation} Next, we use various combinations of the nonlinear equations to make perfect squares as we have done previously. This enables us to introduce parameters in order to simplify these equations. Using (\ref{(8.26)}), (\ref{(8.25)}), (\ref{(8.20)}), and (\ref{(8.14)}), we have \begin{eqsplitmath} \sum_{\nu=a,b} (\nu_0+\nu_4+\nu_8+\nu_{12})^2+ (\nu_1+\nu_5+\nu_9+\nu_{13})^2 \\ +(\nu_2+\nu_6+\nu_{10}+\nu_{14})^2+ (\nu_3+\nu_7+\nu_{11}+\nu_{15})^2 ={1\over 8}. \label{(9.5)} \end{eqsplitmath} Moreover, using (\ref{(8.5)}), (\ref{(8.6)}), (\ref{(8.13)}), (\ref{(8.15)}), (\ref{(8.16)}), (\ref{(8.19)}), (\ref{(8.21)}), we have \begin{eqsplitmath} \sum_{\nu=a,b} (\nu_0+\nu_4+\nu_8+\nu_{12})(\nu_2+\nu_6+\nu_{10}+\nu_{14}) \\ +(\nu_1+\nu_5+\nu_9+\nu_{13})(\nu_3+\nu_7+\nu_{11}+\nu_{15}) =0. \label{(9.6)} \end{eqsplitmath} After using (\ref{(8.31)})-(\ref{(8.34)}), (\ref{(9.5)}), and (\ref{(9.6)}), we obtain \begin{eqsplitmath} ((a_0+a_4+a_8+a_{12})\pm(a_2+a_6+a_{10}+a_{14}))^2 \\ +((a_1+a_5+a_9+a_{13})\pm(a_3+a_7+a_{11}+a_{15}))^2 =\frac{1}{8}. \label{(9.6b)} \end{eqsplitmath} Choosing the plus sign in equation (\ref{(9.6b)}) and using (\ref{(8.35)}) yields \begin{eqnarray} a_0+a_4+a_8+a_{12}+a_2+a_6+a_{10}+a_{14}&=& r_0, \\ a_1+a_5+a_9+a_{13}+a_{3}+a_{7}+a_{11}+a_{15}&=&s_0, \end{eqnarray} where $r_0+s_0=1/4$ and $r_0s_0=0$. Giving us four cases: $r_0,s_0=1/4$ or $0$. Similarly, using (\ref{(8.26)}), (\ref{(8.24)}), (\ref{(8.21)}), and (\ref{(8.12)}), we have \begin{eqsplitmath} \sum_{\nu=a,b} (\nu_0+\nu_1+\nu_2+\nu_3)^2+(\nu_4+\nu_5+\nu_6+\nu_7)^2\\ +(\nu_8+\nu_9+\nu_{10}+\nu_{11})^2+(\nu_{12}+\nu_{13}+\nu_{14}+\nu_{15})^2 ={1\over 8}. \label{(9.1)} \end{eqsplitmath} Using (\ref{(8.20)}), (\ref{(8.18)}), (\ref{(8.17)}), (\ref{(8.15)}), (\ref{(8.13)}), (\ref{(8.7)}), (\ref{(8.4)}), we have \begin{eqsplitmath} \sum_{\nu=a,b} (\nu_0+\nu_1+\nu_2+\nu_3)(\nu_8+\nu_9+\nu_{10}+\nu_{11})\\ +(\nu_4+\nu_5+\nu_6+\nu_7)(\nu_{12}+\nu_{13}+\nu_{14}+\nu_{15})=0. 
\label{(9.2)} \end{eqsplitmath} In a similar fashion as before, we have \begin{eqnarray} a_0+a_1+a_2+a_3+ a_8+a_9+a_{10}+a_{11}&=&t_0,\\ a_4+a_5+a_6+a_7 +a_{12}+a_{13}+a_{14}+a_{15}&=&u_0, \end{eqnarray} where $t_0+u_0=1/4$ and $t_0u_0=0$. We now refine our solutions to sums of four coefficients. Choosing the minus sign in equation (\ref{(9.6b)}) yields \begin{eqnarray*} a_0+a_4+a_8+a_{12} &=&\frac{1}{2}(r_0+r_1),\ \ a_2+a_6+a_{10}+a_{14}\ =\ \frac{1}{2}(r_0-r_1) \\ a_1+a_5+a_9+a_{13} &=&\frac{1}{2}(s_0+s_1),\ \ a_{3}+a_{7}+a_{11}+a_{15}\ =\ \frac{1}{2}(s_0-s_1), \end{eqnarray*} where $r_1=\frac{1}{4}\cos\alpha$ and $s_1=\frac{1}{4}\sin\alpha$. Furthermore, equations (\ref{(8.2)}), (\ref{(8.3)}), (\ref{(8.4)}), (\ref{(8.7)}), (\ref{(8.8)}), (\ref{(8.11)}), and (\ref{(8.12)}) yield $$ \sum (a_0+a_4+a_8+a_{12})(a_{3}+a_{7}+a_{11}+a_{15}) =0. \label{(9.7)} $$ This, together with (\ref{(8.31)}) and (\ref{(8.34)}), gives the following constraint on our parameters: \begin{equation} 2(r_0+r_1)(s_0-s_1)=0.\label{rseq} \end{equation} Because of the relationship between $r_1$ and $s_1$, equation (\ref{rseq}) produces three cases: $r_1=r_0$ and $s_1=s_0$, $r_1=-r_0$ and $s_1=s_0$, or $r_1=-r_0$ and $s_1=-s_0$. Similarly, \begin{eqnarray*} a_0+a_1+a_2+a_3 &=&\frac{1}{2}(t_0+t_1),\ \ a_8+a_9+a_{10}+a_{11}\ =\ \frac{1}{2}(t_0-t_1) \\ a_4+a_5+a_6+a_7 &=&\frac{1}{2}(u_0+u_1), \ \ a_{12}+a_{13}+a_{14}+a_{15}\ =\ \frac{1}{2}(u_0-u_1), \end{eqnarray*} where $t_1=\frac{1}{4}\cos \beta$ and $u_1=\frac{1}{4}\sin \beta$. Additionally, equations (\ref{(8.2)}), (\ref{(8.3)}), (\ref{(8.5)}), (\ref{(8.9)}), (\ref{(8.10)}), and (\ref{(8.14)}) with (\ref{(8.27)}) and (\ref{(8.30)}) give us $(t_0+t_1)(u_0-u_1)=0$. In summary, we have the following lemma. \begin{lemma} \label{lems4} $$ \begin{array}{ll} a_0+a_4+a_8+a_{12} \ =\ \frac{1}{2}(r_0+r_1),& b_0+b_4+b_8+b_{12}\ =\ \frac{1}{2}(s_0-s_1),\\ a_1+a_5+a_9+a_{13}\ =\ \frac{1}{2}(s_0+s_1),& b_1+b_5+b_9+b_{13}\ =\ \frac{1}{2}(r_0-r_1)\\ a_2+a_6+a_{10}+a_{14}\ =\ \frac{1}{2}(r_0-r_1),& b_2+b_6+b_{10}+b_{14}\ =\ \frac{1}{2}(s_0+s_1),\\ a_{3}+a_{7}+a_{11}+a_{15}\ =\ \frac{1}{2}(s_0-s_1),& b_{3}+b_{7}+b_{11}+b_{15}\ =\ \frac{1}{2}(r_0+r_1),\\ a_0+a_1+a_2+a_3 \ =\ \frac{1}{2}(t_0+t_1),& b_0+b_1+b_2+b_3 \ =\ \frac{1}{2}(t_0+t_1),\\ a_4+a_5+a_6+a_7 \ =\ \frac{1}{2}(u_0+u_1),& b_4+b_5+b_6+b_7 \ =\ \frac{1}{2}(u_0+u_1),\\ a_8+a_9+a_{10}+a_{11}\ =\ \frac{1}{2}(t_0-t_1),& b_8+b_9+b_{10}+b_{11}\ =\ \frac{1}{2}(t_0-t_1),\\ a_{12}+a_{13}+a_{14}+a_{15}\ = \ \frac{1}{2}(u_0-u_1),& b_{12}+b_{13}+b_{14}+b_{15}\ =\ \frac{1}{2}(u_0-u_1). \end{array} $$ where \begin{eqnarray*} r_0+s_0=\frac{1}{4}, \ \ r_0s_0=0, \ \ (r_0+r_1)(s_0-s_1)=0, \ \ r_1=\frac{1}{4}\cos \alpha, \ \ s_1\ =\ \frac{1}{4}\sin \alpha,\\ t_0+u_0=\frac{1}{4} , \ \ t_0u_0=0 , \ \ (t_0+t_1)(u_0-u_1)=0 , \ \ t_1=\frac{1}{4}\cos \beta , \ \ u_1\ =\ \frac{1}{4}\sin \beta. \end{eqnarray*} \end{lemma} Because there are no symmetric compactly-supported tensor-product wavelets with more than one vanishing moment, it is natural to ask whether there are any nonseparable symmetric solutions with multiple vanishing moments. In \cite{LR1}, it is shown that there are no symmetric solutions with higher vanishing moments for $m(x,y)$ with $N=7$. Although the bivariate case allows enough freedom to generate a family of symmetric solutions, it does not allow for multiple vanishing moments, at least for the support size we have considered.
So, compact support, orthogonality, vanishing moments, and symmetry are again at odds in the construction of bivariate wavelets. \begin{references} \bibitem{A} Ayache, A., ``Construction of nonseparable dyadic compactly supported orthonormal wavelet bases for $L^2(R^2)$ of arbitrarily high regularity'', Revista Matem\'atica Iberoamericana, vol. 15, pp. 37--58, 1999. \bibitem{BW} Belogay, E. and Y. Wang, ``Arbitrarily smooth orthogonal nonseparable wavelets in ${R}^2$'', SIAM J. Math. Anal., vol. 30, pp. 678--697, 1999. \bibitem{CD} Cohen, A. and I. Daubechies, ``Nonseparable bidimensional wavelet bases'', Revista Matem\'atica Iberoamericana, vol. 9, pp. 51--137, 1993. \bibitem{CS} Cohen, A. and J.~M. Schlenker, ``Compactly supported bidimensional wavelet bases with hexagonal symmetry'', Constr. Approx., vol. 9, pp. 209--236, 1993. \bibitem{D} Daubechies, I., Ten Lectures on Wavelets, SIAM, Philadelphia, 1992. \bibitem{HL97} He, W. and M. J. Lai, ``Examples of bivariate nonseparable compactly supported orthonormal continuous wavelets'', Wavelet Applications in Signal and Image Processing IV, Proceedings of SPIE, vol. 3169, pp. 303--314, 1997; also appears in IEEE Transactions on Image Processing, vol. 9, no. 5, 2000. \bibitem{HL98} He, W. and M. J. Lai, ``Construction of bivariate compactly supported biorthogonal box spline wavelets with arbitrarily high regularities'', Applied Comput. Harmon. Anal., vol. 6, pp. 53--74, 1998. \bibitem{KV} Kova\v{c}evi\'c, J. and M. Vetterli, ``Nonseparable multidimensional perfect reconstruction filter banks and wavelet bases for ${R}^n$'', IEEE Trans. Info. Theory, vol. 38, pp. 533--555, 1992. \bibitem{LR1} Lai, M. J. and D. W. Roach, ``The nonexistence of bivariate symmetric wavelets with two vanishing moments and short support'', Trends in Approximation Theory, pp. 213--223, Innovations in Applied Mathematics, Vanderbilt Univ. Press, 2001. \bibitem{L} Lawton, W., ``Necessary and sufficient conditions for constructing orthonormal wavelet bases'', J. Math. Phys., vol. 32, pp. 57--61, 1991. \bibitem{RS} Riemenschneider, S. D. and Z. Shen, ``Box splines, cardinal series, and wavelets'', Approximation Theory and Functional Analysis, editor C. K. Chui, pp. 133--149, Academic Press, Boston, 1991. \end{references} \end{article} \end{document}
\begin{document} \title{BSDEs driven by $G$-Brownian motion under degenerate case and its application to the regularity of fully nonlinear PDEs} \author{Mingshang Hu \thanks{Zhongtai Securities Institute for Financial Studies, Shandong University, Jinan, Shandong 250100, PR China. [email protected]. Research supported by National Key R\&D Program of China (No. 2018YFA0703900) and NSF (No. 11671231). } \and Shaolin Ji\thanks{Zhongtai Securities Institute for Financial Studies, Shandong University, Jinan, Shandong 250100, PR China. [email protected]. Research supported by NSF (No. 11971263 and 11871458). } \and Xiaojuan Li\thanks{Zhongtai Securities Institute for Financial Studies, Shandong University, Jinan 250100, China. Email: [email protected].} } \maketitle \textbf{Abstract}. In this paper, we obtain the existence and uniqueness theorem for backward stochastic differential equation driven by $G$-Brownian motion ($G$-BSDE) under degenerate case. Moreover, we propose a new probabilistic method based on the representation theorem of $G$-expectation and weak convergence to obtain the regularity of fully nonlinear PDE associated to $G$-BSDE. {\textbf{Key words}. } $G$-expectation; $G$-Brownian motion; Backward stochastic differential equation; Fully nonlinear PDE \textbf{AMS subject classifications.} 60H10 \addcontentsline{toc}{section}{\hspace*{1.8em}Abstract} \section{Introduction} Motivated by volatility uncertainty in finance, Peng \cite{Peng2004, Peng2005, P07a, P08a} introduced the notions of $G$-expectation $\mathbb{\hat{E}}[\cdot]$ and $G$-Brownian motion $B$ for each monotone and sublinear function $G:\mathbb{S}_{d}\rightarrow \mathbb{R}$. It\^{o}'s calculus with respect to $G$-Brownian motion was constructed. Furthermore, he studied stochastic differential equations driven by $G$-Brownian motion ($G$-SDEs) and a special type of backward stochastic differential equation (BSDE) containing only the solution $Y$, and then established the relevant theory. Denis et al. \cite{DHP11} (see also \cite{HP09}) showed that the $G$-expectation can be represented as an upper expectation over a family of weakly compact and non-dominated probability measures $\mathcal{P}$, and gave the characterizations of some spaces by the inner capacity associated to $\mathcal{P}$. By quasi-sure stochastic analysis based on the outer capacity, Denis and Martini \cite{DenisMartini2006} made a great contribution to the study of super-pricing of contingent claims under volatility uncertainty. The relationship between these two capacities has been clearly explained in the notes and comments of Chapter 6 in \cite{P2019}. Hu et al. \cite{HJPS1} studied the following BSDE driven by $G$-Brownian motion ($G$-BSDE) \begin{equation} Y_{t}=\xi+\int_{t}^{T}f(s,Y_{s},Z_{s})ds+\int_{t}^{T}g(s,Y_{s},Z_{s})d\langle B\rangle_{s}-\int_{t}^{T}Z_{s}dB_{s}-(K_{T}-K_{t}) \label{e0-1} \end{equation} under the non-degenerate $G$, i.e., there exists a constant $\underline {\sigma}^{2}>0$ such that \[ G(A)-G(B)\geq \frac{1}{2}\underline{\sigma}^{2}\mathrm{tr}[A-B]\text{ for }A\geq B. \] They proved that the above $G$-BSDE has a unique solution $(Y,Z,K)$, where $K$ is a non-increasing $G$-martingale with $K_{0}=0$. Soner et al. \cite{STZ11} studied a new type of fully nonlinear BSDE, called $2$BSDE, by a different formulation and method, and obtained a deep existence and uniqueness theorem for $2$BSDEs.
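In the one-dimensional case $d=1$, the distinction between non-degenerate and degenerate $G$ can be illustrated by the typical example (in essence the one used in Sections 3 and 4 below)
\[
G(a)=\frac{1}{2}\left( \bar{\sigma}^{2}a^{+}-\underline{\sigma}^{2}a^{-}\right) \text{ for }a\in \mathbb{R}\text{, where }0\leq \underline{\sigma}\leq \bar{\sigma}.
\]
This $G$ is non-degenerate precisely when $\underline{\sigma}>0$, while the choice $\underline{\sigma}=0$, i.e., $G(a)=\frac{1}{2}\bar{\sigma}^{2}a^{+}$, is degenerate; the latter is the case treated in Section 3, and its non-degenerate approximations $G_{\varepsilon}(a)=\frac{1}{2}[(\bar{\sigma}^{2}+\varepsilon^{2})a^{+}-\varepsilon^{2}a^{-}]$ appear in (\ref{new-e2-10}).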
For recent advances in these two directions, the reader may refer to \cite{DK, HJ0, HJ1, HYH, LPH, LRT, MPZ, PZ} and the references therein. The key step to obtain the solution of $G$-BSDE (\ref{e0-1}) under non-degenerate $G$ is to use Krylov's regularity estimate for fully nonlinear PDEs (see Appendix C.4 in \cite{P2019}). But under degenerate $G$, we have to get around the difficulty that the regularity estimation condition (see Definition C.4.3 in \cite{P2019}) is not satisfied. A natural idea is to construct a family of non-degenerate $G_{\varepsilon}$ with $\varepsilon \in(0,\varepsilon_{0}]$ such that $G_{\varepsilon}\uparrow G$ as $\varepsilon \downarrow0$. The corresponding $G_{\varepsilon}$-expectation and the set of probability measures are denoted by $\mathbb{\hat{E}}^{\varepsilon }[\cdot]$ and $\mathcal{P}^{\varepsilon}$, respectively. By the definition of $G$-expectation, we know that $\mathbb{\hat{E}}^{\varepsilon}[X]\uparrow \mathbb{\hat{E}}[X]$ for $X\in L_{G}^{1}(\Omega_{T})$ and $\mathcal{P}$ is the closure of $\mathcal{P}_{1}:=\cup_{\varepsilon>0}\mathcal{P}^{\varepsilon}$ under the topology of weak convergence. It is important to note that quasi-sure stochastic analysis with respect to $\mathcal{P}$ (i.e., $\mathcal{P}$-q.s.) and with respect to $\mathcal{P}_{1}$ (i.e., $\mathcal{P}_{1}$-q.s.) are different (see \cite{HWZ}). Following the method proposed in the proof of Proposition A.1 in \cite{STZ}, we can get a process $Z$ in the $\mathcal{P}_{1}$-q.s. sense such that \[ \inf_{\eta \in M^{0}(0,T)}\mathbb{\hat{E}}^{\varepsilon}\left[ \left( \int_{0}^{T}|Z_{s}-\eta_{s}|^{2}d\langle B\rangle_{s}\right) ^{p/2}\right] =0\text{ for }\varepsilon>0\text{, }p>1\text{,} \] where the definition of $M^{0}(0,T)$ can be found in Section 3. At this point, there is a natural misconception that $Z\in H_{G}^{2,p}(0,T;\langle B\rangle)$ holds. But we notice that Sion's minimax theorem cannot be used to obtain \[ \inf_{\eta \in M^{0}(0,T)}\sup_{\varepsilon \in(0,\varepsilon_{0}]} \mathbb{\hat{E}}^{\varepsilon}\left[ \left( \int_{0}^{T}|Z_{s}-\eta_{s} |^{2}d\langle B\rangle_{s}\right) ^{p/2}\right] =\sup_{\varepsilon \in(0,\varepsilon_{0}]}\inf_{\eta \in M^{0}(0,T)}\mathbb{\hat{E}}^{\varepsilon }\left[ \left( \int_{0}^{T}|Z_{s}-\eta_{s}|^{2}d\langle B\rangle_{s}\right) ^{p/2}\right] , \] because $(0,\varepsilon_{0}]$ is not compact. Therefore, whether $Z$ belongs to $H_{G}^{2,p}(0,T;\langle B\rangle)$ remains unsolved, even for the $G$-martingale representation theorem, which is a special case of $G$-BSDE (\ref{e0-1}), i.e., $f=g=0$. Thus, one purpose of this paper is to investigate the existence and uniqueness theorem for $G$-BSDE (\ref{e0-1}) under degenerate $G$. It is well known that the theory of classical BSDEs provides a tool to study the regularity of quasilinear PDEs (see \cite{P-P92}). However, this classical tool is not suitable for the regularity of fully nonlinear PDEs, and to the best of our knowledge there is no result in this field. So, the other purpose of this paper is to establish the regularity of fully nonlinear PDEs by $G$-BSDEs. In this paper, we introduce a quite different method to study the existence and uniqueness theorem for a type of well-posed $G$-BSDEs under degenerate $G$ (see (\ref{new-e2-4})), which has two major contributions. The first one is to obtain the solution $(Y,Z,K)$ for $G$-BSDE under degenerate $G$ in the extended $\tilde{G}$-expectation space, which is essential to show that $K$ is a $G$-martingale in the key Lemma \ref{pro2-6}.
The second one is to propose a new probabilistic method based on the representation theorem of $G$-expectation and weak convergence to obtain the uniform lower bound for $\partial_{xx}^{2}u_{\varepsilon}$ with $\varepsilon>0$, where $u_{\varepsilon}$ is a solution to a fully nonlinear PDE associated to a $G_{\varepsilon}$-BSDE under non-degenerate $G_{\varepsilon}$ (see (\ref{new-e2-11}) and (\ref{e2-26})). This uniform lower bound for $\partial_{xx}^{2}u_{\varepsilon}$ plays a key role in proving $Z\in H_{G}^{2,p}(0,T;\langle B\rangle)$ in Lemma \ref{pro2-6}, and to the best of our knowledge it is completely new in the literature because, unlike the bound given by Krylov's regularity estimate for fully nonlinear PDEs, it does not depend on $\varepsilon$. Finally, we use the above probabilistic method to obtain the regularity of the fully nonlinear PDE associated to $G$-BSDE under degenerate $G$. The paper is organized as follows. In Section 2, we present some basic results of $G$-expectations. The existence and uniqueness theorem for $G$-BSDE under degenerate case is established in Section 3. In Section 4, we obtain the regularity of the fully nonlinear PDE associated to $G$-BSDE under degenerate $G$. \section{Preliminaries} We recall some basic results of $G$-expectations. The reader may refer to Peng's book \cite{P2019} for more details. Let $T>0$ be given and let $\Omega_{T}=C_{0}([0,T];\mathbb{R}^{d})$ be the space of $\mathbb{R}^{d}$-valued continuous functions on $[0,T]$ with $\omega_{0}=0$. The canonical process is defined by $B_{t}(\omega):=\omega_{t}$ for $\omega \in \Omega_{T}$ and $t\in \lbrack0,T]$. For any fixed $t\leq T$, set \[ Lip(\Omega_{t}):=\{ \varphi(B_{t_{1}},B_{t_{2}}-B_{t_{1}},\ldots,B_{t_{N}}-B_{t_{N-1}}):N\geq1,t_{1}<\cdots<t_{N}\leq t,\varphi \in C_{b.Lip}(\mathbb{R}^{d\times N})\}, \] where $C_{b.Lip}(\mathbb{R}^{d\times N})$ denotes the space of bounded Lipschitz functions on $\mathbb{R}^{d\times N}$. Let $G:\mathbb{S}_{d}\rightarrow \mathbb{R}$ be a given monotone and sublinear function, where $\mathbb{S}_{d}$ denotes the set of $d\times d$ symmetric matrices. Then there exists a unique bounded, convex and closed set $\Sigma \subset \mathbb{S}_{d}^{+}$ such that \begin{equation} G(A)=\frac{1}{2}\sup_{\gamma \in \Sigma}\mathrm{tr}[A\gamma]\text{ for } A\in \mathbb{S}_{d}, \label{new-e1-1} \end{equation} where $\mathbb{S}_{d}^{+}$ denotes the set of $d\times d$ nonnegative matrices. If there exists a $\underline{\sigma}^{2}>0$ such that $\gamma \geq \underline{\sigma}^{2}I_{d}$ for any $\gamma \in \Sigma$, $G$ is called non-degenerate. Otherwise, $G$ is called degenerate. Peng \cite{P07a, P08a} constructed the $G$-expectation $\mathbb{\hat{E}}:Lip(\Omega_{T})\rightarrow \mathbb{R}$ and the conditional $G$-expectation $\mathbb{\hat{E}}_{t}:Lip(\Omega_{T})\rightarrow Lip(\Omega_{t})$ as follows: \begin{description} \item[(i)] For each $s_{1}\leq s_{2}\leq T$ and $\varphi \in C_{b.Lip}(\mathbb{R}^{d})$, define $\mathbb{\hat{E}}[\varphi(B_{s_{2}}-B_{s_{1}})]=u(s_{2}-s_{1},0)$, where $u$ is the viscosity solution (see \cite{CIP}) of the following $G$-heat equation: \[ \partial_{t}u-G(D_{x}^{2}u)=0,\ u(0,x)=\varphi(x). 
\] \item[(ii)] For each $X=\varphi(B_{t_{1}},B_{t_{2}}-B_{t_{1}},\ldots,B_{t_{N} }-B_{t_{N-1}})\in Lip(\Omega_{T})$, define \[ \mathbb{\hat{E}}_{t_{i}}[X]=\varphi_{i}(B_{t_{1}},\ldots,B_{t_{i}}-B_{t_{i-1} })\text{ for }i=N-1,\ldots,1\text{ and }\mathbb{\hat{E}}[X]=\mathbb{\hat{E} }[\varphi_{1}(B_{t_{1}})], \] where $\varphi_{N-1}(x_{1},\ldots,x_{N-1}):=\mathbb{\hat{E}}[\varphi (x_{1},\ldots,x_{N-1},B_{t_{N}}-B_{t_{N-1}})]$ for $(x_{1},\ldots,x_{N-1} )\in \mathbb{R}^{d\times(N-1)}$ and \[ \varphi_{i}(x_{1},\ldots,x_{i}):=\mathbb{\hat{E}}[\varphi_{i+1}(x_{1} ,\ldots,x_{i},B_{t_{i+1}}-B_{t_{i}})]\text{ for }i=N-2,\ldots,1. \] \end{description} The space $(\Omega_{T},Lip(\Omega_{T}),\mathbb{\hat{E}},(\mathbb{\hat{E}} _{t})_{t\in \lbrack0,T]})$ is a consistent sublinear expectation space, where $\mathbb{\hat{E}}_{0}=\mathbb{\hat{E}}$. The canonical process $(B_{t} )_{t\in \lbrack0,T]}$ is called the $G$-Brownian motion under $\mathbb{\hat{E} }$. For each $t\in \lbrack0,T]$, denote by $L_{G}^{p}(\Omega_{t})$ the completion of $Lip(\Omega_{t})$ under the norm $||X||_{L_{G}^{p}}:=(\mathbb{\hat{E} }[|X|^{p}])^{1/p}$ for $p\geq1$. It is clear that $\mathbb{\hat{E}}_{t}$ can be continuously extended to $L_{G}^{1}(\Omega_{T})$ under the norm $||\cdot||_{L_{G}^{1}}$. The following theorem is the representation theorem of $G$-expectation. \begin{theorem} (\cite{DHP11, HP09}) There exists a unique weakly compact and convex set of probability measures $\mathcal{P}$ on $(\Omega_{T},\mathcal{B}(\Omega_{T}))$ such that \[ \mathbb{\hat{E}}[X]=\sup_{P\in \mathcal{P}}E_{P}[X]\text{ for all }X\in L_{G}^{1}(\Omega_{T}), \] where $\mathcal{B}(\Omega_{T})=\sigma(B_{s}:s\leq T)$. \end{theorem} For this $\mathcal{P}$, define \[ \mathbb{L}^{p}(\Omega_{t}):=\left \{ X\in \mathcal{B}(\Omega_{t}):\sup _{P\in \mathcal{P}}E_{P}[|X|^{p}]<\infty \right \} \text{ for }p\geq1. \] It is easy to check that $L_{G}^{p}(\Omega_{t})\subset \mathbb{L}^{p} (\Omega_{t})$. For each $X\in \mathbb{L}^{1}(\Omega_{T})$, \[ \mathbb{\hat{E}}[X]:=\sup_{P\in \mathcal{P}}E_{P}[X] \] is still called the $G$-expectation. The capacity associated to $\mathcal{P}$ is defined by \[ c(A):=\sup_{P\in \mathcal{P}}P(A)\text{ for }A\in \mathcal{B}(\Omega_{T}). \] A set $A\in \mathcal{B}(\Omega_{T})$ is polar if $c(A)=0$. A property holds \textquotedblleft quasi-surely" (q.s. for short) if it holds outside a polar set. In the following, we do not distinguish two random variables $X$ and $Y$ if $X=Y$ q.s. \begin{definition} A process $(M_{t})_{t\leq T}$ is called a $G$-martingale if $M_{t}\in L_{G}^{1}(\Omega_{t})$ and $\mathbb{\hat{E}}_{s}[M_{t}]=M_{s}$ for any $0\leq s\leq t\leq T$. \end{definition} The following Doob's inequality for $G$-martingale can be found in \cite{STZ, Song11}. The following proof is based on \cite{HJL, STZ}. \begin{theorem} \label{th1-1}Let $1\leq p<p^{\prime}$ and $\xi \in L_{G}^{p^{\prime}} (\Omega_{T})$. Then \begin{equation} \left( \hat{\mathbb{E}}\left[ \sup_{t\leq T}\left( \hat{\mathbb{E}} _{t}[|\xi|]\right) ^{p}\right] \right) ^{1/p}\leq \left( \hat{\mathbb{E} }\left[ \sup_{t\leq T}\hat{\mathbb{E}}_{t}[|\xi|^{p}]\right] \right) ^{1/p}\leq C\left( \hat{\mathbb{E}}[|\xi|^{p^{\prime}}]\right) ^{1/p^{\prime}}, \label{e1-2} \end{equation} where \[ C=\left( 1+\frac{p}{p^{\prime}-p}\right) ^{1/p}. \] \end{theorem} \begin{proof} By the definition of $L_{G}^{p^{\prime}}(\Omega_{T})$, we only need to prove the inequality for $\xi \in Lip(\Omega_{T})$. Define \[ M_{t}=\hat{\mathbb{E}}_{t}[|\xi|]\text{ for }t\leq T. 
\] For each fixed $\lambda>0$ and integer $n\geq1$, define a stopping time \[ \tau=\inf \{t_{i}:M_{t_{i}}\geq \lambda,i=0,\ldots,n\}, \] where $t_{i}=iT/n$ and $\inf \emptyset=\infty$. It is easy to check that \[ \{ \tau=t_{i}\} \in \mathcal{B}(\Omega_{t_{i}}),\text{ }\{ \tau=\infty \} \in \mathcal{B}(\Omega_{T})\text{ and }\{ \tau=t_{i}\} \cap \{ \tau =t_{j}\}=\emptyset \text{ for }i\not =j\text{.} \] By Proposition 3.9 in \cite{HJL}, we have \[ \hat{\mathbb{E}}\left[ \sum_{i=0}^{n}|\xi|I_{\{ \tau=t_{i}\}}+0I_{\{ \tau=\infty \}}\right] =\hat{\mathbb{E}}\left[ \sum_{i=0}^{n}\hat{\mathbb{E} }_{t_{i}}[|\xi|]I_{\{ \tau=t_{i}\}}+\hat{\mathbb{E}}_{T}[0]I_{\{ \tau =\infty \}}\right] , \] which implies \[ \hat{\mathbb{E}}\left[ |\xi|I_{\{ \tau \leq t_{n}\}}\right] =\hat{\mathbb{E} }\left[ \sum_{i=0}^{n}M_{t_{i}}I_{\{ \tau=t_{i}\}}\right] \geq \lambda \hat{\mathbb{E}}\left[ I_{\{ \tau \leq t_{n}\}}\right] . \] Note that $\{ \tau \leq t_{n}\}=\{ \sup_{i}M_{t_{i}}\geq \lambda \}$, then we have \[ \lambda \hat{\mathbb{E}}\left[ I_{\{ \sup_{i}M_{t_{i}}\geq \lambda \}}\right] \leq \hat{\mathbb{E}}\left[ |\xi|I_{\{ \sup_{i}M_{t_{i}}\geq \lambda \}}\right] \leq \left( \hat{\mathbb{E}}[|\xi|^{p^{\prime}}]\right) ^{1/p^{\prime} }\left( \hat{\mathbb{E}}\left[ I_{\{ \sup_{i}M_{t_{i}}\geq \lambda \}}\right] \right) ^{1/q^{\prime}}, \] where $1/p^{\prime}+1/q^{\prime}=1$. Thus, \[ \hat{\mathbb{E}}\left[ I_{\{ \sup_{i}M_{t_{i}}\geq \lambda \}}\right] \leq \frac{1}{\lambda^{p^{\prime}}}\hat{\mathbb{E}}\left[ |\xi|^{p^{\prime} }\right] \text{ for each }\lambda>0\text{.} \] For each fixed $\lambda_{0}>0$, we have \begin{align*} \hat{\mathbb{E}}\left[ \sup_{i}M_{t_{i}}^{p}\right] & =\sup_{P\in \mathcal{P}}E_{P}\left[ \sup_{i}M_{t_{i}}^{p}\right] \\ & =\sup_{P\in \mathcal{P}}p\int_{0}^{\infty}P(\sup_{i}M_{t_{i}}\geq \lambda)\lambda^{p-1}d\lambda \\ & \leq \int_{0}^{\lambda_{0}}p\lambda^{p-1}d\lambda+\int_{\lambda_{0}} ^{\infty}p\lambda^{p-1-p^{\prime}}\hat{\mathbb{E}}[|\xi|^{p^{\prime}} ]d\lambda \\ & =(\lambda_{0})^{p}+\frac{p\lambda_{0}^{p-p^{\prime}}}{p^{\prime}-p} \hat{\mathbb{E}}[|\xi|^{p^{\prime}}]. \end{align*} Taking $\lambda_{0}=\left( \hat{\mathbb{E}}[|\xi|^{p^{\prime}}]\right) ^{1/p^{\prime}}$, we get \[ \hat{\mathbb{E}}\left[ \sup_{i}M_{t_{i}}^{p}\right] \leq \left( 1+\frac {p}{p^{\prime}-p}\right) \left( \hat{\mathbb{E}}[|\xi|^{p^{\prime}}]\right) ^{p/p^{\prime}}. \] Since $|\xi|\in L_{ip}(\Omega_{T})$, we have \[ \sup_{i}M_{t_{i}}^{p}\uparrow \sup_{t\leq T}M_{t}^{p}. \] Then we obtain \begin{equation} \hat{\mathbb{E}}\left[ \sup_{t\leq T}\left( \hat{\mathbb{E}}_{t} [|\xi|]\right) ^{p}\right] \leq \left( 1+\frac{p}{p^{\prime}-p}\right) \left( \hat{\mathbb{E}}[|\xi|^{p^{\prime}}]\right) ^{p/p^{\prime}}. \label{e1-1} \end{equation} It is obvious that $\left( \hat{\mathbb{E}}_{t}[|\xi|]\right) ^{p}\leq \hat{\mathbb{E}}_{t}[|\xi|^{p}]$. Since inequality (\ref{e1-1}) holds for $|\xi|^{p}\in L_{ip}(\Omega_{T})$ and $1<p^{\prime}/p$, we have \[ \hat{\mathbb{E}}\left[ \sup_{t\leq T}\hat{\mathbb{E}}_{t}[|\xi|^{p}]\right] \leq \left( 1+\frac{1}{p^{\prime}/p-1}\right) \left( \hat{\mathbb{E}} [|\xi|^{p^{\prime}}]\right) ^{p/p^{\prime}}=\left( 1+\frac{p}{p^{\prime} -p}\right) \left( \hat{\mathbb{E}}[|\xi|^{p^{\prime}}]\right) ^{p/p^{\prime}}. \] Thus we obtain (\ref{e1-2}). 
\end{proof} \section{BSDEs driven by $G$-Brownian motion under degenerate case} Let $B_{t}=(B_{t}^{1},\ldots,B_{t}^{d})^{T}$ be a $d$-dimensional $G$-Brownian motion satisfying \begin{equation} G(A)=G^{\prime}(A^{\prime})+\frac{1}{2}\sum_{i=d^{\prime}+1}^{d}\bar{\sigma }_{i}^{2}a_{i}^{+}, \label{new-e2-2} \end{equation} where $d^{\prime}<d$, $A^{\prime}\in \mathbb{S}_{d^{\prime}}$, $a_{i} \in \mathbb{R}$ for $d^{\prime}<i\leq d$, \[ A=\left( \begin{array} [c]{cccc} A^{\prime} & \cdots & \cdots & \cdots \\ \cdots & a_{d^{\prime}+1} & \cdots & \cdots \\ \vdots & \vdots & \ddots & \vdots \\ \cdots & \cdots & \cdots & a_{d} \end{array} \right) \in \mathbb{S}_{d}, \] $G^{\prime}:\mathbb{S}_{d^{\prime}}\rightarrow \mathbb{R}$ is non-degenerate, $\bar{\sigma}_{i}>0$ for $i=d^{\prime}+1,\ldots,d$. By Corollary 3.5.8 in Peng \cite{P2019}, we know that \begin{equation} (\langle B^{i},B^{j}\rangle_{t+s}-\langle B^{i},B^{j}\rangle_{t})_{i,j=1} ^{d}\in s\Sigma \text{ for any }t\text{, }s\geq0, \label{new-e2-3} \end{equation} where $\langle B^{i},B^{j}\rangle$ is the mutual variation process of $B^{i}$ and $B^{j}$, and $\Sigma \subset \mathbb{S}_{d}^{+}$ is the unique bounded, convex and closed set satisfying (\ref{new-e1-1}). It follows from (\ref{new-e2-2}) and (\ref{new-e2-3}) that, for any $t$, $s\geq0$, \begin{equation} cs\leq \langle B^{i}\rangle_{t+s}-\langle B^{i}\rangle_{t}\leq Cs\text{ for }i\leq d^{\prime},\text{ }\langle B^{i}\rangle_{t+s}-\langle B^{i}\rangle _{t}\leq \bar{\sigma}_{i}^{2}s\text{ for }d^{\prime}<i\leq d, \label{new-e2-5} \end{equation} \[ \langle B^{i},B^{j}\rangle_{t}=0\text{ for }i\leq d\text{, }d^{\prime}<j\leq d\text{, }i\not =j\text{,} \] where $\langle B^{i}\rangle=\langle B^{i},B^{i}\rangle$, $0<c\leq C<\infty$. We consider the following type of $G$-BSDE under degenerate case: \begin{equation} \begin{array} [c]{rl} Y_{t}= & \xi+\int_{t}^{T}f(s,Y_{s},Z_{s}^{\prime})ds+\sum_{i,j=1}^{d^{\prime} }\int_{t}^{T}g_{ij}(s,Y_{s},Z_{s}^{\prime})d\langle B^{i},B^{j}\rangle_{s}\\ & +\sum_{l=d^{\prime}+1}^{d}\int_{t}^{T}g_{l}(s,Y_{s},Z_{s}^{\prime},Z_{s} ^{l})d\langle B^{l}\rangle_{s}-\sum_{k=1}^{d}\int_{t}^{T}Z_{s}^{k}dB_{s} ^{k}-(K_{T}-K_{t}), \end{array} \label{new-e2-4} \end{equation} where $Z_{s}^{\prime}=(Z_{s}^{1},\ldots,Z_{s}^{d^{\prime}})^{T}$, \[ f,g_{ij}:[0,T]\times \Omega_{T}\times \mathbb{R}\times \mathbb{R}^{d^{\prime} }\rightarrow \mathbb{R}\text{, }g_{l}:[0,T]\times \Omega_{T}\times \mathbb{R}\times \mathbb{R}^{d^{\prime}}\times \mathbb{R}\rightarrow \mathbb{R}\text{.} \] The following spaces and norms are needed to define the solution of the above $G$-BSDE. 
\begin{itemize} \item $M^{0}(0,T):=\left \{ \eta_{t}=\sum_{k=0}^{N-1}\xi_{k}I_{[t_{k} ,t_{k+1})}(t):N\in \mathbb{N}\text{, }0=t_{0}<\cdots<t_{N}=T,\text{ }\xi_{k}\in Lip(\Omega_{t_{k}})\right \} $; \item $||\eta||_{M_{G}^{p,\bar{p}}(0,T)}:=\left( \mathbb{\hat{E}}\left[ \left( \int_{0}^{T}|\eta_{t}|^{p}dt\right) ^{\bar{p}/p}\right] \right) ^{1/\bar{p}}$, $||\eta||_{H_{G}^{p,\bar{p}}(0,T;\langle B^{i}\rangle )}:=\left( \mathbb{\hat{E}}\left[ \left( \int_{0}^{T}|\eta_{t}|^{p}d\langle B^{i}\rangle_{t}\right) ^{\bar{p}/p}\right] \right) ^{1/\bar{p}}$; \item $M_{G}^{p,\bar{p}}(0,T):=\left \{ \text{the completion of } M^{0}(0,T)\text{ under the norm }||\cdot||_{M_{G}^{p,\bar{p}}(0,T)}\right \} $ for $p$, $\bar{p}\geq1$; \item $H_{G}^{p,\bar{p}}(0,T;\langle B^{i}\rangle):=\left \{ \text{the completion of }M^{0}(0,T)\text{ under the norm }||\cdot||_{H_{G}^{p,\bar{p} }(0,T;\langle B^{i}\rangle)}\right \} $ for $p$, $\bar{p}\geq1$; \item $M_{G}^{p}(0,T):=M_{G}^{p,p}(0,T)$, $H_{G}^{p}(0,T;\langle B^{i} \rangle):=H_{G}^{p,p}(0,T;\langle B^{i}\rangle)$; \item $S^{0}(0,T):=\left \{ h(t,B_{t_{1}\wedge t},\ldots,B_{t_{N}\wedge t}):N\in \mathbb{N}\text{, }0<t_{1}<\cdots<t_{N}=T,\text{ }h\in C_{b.Lip} (\mathbb{R}^{N+1})\right \} $; \item $||\eta||_{S_{G}^{p}(0,T)}:=\left( \mathbb{\hat{E}}\left[ \sup_{t\leq T}|\eta_{t}|^{p}\right] \right) ^{1/p}$; \item $S_{G}^{p}(0,T):=\left \{ \text{the completion of }S^{0}(0,T)\text{ under the norm }||\cdot||_{S_{G}^{p}(0,T)}\right \} $ for $p\geq1$. \end{itemize} By (\ref{new-e2-5}), we know that \[ c^{1/p}||\eta||_{M_{G}^{p,\bar{p}}(0,T)}\leq||\eta||_{H_{G}^{p,\bar{p} }(0,T;\langle B^{i}\rangle)}\leq C^{1/p}||\eta||_{M_{G}^{p,\bar{p}} (0,T)}\text{ for }i\leq d^{\prime} \] and \[ ||\eta||_{H_{G}^{p,\bar{p}}(0,T;\langle B^{i}\rangle)}\leq \bar{\sigma} _{i}^{2/p}||\eta||_{M_{G}^{p,\bar{p}}(0,T)}\text{ for }d^{\prime}<i\leq d. \] Thus $M_{G}^{p,\bar{p}}(0,T)=H_{G}^{p,\bar{p}}(0,T;\langle B^{i}\rangle)$ for $i\leq d^{\prime}$ and $M_{G}^{p,\bar{p}}(0,T)\subset H_{G}^{p,\bar{p} }(0,T;\langle B^{i}\rangle)$ for $d^{\prime}<i\leq d$. Throughout the paper, we use the following assumptions: \begin{description} \item[(H1)] There exists a $\bar{p}>1$ such that $\xi \in L_{G}^{\bar{p} }(\Omega_{T})$, $f(\cdot,y,z^{\prime})$, $g_{ij}(\cdot,y,z^{\prime})\in M_{G}^{1,\bar{p}}(0,T)$ and $g_{l}(\cdot,y,z^{\prime},z)\in H_{G}^{1,\bar{p} }(0,T;\langle B^{l}\rangle)$ for any $y$, $z\in \mathbb{R}$, $z^{\prime} \in \mathbb{R}^{d^{\prime}}$, $i$, $j\leq d^{\prime}$, $d^{\prime}<l\leq d$; \item[(H2)] There exists a constant $L>0$ such that, for any $(t,\omega )\in \lbrack0,T]\times \Omega_{T}$, $(y,z^{\prime},z)$, $(\bar{y},\bar {z}^{\prime},\bar{z})\in$ $\mathbb{R}\times \mathbb{R}^{d^{\prime}} \times \mathbb{R}$, \[ \begin{array} [c]{l} |f(t,\omega,y,z^{\prime})-f(t,\omega,\bar{y},\bar{z}^{\prime})|+\sum _{i,j=1}^{d^{\prime}}|g_{ij}(t,\omega,y,z^{\prime})-g_{ij}(t,\omega,\bar {y},\bar{z}^{\prime})|\\ +\sum_{l=d^{\prime}+1}^{d}|g_{l}(t,\omega,y,z^{\prime},z)-g_{l}(t,\omega ,\bar{y},\bar{z}^{\prime},\bar{z})|\leq L(|y-\bar{y}|+|z^{\prime}-\bar {z}^{\prime}|+|z-\bar{z}|). \end{array} \] \end{description} Now we give the $L^{p}$-solution of $G$-BSDE (\ref{new-e2-4}) for $p\in (1,\bar{p})$. 
\begin{definition} $(Y,Z^{1},\ldots,Z^{d},K)$ is called an $L^{p}$-solution of $G$-BSDE (\ref{new-e2-4}) if the following properties hold: \begin{description} \item[(i)] $Y\in S_{G}^{p}(0,T)$, $Z^{i}\in H_{G}^{2,p}(0,T;\langle B^{i}\rangle)$ for $i\leq d$, $K$ is a non-increasing $G$-martingale with $K_{0}=0$ and $K_{T}\in L_{G}^{p}(\Omega_{T})$; \item[(ii)] \[ \begin{array} [c]{rl} Y_{t}= & \xi+\int_{t}^{T}f(s,Y_{s},Z_{s}^{\prime})ds+\sum_{i,j=1}^{d^{\prime} }\int_{t}^{T}g_{ij}(s,Y_{s},Z_{s}^{\prime})d\langle B^{i},B^{j}\rangle_{s}\\ & +\sum_{l=d^{\prime}+1}^{d}\int_{t}^{T}g_{l}(s,Y_{s},Z_{s}^{\prime},Z_{s} ^{l})d\langle B^{l}\rangle_{s}-\sum_{k=1}^{d}\int_{t}^{T}Z_{s}^{k}dB_{s} ^{k}-(K_{T}-K_{t}), \end{array} \] where $Z_{s}^{\prime}=(Z_{s}^{1},\ldots,Z_{s}^{d^{\prime}})^{T}$ and $t\leq T$. \end{description} \end{definition} For simplicity of representation, we only give the proof for the following $G$-BSDE: \begin{equation} Y_{t}=\xi+\int_{t}^{T}f(s,Y_{s})ds+\int_{t}^{T}g(s,Y_{s},Z_{s})d\langle B\rangle_{s}-\int_{t}^{T}Z_{s}dB_{s}-(K_{T}-K_{t}), \label{e2-1} \end{equation} where $B$ is a $1$-dimensional $G$-Brownian motion, $G(a):=\frac{1}{2} \bar{\sigma}^{2}a^{+}$ for $a\in \mathbb{R}$ with $\bar{\sigma}>0$. The results still hold for $G$-BSDE (\ref{new-e2-4}), and will be given at the end of this section. In the following, the constant $C$ will change from line to line for simplicity. \subsection{Prior estimates of $G$-BSDEs} In this subsection, we give some useful prior estimates of $G$-BSDE (\ref{e2-1}). \begin{proposition} \label{pro2-1}Suppose that $\xi_{i}$, $f_{i}$ and $g_{i}$ satisfy (H1) and (H2) for $i=1$, $2$. Let $(Y^{i},Z^{i},K^{i})$ be the $L^{p}$-solution of $G$-BSDE (\ref{e2-1}) corresponding to $\xi_{i}$, $f_{i}$ and $g_{i}$ for some $p\in(1,\bar{p})$. Then there exists a positive constant $C$ depending on $p$, $\bar{\sigma}$, $L$ and $T$ satisfying \begin{equation} |\hat{Y}_{t}|^{p}\leq C\hat{\mathbb{E}}_{t}\left[ |\hat{\xi}|^{p}+\left( \int_{t}^{T}|\hat{f}_{s}|ds\right) ^{p}+\left( \int_{t}^{T}|\hat{g} _{s}|d\langle B\rangle_{s}\right) ^{p}\right] , \label{e2-2} \end{equation} \begin{equation} |Y_{t}^{i}|^{p}\leq C\hat{\mathbb{E}}_{t}\left[ |\xi_{i}|^{p}+\left( \int_{t}^{T}|f_{i}(s,0)|ds\right) ^{p}+\left( \int_{t}^{T}|g_{i} (s,0,0)|d\langle B\rangle_{s}\right) ^{p}\right] \text{ for }i=1,2, \label{e2-6} \end{equation} \begin{equation} \hat{\mathbb{E}}\left[ \left( \int_{0}^{T}|Z_{s}^{i}|^{2}d\langle B\rangle_{s}\right) ^{p/2}\right] +\hat{\mathbb{E}}\left[ |K_{T}^{i} |^{p}\right] \leq C\Lambda_{i}\text{ for }i=1,2, \label{e2-7} \end{equation} \begin{equation} \hat{\mathbb{E}}\left[ \left( \int_{0}^{T}|\hat{Z}_{s}|^{2}d\langle B\rangle_{s}\right) ^{p/2}\right] \leq C\left \{ \mathbb{\hat{E}}\left[ \sup_{t\leq T}|\hat{Y}_{t}|^{p}\right] +(\Lambda_{1}+\Lambda_{2} )^{1/2}\left( \mathbb{\hat{E}}\left[ \sup_{t\leq T}|\hat{Y}_{t}|^{p}\right] \right) ^{1/2}\right \} , \label{e2-8} \end{equation} where \[ \Lambda_{i}=\mathbb{\hat{E}}\left[ \sup_{t\leq T}|Y_{t}^{i}|^{p}\right] +\mathbb{\hat{E}}\left[ \left( \int_{0}^{T}|f_{i}(s,0)|ds\right) ^{p}\right] +\mathbb{\hat{E}}\left[ \left( \int_{0}^{T}|g_{i} (s,0,0)|d\langle B\rangle_{s}\right) ^{p}\right] \text{ for }i=1,2, \] $\hat{Y}_{t}=Y_{t}^{1}-Y_{t}^{2}$, $\hat{\xi}=\xi_{1}-\xi_{2}$, $\hat{f} _{s}=f_{1}(s,Y_{s}^{2})-f_{2}(s,Y_{s}^{2})$, $\hat{g}_{s}=g_{1}(s,Y_{s} ^{2},Z_{s}^{2})-g_{2}(s,Y_{s}^{2},Z_{s}^{2})$, $\hat{Z}_{t}=Z_{t}^{1} -Z_{t}^{2}$. \end{proposition} \begin{proof} The method is the same as that in \cite{HJPS1}. 
For convenience of the reader, we sketch the proof. For each given $t<T$, consider the following SDE for $r\in \lbrack t,T]$ \[ X_{r}=\int_{t}^{r}(f_{1}(s,Y_{s}^{2}-X_{s})-f_{2}(s,Y_{s}^{2}))ds+\int_{t} ^{r}(g_{1}(s,Y_{s}^{2}-X_{s},Z_{s}^{2})-g_{2}(s,Y_{s}^{2},Z_{s}^{2}))d\langle B\rangle_{s}. \] Noting that $\langle B\rangle_{t+s}-\langle B\rangle_{t}\leq \bar{\sigma}^{2}s$ for any $t$, $s\geq0$, we obtain \[ |X_{r}|\leq \int_{t}^{T}|\hat{f}_{s}|ds+\int_{t}^{T}|\hat{g}_{s}|d\langle B\rangle_{s}+L(1+\bar{\sigma}^{2})\int_{t}^{r}|X_{s}|ds\text{ for }r\in \lbrack t,T]. \] By the Gronwall inequality, we have \begin{equation} |X_{T}|\leq C\left( \int_{t}^{T}|\hat{f}_{s}|ds+\int_{t}^{T}|\hat{g} _{s}|d\langle B\rangle_{s}\right) , \label{e2-4} \end{equation} where $C$ depends on $\bar{\sigma}$, $L$ and $T$. For each $\varepsilon>0$, noting that \[ p(|x|^{2}+\varepsilon)^{(p/2)-1}+p(p-2)(|x|^{2}+\varepsilon)^{(p/2)-2} |x|^{2}\geq p((p-1)\wedge1)(|x|^{2}+\varepsilon)^{(p/2)-1} \] for $x\in \mathbb{R}$ and taking $\lambda=pL(1+\bar{\sigma}^{2})+pL^{2} \bar{\sigma}^{2}2^{-1}[(p-1)^{-1}\vee1]$, we get by applying It\^{o}'s formula to $(|\hat{Y}_{r}+X_{r}|^{2}+\varepsilon)^{p/2}e^{\lambda r}$ on $[t,T]$ that \begin{equation} (|\hat{Y}_{t}|^{2}+\varepsilon)^{p/2}e^{\lambda t}+M_{T}-M_{t}\leq(|\hat{\xi }+X_{T}|^{2}+\varepsilon)^{p/2}e^{\lambda T}, \label{e2-5} \end{equation} where $M_{T}-M_{t}=\int_{t}^{T}p(|\hat{Y}_{s}+X_{s}|^{2}+\varepsilon )^{(p/2)-1}e^{\lambda s}[(\hat{Y}_{s}+X_{s})\hat{Z}_{s}dB_{s}+(\hat{Y} _{s}+X_{s})^{+}dK_{s}^{1}+(\hat{Y}_{s}+X_{s})^{-}dK_{s}^{2}]$. By Lemma 3.4 in \cite{HJPS1}, we know that $\hat{\mathbb{E}}_{t}[M_{T}-M_{t}]=0$. Taking $\hat{\mathbb{E}}_{t}$ on both sides of (\ref{e2-5}) and letting $\varepsilon \downarrow0$, we get (\ref{e2-2}) by (\ref{e2-4}). Taking $\xi_{j}=f_{j}=g_{j}=0$ for $j\not =i$, we have $(Y^{j},Z^{j},K^{j} )=0$. Thus we obtain (\ref{e2-6}) by (\ref{e2-2}). Applying It\^{o}'s formula to $|Y_{t}^{i}|^{2}$ on $[0,T]$, by the B-D-G inequality, we get \begin{equation} \hat{\mathbb{E}}\left[ \left( \int_{0}^{T}|Z_{s}^{i}|^{2}d\langle B\rangle_{s}\right) ^{p/2}\right] \leq C\left \{ \Lambda_{i}+\left( \mathbb{\hat{E}}\left[ \sup_{t\leq T}|Y_{t}^{i}|^{p}\right] \right) ^{1/2}\left( \hat{\mathbb{E}}\left[ |K_{T}^{i}|^{p}\right] \right) ^{1/2}\right \} , \label{e2-9} \end{equation} where $C$ depends on $p$, $\bar{\sigma}$, $L$ and $T$. It follows from $G$-BSDE (\ref{e2-1}) and the B-D-G inequality that \begin{equation} \hat{\mathbb{E}}\left[ |K_{T}^{i}|^{p}\right] \leq C\left \{ \Lambda _{i}+\hat{\mathbb{E}}\left[ \left( \int_{0}^{T}|Z_{s}^{i}|^{2}d\langle B\rangle_{s}\right) ^{p/2}\right] \right \} , \label{e2-10} \end{equation} where $C$ depends on $p$, $\bar{\sigma}$, $L$ and $T$. Then we deduce (\ref{e2-7}) by (\ref{e2-9}) and (\ref{e2-10}). 
Applying It\^{o}'s formula to $|\hat{Y}_{t}|^{2}$ on $[0,T]$, by the B-D-G inequality, we get \begin{equation} \hat{\mathbb{E}}\left[ \left( \int_{0}^{T}|\hat{Z}_{s}|^{2}d\langle B\rangle_{s}\right) ^{p/2}\right] \leq C\left \{ \mathbb{\hat{E}}\left[ \sup_{t\leq T}|\hat{Y}_{t}|^{p}\right] +(\tilde{\Lambda}_{1}+\tilde{\Lambda }_{2})^{1/2}\left( \mathbb{\hat{E}}\left[ \sup_{t\leq T}|\hat{Y}_{t} |^{p}\right] \right) ^{1/2}\right \} , \label{e2-11} \end{equation} where $C$ depends on $p$, $\bar{\sigma}$, $L$ and $T$, \[ \tilde{\Lambda}_{i}=\Lambda_{i}+\hat{\mathbb{E}}\left[ \left( \int_{0} ^{T}|Z_{s}^{i}|^{2}d\langle B\rangle_{s}\right) ^{p/2}\right] +\hat {\mathbb{E}}\left[ |K_{T}^{i}|^{p}\right] \text{ for }i=1,2. \] Thus we obtain (\ref{e2-8}) by (\ref{e2-7}) and (\ref{e2-11}). \end{proof} \subsection{Solution in the extended $\tilde{G}$-expectation space} Following \cite{HJPS1}, the key point to obtain the solution of $G$-BSDE (\ref{e2-1}) is to study the following type of $G$-BSDE: \begin{equation} Y_{t}=\varphi(B_{T})+\int_{t}^{T}h(Y_{s},Z_{s})d\langle B\rangle_{s}-\int _{t}^{T}Z_{s}dB_{s}-(K_{T}-K_{t}), \label{e2-12} \end{equation} where $\varphi \in C_{0}^{\infty}(\mathbb{R})$, $h\in C_{0}^{\infty} (\mathbb{R}^{2})$. In order to obtain the solution of $G$-BSDE (\ref{e2-12}), we introduce the extended $\tilde{G}$-expectation space. Set $\tilde{\Omega}_{T}=C_{0} ([0,T];\mathbb{R}^{2})$ and the canonical process is denoted by $(B,\tilde {B})$. For each $a_{11}$, $a_{12}$, $a_{22}\in \mathbb{R}$, define \[ \tilde{G}\left( \left( \begin{array} [c]{cc} a_{11} & a_{12}\\ a_{12} & a_{22} \end{array} \right) \right) =G(a_{11})+\frac{1}{2}a_{22}=\frac{1}{2}\sup_{\gamma \in \tilde{\Sigma}}\mathrm{tr}\left[ \left( \begin{array} [c]{cc} a_{11} & a_{12}\\ a_{12} & a_{22} \end{array} \right) \gamma \right] , \] where \[ \tilde{\Sigma}=\left \{ \left( \begin{array} [c]{cc} \sigma^{2} & 0\\ 0 & 1 \end{array} \right) :\sigma \in \lbrack0,\bar{\sigma}]\right \} . \] The $\tilde{G}$-expectation is denoted by $\mathbb{\tilde{E}}$, and the related spaces are denoted by \[ Lip(\tilde{\Omega}_{t})\text{, }L_{\tilde{G}}^{p}(\tilde{\Omega}_{t})\text{, }\tilde{M}^{0}(0,T)\text{, }M_{\tilde{G}}^{p,\bar{p}}(0,T),\text{ } H_{\tilde{G}}^{p,\bar{p}}(0,T;\langle B\rangle)\text{, }S_{\tilde{G}} ^{p}(0,T)\text{.} \] For each $\mathbf{a}=(a_{1},a_{2})^{T}\in \mathbb{R}^{2}$, by Proposition 3.1.5 in Peng \cite{P2019}, we know that $B^{\mathbf{a}}:=a_{1}B+a_{2}\tilde{B}$ is a $G_{\mathbf{a}}$-Brownian motion, where $G_{\mathbf{a}}(b)=\frac{1}{2} [(\bar{\sigma}^{2}|a_{1}|^{2}+|a_{2}|^{2})b^{+}-|a_{2}|^{2}b^{-}]$ for $b\in \mathbb{R}$. In particular, $B$ is a $G$-Brownian motion and $\tilde{B}$ is a classical Brownian motion. Thus $\mathbb{\tilde{E}}|_{Lip(\Omega_{T} )}=\mathbb{\hat{E}}$, which implies that the completion of $M^{0}(0,T)$ (resp. $S^{0}(0,T)$) under the norm $||\cdot||_{H_{\tilde{G}}^{p,\bar{p}}(0,T;\langle B\rangle)}$ (resp. $||\cdot||_{S_{\tilde{G}}^{p}(0,T)}$) is $H_{G}^{p,\bar{p} }(0,T;\langle B\rangle)$ (resp. $S_{G}^{p}(0,T)$). Similar to (\ref{new-e2-3}), we know that $\langle B,\tilde{B}\rangle_{t}=0$ and $\langle \tilde {B}\rangle_{t}=t$ in the $\tilde{G}$-expectation space. \begin{lemma} \label{pro2-2}Let $\varphi \in C_{0}^{\infty}(\mathbb{R})$ and $h\in C_{0}^{\infty}(\mathbb{R}^{2})$. 
Then, for each given $p>1$, $G$-BSDE (\ref{e2-12}) has a unique $L^{p}$-solution $(Y,Z,K)$ in the extended $\tilde{G}$-expectation space such that $Y\in S_{G}^{p}(0,T)$, $Z\in H_{\tilde{G}}^{2,p}(0,T;\langle B\rangle)$ and $K_{T}\in L_{\tilde{G}} ^{p}(\tilde{\Omega}_{T})$. \end{lemma} \begin{proof} The uniqueness is due to (\ref{e2-2}) and (\ref{e2-8}) in Proposition \ref{pro2-1}. The proof of existence is divided into two parts. Part 1. The purpose of this part is to find a solution $(Y,Z,K)$ in the extended $\tilde{G}$-expectation space such that $Y\in S_{\tilde{G}}^{p}(0,T)$ and $Z\in H_{\tilde{G}}^{2,p}(0,T;\langle B\rangle)$. For each fixed $\varepsilon \in(0,\bar{\sigma})$, define \[ B_{t}^{\varepsilon}=B_{t}+\varepsilon \tilde{B}_{t}\text{ for }t\in \lbrack0,T]. \] Then $(B_{t}^{\varepsilon})_{t\in \lbrack0,T]}$ is the $G_{\varepsilon} $-Brownian motion under $\mathbb{\tilde{E}}$, where \begin{equation} G_{\varepsilon}(a)=\frac{1}{2}[(\bar{\sigma}^{2}+\varepsilon^{2} )a^{+}-\varepsilon^{2}a^{-}]\text{ for }a\in \mathbb{R}. \label{new-e2-10} \end{equation} Let $u_{\varepsilon}$ be the viscosity solution of the following PDE \begin{equation} \partial_{t}u+G_{\varepsilon}(\partial_{xx}^{2}u+2h(u,\partial_{x}u))=0\text{, }u(T,x)=\varphi(x). \label{new-e2-11} \end{equation} By Theorem 6.4.3 in Krylov \cite{Kr} (see also Theorem C.4.4 in Peng \cite{P2019}), there exists a constant $\alpha \in(0,1)$ satisfying \begin{equation} ||u_{\varepsilon}||_{C^{1+\alpha/2,2+\alpha}([0,T-\delta]\times \mathbb{R} )}<\infty \text{ for any }\delta>0\text{.} \label{new-e2-12} \end{equation} Applying It\^{o}'s formula to $u_{\varepsilon}(t,B_{t}^{\varepsilon})$ on $[0,T-\delta]$, we obtain \begin{equation} Y_{t}^{\varepsilon}=Y_{T-\delta}^{\varepsilon}+\int_{t}^{T-\delta} h(Y_{s}^{\varepsilon},Z_{s}^{\varepsilon})d\langle B^{\varepsilon}\rangle _{s}-\int_{t}^{T-\delta}Z_{s}^{\varepsilon}dB_{s}^{\varepsilon}-(K_{T-\delta }^{\varepsilon}-K_{t}^{\varepsilon}), \label{e2-13} \end{equation} where $Y_{t}^{\varepsilon}=u_{\varepsilon}(t,B_{t}^{\varepsilon})$, $Z_{t}^{\varepsilon}=\partial_{x}u_{\varepsilon}(t,B_{t}^{\varepsilon})$ and \[ K_{t}^{\varepsilon}=\int_{0}^{t}\frac{1}{2}\left[ \partial_{xx} ^{2}u_{\varepsilon}(s,B_{s}^{\varepsilon})+2h(Y_{s}^{\varepsilon} ,Z_{s}^{\varepsilon})\right] d\langle B^{\varepsilon}\rangle_{s}-\int_{0} ^{t}G_{\varepsilon}\left( \partial_{xx}^{2}u_{\varepsilon}(s,B_{s} ^{\varepsilon})+2h(Y_{s}^{\varepsilon},Z_{s}^{\varepsilon})\right) ds. \] By Lemma 4.2.1 in Peng \cite{P2019}, we obtain that $K^{\varepsilon}$ is non-increasing and $K_{t}^{\varepsilon}=\mathbb{\tilde{E}}_{t}[K_{T-\delta }^{\varepsilon}]$ for $t\leq T-\delta$. The same analysis as in the proof of inequality (4.3) in \cite{HJPS1}, we get that there exists a positive constant $C$ depending on $\varphi$, $h$, $\bar{\sigma}$ and $T$ such that \[ |u_{\varepsilon}(t_{1},x_{1})-u_{\varepsilon}(t_{2},x_{2})|\leq C(\sqrt {|t_{1}-t_{2}|}+|x_{1}-x_{2}|)\text{ for }\varepsilon \in(0,\bar{\sigma })\text{, }t_{1},t_{2}\leq T,\text{ }x_{1},x_{2}\in \mathbb{R}. \] From this we can easily deduce that $\mathbb{\tilde{E}}[|Y_{T-\delta }^{\varepsilon}-\varphi(B_{T}^{\varepsilon})|^{2}]\rightarrow0$ as $\delta \downarrow0$ and \begin{equation} |u_{\varepsilon}(t,x)|\leq|\varphi(x)|+C\sqrt{T},\text{ }|\partial _{x}u_{\varepsilon}(t,x)|\leq C\text{ for }\varepsilon \in(0,\bar{\sigma })\text{, }t\leq T,\text{ }x\in \mathbb{R}. 
\label{e2-14} \end{equation} Taking $\delta \downarrow0$ in (\ref{e2-13}), we obtain \begin{equation} Y_{t}^{\varepsilon}=\varphi(B_{T}^{\varepsilon})+\int_{t}^{T}h(Y_{s} ^{\varepsilon},Z_{s}^{\varepsilon})d\langle B^{\varepsilon}\rangle_{s} -\int_{t}^{T}Z_{s}^{\varepsilon}dB_{s}^{\varepsilon}-(K_{T}^{\varepsilon }-K_{t}^{\varepsilon}), \label{e2-15} \end{equation} where $Y^{\varepsilon}$ and $Z^{\varepsilon}$ are uniformly bounded for $\varepsilon \in(0,\bar{\sigma})$ by (\ref{e2-14}). For each given $\varepsilon$, $\varepsilon^{\prime}\in(0,\bar{\sigma})$, set \[ \hat{Y}_{t}^{\varepsilon,\varepsilon^{\prime}}=Y_{t}^{\varepsilon} -Y_{t}^{\varepsilon^{\prime}},\text{ }\hat{Z}_{t}^{\varepsilon,\varepsilon ^{\prime}}=Z_{t}^{\varepsilon}-Z_{t}^{\varepsilon^{\prime}},\text{ }\hat {K}_{t}^{\varepsilon,\varepsilon^{\prime}}=K_{t}^{\varepsilon}-K_{t} ^{\varepsilon^{\prime}},\text{ }\hat{\xi}^{\varepsilon,\varepsilon^{\prime} }=\varphi(B_{T}^{\varepsilon})-\varphi(B_{T}^{\varepsilon^{\prime}}), \] \[ \hat{h}_{t}^{\varepsilon,\varepsilon^{\prime}}=h(Y_{t}^{\varepsilon} ,Z_{t}^{\varepsilon})-h(Y_{t}^{\varepsilon^{\prime}},Z_{t}^{\varepsilon ^{\prime}})\text{, }\bar{h}_{t}^{\varepsilon,\varepsilon^{\prime}} =\varepsilon^{2}h(Y_{t}^{\varepsilon},Z_{t}^{\varepsilon})-(\varepsilon ^{\prime})^{2}h(Y_{t}^{\varepsilon^{\prime}},Z_{t}^{\varepsilon^{\prime} }),\text{ }\bar{Z}_{t}^{\varepsilon,\varepsilon^{\prime}}=\varepsilon Z_{t}^{\varepsilon}-\varepsilon^{\prime}Z_{t}^{\varepsilon^{\prime}}. \] Then, by (\ref{e2-15}) and $\langle B^{\varepsilon}\rangle_{s}=\langle B\rangle_{s}+\varepsilon^{2}s$, we have \[ \hat{Y}_{t}^{\varepsilon,\varepsilon^{\prime}}=\hat{\xi}^{\varepsilon ,\varepsilon^{\prime}}+\int_{t}^{T}\hat{h}_{s}^{\varepsilon,\varepsilon ^{\prime}}d\langle B\rangle_{s}+\int_{t}^{T}\bar{h}_{s}^{\varepsilon ,\varepsilon^{\prime}}ds-\int_{t}^{T}\hat{Z}_{s}^{\varepsilon,\varepsilon ^{\prime}}dB_{s}-\int_{t}^{T}\bar{Z}_{s}^{\varepsilon,\varepsilon^{\prime} }d\tilde{B}_{s}-(\hat{K}_{T}^{\varepsilon,\varepsilon^{\prime}}-\hat{K} _{t}^{\varepsilon,\varepsilon^{\prime}}). \] Applying It\^{o}'s formula to $|\hat{Y}_{s}^{\varepsilon,\varepsilon^{\prime} }|^{2}e^{\lambda s}$ on $[t,T]$ for some positive constant $\lambda$, we obtain \begin{equation} \begin{array} [c]{l} |\hat{Y}_{t}^{\varepsilon,\varepsilon^{\prime}}|^{2}e^{\lambda t}+\lambda \int_{t}^{T}e^{\lambda s}|\hat{Y}_{s}^{\varepsilon,\varepsilon^{\prime}} |^{2}ds+\int_{t}^{T}e^{\lambda s}|\hat{Z}_{s}^{\varepsilon,\varepsilon ^{\prime}}|^{2}d\langle B\rangle_{s}+M_{T}-M_{t}\\ \leq|\hat{\xi}^{\varepsilon,\varepsilon^{\prime}}|^{2}e^{\lambda T}+2\int _{t}^{T}e^{\lambda s}|\hat{Y}_{s}^{\varepsilon,\varepsilon^{\prime}}||\hat {h}_{s}^{\varepsilon,\varepsilon^{\prime}}|d\langle B\rangle_{s}+2\int_{t} ^{T}e^{\lambda s}|\hat{Y}_{s}^{\varepsilon,\varepsilon^{\prime}}||\bar{h} _{s}^{\varepsilon,\varepsilon^{\prime}}|ds, \end{array} \label{e2-16} \end{equation} where \[ M_{T}-M_{t}=2\int_{t}^{T}e^{\lambda s}\hat{Y}_{s}^{\varepsilon,\varepsilon ^{\prime}}[\hat{Z}_{s}^{\varepsilon,\varepsilon^{\prime}}dB_{s}+\bar{Z} _{s}^{\varepsilon,\varepsilon^{\prime}}d\tilde{B}_{s}]+2\int_{t}^{T}e^{\lambda s}[(\hat{Y}_{s}^{\varepsilon,\varepsilon^{\prime}})^{+}dK_{s}^{\varepsilon }+(\hat{Y}_{s}^{\varepsilon,\varepsilon^{\prime}})^{-}dK_{s}^{\varepsilon ^{\prime}}]. 
\] Since \[ 2|\hat{Y}_{s}^{\varepsilon,\varepsilon^{\prime}}||\hat{h}_{s}^{\varepsilon ,\varepsilon^{\prime}}|\leq2L_{1}|\hat{Y}_{s}^{\varepsilon,\varepsilon ^{\prime}}|(|\hat{Y}_{s}^{\varepsilon,\varepsilon^{\prime}}|+|\hat{Z} _{s}^{\varepsilon,\varepsilon^{\prime}}|)\leq(|L_{1}|^{2}+2L_{1})|\hat{Y} _{s}^{\varepsilon,\varepsilon^{\prime}}|^{2}+|\hat{Z}_{s}^{\varepsilon ,\varepsilon^{\prime}}|^{2}, \] \[ 2|\hat{Y}_{s}^{\varepsilon,\varepsilon^{\prime}}||\bar{h}_{s}^{\varepsilon ,\varepsilon^{\prime}}|\leq|\hat{Y}_{s}^{\varepsilon,\varepsilon^{\prime} }|^{2}+|\bar{h}_{s}^{\varepsilon,\varepsilon^{\prime}}|^{2}\leq|\hat{Y} _{s}^{\varepsilon,\varepsilon^{\prime}}|^{2}+2|L_{2}|^{2}(\varepsilon ^{4}+(\varepsilon^{\prime})^{4}), \] where $L_{1}=\sup_{(y,z)\in \mathbb{R}^{2}}(|\partial_{y}h(y,z)|+|\partial _{z}h(y,z)|)$ and $L_{2}=\sup_{(y,z)\in \mathbb{R}^{2}}|h(y,z)|$, we get by taking $\lambda=(|L_{1}|^{2}+2L_{1})\bar{\sigma}^{2}+1$ in (\ref{e2-16}) that \begin{equation} |\hat{Y}_{t}^{\varepsilon,\varepsilon^{\prime}}|^{2}e^{\lambda t}+M_{T} -M_{t}\leq|\hat{\xi}^{\varepsilon,\varepsilon^{\prime}}|^{2}e^{\lambda T}+2|L_{2}|^{2}(\varepsilon^{4}+(\varepsilon^{\prime})^{4})Te^{\lambda T}. \label{e2-17} \end{equation} By Lemma 3.4 in \cite{HJPS1}, we know that $\mathbb{\tilde{E}}_{t}[M_{T} -M_{t}]=0$. Taking $\mathbb{\tilde{E}}_{t}$ on both sides of (\ref{e2-17}), we obtain \begin{align*} |\hat{Y}_{t}^{\varepsilon,\varepsilon^{\prime}}|^{2} & \leq C\left( \mathbb{\tilde{E}}_{t}[|\hat{\xi}^{\varepsilon,\varepsilon^{\prime}} |^{2}]+\varepsilon^{4}+(\varepsilon^{\prime})^{4}\right) \\ & \leq C\left( L_{\varphi}^{2}|\varepsilon-\varepsilon^{\prime} |^{2}\mathbb{\tilde{E}}_{t}[|\tilde{B}_{T}|^{2}]+\varepsilon^{4} +(\varepsilon^{\prime})^{4}\right) , \end{align*} where $L_{\varphi}=\sup_{x\in \mathbb{R}}|\varphi^{\prime}(x)|$ and $C$ depends on $\bar{\sigma}$, $h$ and $T$. Thus, for each given $p>1$, we obtain \begin{equation} \mathbb{\tilde{E}}\left[ \sup_{t\leq T}|\hat{Y}_{t}^{\varepsilon ,\varepsilon^{\prime}}|^{p}\right] \leq C\left( |\varepsilon-\varepsilon ^{\prime}|^{p}+\varepsilon^{2p}+(\varepsilon^{\prime})^{2p}\right) \rightarrow0\text{ as }\varepsilon,\text{ }\varepsilon^{\prime}\rightarrow0, \label{e2-18} \end{equation} where $C$ depends on $p$, $\bar{\sigma}$, $\varphi$, $h$ and $T$. Applying It\^{o}'s formula to $|\hat{Y}_{t}^{\varepsilon,\varepsilon^{\prime}}|^{2}$ on $[0,T]$, we get \begin{equation} \begin{array} [c]{rl} \int_{0}^{T}|\hat{Z}_{t}^{\varepsilon,\varepsilon^{\prime}}|^{2}d\langle B\rangle_{t}\leq & |\hat{\xi}^{\varepsilon,\varepsilon^{\prime}}|^{2} +2\int_{0}^{T}|\hat{Y}_{t}^{\varepsilon,\varepsilon^{\prime}}||\hat{h} _{t}^{\varepsilon,\varepsilon^{\prime}}|d\langle B\rangle_{t}-2\int_{0} ^{T}\hat{Y}_{t}^{\varepsilon,\varepsilon^{\prime}}\hat{Z}_{t}^{\varepsilon ,\varepsilon^{\prime}}dB_{t}-2\int_{0}^{T}\hat{Y}_{t}^{\varepsilon ,\varepsilon^{\prime}}\bar{Z}_{t}^{\varepsilon,\varepsilon^{\prime}}d\tilde {B}_{t}\\ & +2\int_{0}^{T}|\hat{Y}_{t}^{\varepsilon,\varepsilon^{\prime}}||\bar{h} _{t}^{\varepsilon,\varepsilon^{\prime}}|dt+2(|K_{T}^{\varepsilon} |+|K_{T}^{\varepsilon^{\prime}}|)\sup_{t\leq T}|\hat{Y}_{t}^{\varepsilon ,\varepsilon^{\prime}}|. 
\end{array} \label{e2-19} \end{equation} By (\ref{e2-14}), (\ref{e2-15}), (\ref{e2-18}) and (\ref{e2-19}), we obtain \begin{equation} \mathbb{\tilde{E}}\left[ \int_{0}^{T}|\hat{Z}_{t}^{\varepsilon,\varepsilon ^{\prime}}|^{2}d\langle B\rangle_{t}\right] \leq C\left \{ \mathbb{\tilde{E} }\left[ \sup_{t\leq T}|\hat{Y}_{t}^{\varepsilon,\varepsilon^{\prime}} |^{2}\right] +\left( \mathbb{\tilde{E}}\left[ \sup_{t\leq T}|\hat{Y} _{t}^{\varepsilon,\varepsilon^{\prime}}|^{2}\right] \right) ^{1/2}\right \} \rightarrow0\text{ as }\varepsilon,\text{ }\varepsilon^{\prime}\rightarrow0, \label{e2-20} \end{equation} where $C$ depends on $\bar{\sigma}$, $\varphi$, $h$ and $T$. Since $Z^{\varepsilon}$ is uniformly bounded for $\varepsilon \in(0,\bar{\sigma})$, we deduce from (\ref{e2-20}) that, for each given $p>1$, \begin{equation} \mathbb{\tilde{E}}\left[ \left( \int_{0}^{T}|\hat{Z}_{t}^{\varepsilon ,\varepsilon^{\prime}}|^{2}d\langle B\rangle_{t}\right) ^{p/2}\right] \rightarrow0\text{ as }\varepsilon,\text{ }\varepsilon^{\prime}\rightarrow0. \label{e2-21} \end{equation} Thus, for each given $p>1$, there exist $Y\in S_{\tilde{G}}^{p}(0,T)$ and $Z\in H_{\tilde{G}}^{2,p}(0,T;\langle B\rangle)$ such that \begin{equation} \mathbb{\tilde{E}}\left[ \sup_{t\leq T}|Y_{t}^{\varepsilon}-Y_{t} |^{p}+\left( \int_{0}^{T}|Z_{t}^{\varepsilon}-Z_{t}|^{2}d\langle B\rangle _{t}\right) ^{p/2}\right] \rightarrow0\text{ as }\varepsilon \rightarrow0. \label{e2-22} \end{equation} It follows from (\ref{e2-15}) and (\ref{e2-22}) that there exists a $K_{T}\in L_{\tilde{G}}^{p}(\tilde{\Omega}_{T})$ such that $\mathbb{\tilde{E}}\left[ |K_{T}^{\varepsilon}-K_{T}|^{p}\right] \rightarrow0$ as $\varepsilon \rightarrow0$. Taking $\varepsilon \rightarrow0$ in (\ref{e2-15}), we obtain \begin{equation} Y_{t}=\varphi(B_{T})+\int_{t}^{T}h(Y_{s},Z_{s})d\langle B\rangle_{s}-\int _{t}^{T}Z_{s}dB_{s}-(K_{T}-K_{t}), \label{e2-23} \end{equation} where $K$ is non-increasing and $K_{t}=\mathbb{\tilde{E}}_{t}[K_{T}]$ for $t\leq T$. Part 2. The purpose of this part is to prove that $Y\in S_{G}^{p}(0,T)$ for each $p>1$. Noting that $Y_{t}^{\varepsilon}=u_{\varepsilon}(t,B_{t}^{\varepsilon})$ and (\ref{e2-14}), we have \[ \mathbb{\tilde{E}}\left[ \sup_{t\leq T}|Y_{t}^{\varepsilon}-u_{\varepsilon }(t,B_{t})|^{p}\right] \leq C\varepsilon^{p}\mathbb{\tilde{E}}\left[ \sup_{t\leq T}|\tilde{B}_{t}|^{p}\right] \rightarrow0\text{ as } \varepsilon \rightarrow0, \] which implies \begin{equation} \mathbb{\tilde{E}}\left[ \sup_{t\leq T}|u_{\varepsilon}(t,B_{t})-Y_{t} |^{p}\right] \rightarrow0\text{ as }\varepsilon \rightarrow0. \label{e2-24} \end{equation} Thus $Y\in S_{G}^{p}(0,T)$. \end{proof} \subsection{Estimates of partial derivatives of $u_{\varepsilon}$} In order to show that $Z$ obtained in Lemma \ref{pro2-2} belongs to $H_{G}^{2,p}(0,T;\langle B\rangle)$, we need to prove that $\partial_{xx} ^{2}u_{\varepsilon}$ is uniformly bounded from below for $\varepsilon \in(0,\bar{\sigma})$, where $u_{\varepsilon}$ is the solution of PDE (\ref{new-e2-11}). For each fixed $\varepsilon \in(0,\bar{\sigma})$, $G_{\varepsilon}$ is defined in (\ref{new-e2-10}). Let $\hat{\mathbb{E}}^{\varepsilon}$ be the $G_{\varepsilon}$-expectation on $(\Omega_{T},Lip(\Omega_{T}))$. The canonical process $(B_{t})_{t\in \lbrack0,T]}$ is the $1$-dimensional $G_{\varepsilon} $-Brownian motion under $\hat{\mathbb{E}}^{\varepsilon}$. For each given $(t,x)\in \lbrack0,T)\times \mathbb{R}$, denote \[ B_{s}^{t,x}=x+B_{s}-B_{t}\text{ for }s\in \lbrack t,T]. 
\] Similar to (\ref{e2-15}), applying It\^{o}'s formula to $u_{\varepsilon }(s,B_{s}^{t,x})$ under $\hat{\mathbb{E}}^{\varepsilon}$, we obtain that the following $G_{\varepsilon}$-BSDE \begin{equation} Y_{s}^{t,x}=\varphi(B_{T}^{t,x})+\int_{s}^{T}h(Y_{r}^{t,x},Z_{r} ^{t,x})d\langle B\rangle_{r}-\int_{s}^{T}Z_{r}^{t,x}dB_{r}-(K_{T}^{t,x} -K_{s}^{t,x}) \label{e2-26} \end{equation} has a unique solution $(Y_{s}^{t,x},Z_{s}^{t,x},K_{s}^{t,x})_{s\in \lbrack t,T]}$ satisfying $Y_{s}^{t,x}=u_{\varepsilon}(s,B_{s}^{t,x})$, $Z_{s} ^{t,x}=\partial_{x}u_{\varepsilon}(s,B_{s}^{t,x})$ and $K_{t}^{t,x}=0$. Let $\mathcal{P}^{\varepsilon}$ be a weakly compact and convex set of probability measures on $(\Omega_{T},\mathcal{B}(\Omega_{T}))$ such that \[ \hat{\mathbb{E}}^{\varepsilon}[X]=\sup_{P\in \mathcal{P}^{\varepsilon}} E_{P}[X]\text{ for all }X\in L_{G_{\varepsilon}}^{1}(\Omega_{T}). \] For each given $(t,x)\in \lbrack0,T)\times \mathbb{R}$, denote \[ \mathcal{P}_{t,x}^{\varepsilon}=\{P\in \mathcal{P}^{\varepsilon}:E_{P} [K_{T}^{t,x}]=0\}. \] The following estimates for $G_{\varepsilon}$-BSDE (\ref{e2-26}) are useful. \begin{proposition} \label{pro2-3} Suppose that $\varphi \in C_{0}^{\infty}(\mathbb{R})$ and $h\in C_{0}^{\infty}(\mathbb{R}^{2})$. For each $(t,x,\Delta)\in \lbrack 0,T)\times \mathbb{R}\times \mathbb{R}$, let $(Y_{s}^{t,x},Z_{s}^{t,x} ,K_{s}^{t,x})_{s\in \lbrack t,T]}$ and $(Y_{s}^{t,x+\Delta},Z_{s}^{t,x+\Delta },K_{s}^{t,x+\Delta})_{s\in \lbrack t,T]}$ be two solutions of $G_{\varepsilon }$-BSDE (\ref{e2-26}). Then, for each given $p>1$, \begin{equation} \sup_{s\in \lbrack t,T]}\left \vert Y_{s}^{t,x+\Delta}-Y_{s}^{t,x}\right \vert ^{p}\leq C|\Delta|^{p}, \label{e2-27} \end{equation} \begin{equation} \hat{\mathbb{E}}^{\varepsilon}\left[ \sup_{s\in \lbrack t,T]}\left \vert Y_{s}^{t,x}\right \vert ^{p}+\left( \int_{t}^{T}\left \vert Z_{s} ^{t,x}\right \vert ^{2}d\langle B\rangle_{s}\right) ^{p/2}+\left \vert K_{T}^{t,x}\right \vert ^{p}\right] \leq C(1+|x|^{p}), \label{e2-28} \end{equation} \begin{equation} E_{P}\left[ \left( \int_{t}^{T}\left \vert Z_{s}^{t,x+\Delta}-Z_{s} ^{t,x}\right \vert ^{2}d\langle B\rangle_{s}\right) ^{p/2}+\left \vert K_{T}^{t,x+\Delta}\right \vert ^{p}\right] \leq C|\Delta|^{p}\text{ for } P\in \mathcal{P}_{t,x}^{\varepsilon}, \label{e2-29} \end{equation} \begin{equation} E_{P^{\Delta}}\left[ \left( \int_{t}^{T}\left \vert Z_{s}^{t,x+\Delta} -Z_{s}^{t,x}\right \vert ^{2}d\langle B\rangle_{s}\right) ^{p/2}+\left \vert K_{T}^{t,x}\right \vert ^{p}\right] \leq C|\Delta|^{p}\text{ for }P^{\Delta }\in \mathcal{P}_{t,x+\Delta}^{\varepsilon}, \label{e2-30} \end{equation} where the constant $C>0$ depends on $p$, $\bar{\sigma}$, $\varphi$, $h$ and $T$. 
\end{proposition} \begin{proof} Similar to the proof of (\ref{e2-2}), (\ref{e2-6}) and (\ref{e2-7}), we obtain \[ \sup_{s\in \lbrack t,T]}\left \vert Y_{s}^{t,x+\Delta}-Y_{s}^{t,x}\right \vert ^{p}\leq C\sup_{s\in \lbrack t,T]}\hat{\mathbb{E}}_{s}^{\varepsilon}\left[ \left \vert \varphi(B_{T}^{t,x+\Delta})-\varphi(B_{T}^{t,x})\right \vert ^{p}\right] \leq C|\Delta|^{p} \] and \begin{align*} & \hat{\mathbb{E}}^{\varepsilon}\left[ \sup_{s\in \lbrack t,T]}\left \vert Y_{s}^{t,x}\right \vert ^{p}+\left( \int_{t}^{T}\left \vert Z_{s} ^{t,x}\right \vert ^{2}d\langle B\rangle_{s}\right) ^{p/2}+\left \vert K_{T}^{t,x}\right \vert ^{p}\right] \\ & \leq C\left( 1+\hat{\mathbb{E}}^{\varepsilon}\left[ \sup_{s\in \lbrack t,T]}\hat{\mathbb{E}}_{s}^{\varepsilon}\left[ \left \vert \varphi(B_{T} ^{t,x})\right \vert ^{p}\right] \right] \right) \\ & \leq C\left( 1+|x|^{p}+\hat{\mathbb{E}}^{\varepsilon}\left[ \sup _{s\in \lbrack t,T]}\hat{\mathbb{E}}_{s}^{\varepsilon}\left[ \left \vert B_{T}-B_{t}\right \vert ^{p}\right] \right] \right) \\ & \leq C(1+|x|^{p}), \end{align*} where the constant $C>0$ depends on $p$, $\bar{\sigma}$, $\varphi$, $h$ and $T$. Set $\hat{Y}_{s}^{\Delta}=Y_{s}^{t,x+\Delta}-Y_{s}^{t,x}$ and $\hat{Z} _{s}^{\Delta}=Z_{s}^{t,x+\Delta}-Z_{s}^{t,x}$ for $s\in \lbrack t,T]$. For each given $P\in \mathcal{P}_{t,x}^{\varepsilon}$, we know that $K^{t,x}=0$ $P$-a.s. by $E_{P}[K_{T}^{t,x}]=0$. Applying It\^{o}'s formula to $|\hat{Y}_{s} ^{\Delta}|^{2}$ on $[t,T]$ under $P$, we obtain \begin{equation} |\hat{Y}_{t}^{\Delta}|^{2}+\int_{t}^{T}|\hat{Z}_{r}^{\Delta}|^{2}d\langle B\rangle_{r}=|\hat{Y}_{T}^{\Delta}|^{2}+2\int_{t}^{T}\hat{Y}_{r}^{\Delta} \hat{h}_{r}d\langle B\rangle_{r}-2\int_{t}^{T}\hat{Y}_{r}^{\Delta}\hat{Z} _{r}^{\Delta}dB_{r}-2\int_{t}^{T}\hat{Y}_{r}^{\Delta}dK_{r}^{t,x+\Delta}, \label{e2-31} \end{equation} where \begin{equation} |\hat{h}_{r}|=|h(Y_{r}^{t,x+\Delta},Z_{r}^{t,x+\Delta})-h(Y_{r}^{t,x} ,Z_{r}^{t,x})|\leq \sup_{(y,z)\in \mathbb{R}^{2}}(|h_{y}^{\prime}(y,z)|+|h_{z} ^{\prime}(y,z)|)(|\hat{Y}_{r}^{\Delta}|+|\hat{Z}_{r}^{\Delta}|). \label{e2-32} \end{equation} Since $K^{t,x+\Delta}$ is non-increasing with $K_{t}^{t,x+\Delta}=0$ and $d\langle B\rangle_{r}\leq(\bar{\sigma}^{2}+\varepsilon^{2})dr\leq2\bar {\sigma}^{2}dr$ under $P$, we deduce by (\ref{e2-31}) and (\ref{e2-32}) that \begin{equation} E_{P}\left[ \left( \int_{t}^{T}|\hat{Z}_{r}^{\Delta}|^{2}d\langle B\rangle_{r}\right) ^{p/2}\right] \leq CE_{P}\left[ \sup_{r\in \lbrack t,T]}\left \vert \hat{Y}_{r}^{\Delta}\right \vert ^{p}+\left( \sup_{r\in \lbrack t,T]}\left \vert \hat{Y}_{r}^{\Delta}\right \vert ^{p/2}\right) \left \vert K_{T}^{t,x+\Delta}\right \vert ^{p/2}\right] , \label{e2-33} \end{equation} where $C>0$ depends on $p$, $\bar{\sigma}$, $h$ and $T$. Noting that \[ K_{T}^{t,x+\Delta}=\hat{Y}_{T}^{\Delta}-\hat{Y}_{t}^{\Delta}+\int_{t}^{T} \hat{h}_{r}d\langle B\rangle_{r}-\int_{t}^{T}\hat{Z}_{r}^{\Delta}dB_{r},\text{ }P\text{-a.s.,} \] we get \begin{equation} E_{P}\left[ \left \vert K_{T}^{t,x+\Delta}\right \vert ^{p}\right] \leq CE_{P}\left[ \sup_{r\in \lbrack t,T]}\left \vert \hat{Y}_{r}^{\Delta }\right \vert ^{p}+\left( \int_{t}^{T}|\hat{Z}_{r}^{\Delta}|^{2}d\langle B\rangle_{r}\right) ^{p/2}\right] , \label{e2-34} \end{equation} where $C>0$ depends on $p$, $\bar{\sigma}$, $h$ and $T$. 
Thus we obtain by (\ref{e2-33}) and (\ref{e2-34}) that \begin{equation} E_{P}\left[ \left( \int_{t}^{T}|\hat{Z}_{r}^{\Delta}|^{2}d\langle B\rangle_{r}\right) ^{p/2}+\left \vert K_{T}^{t,x+\Delta}\right \vert ^{p}\right] \leq CE_{P}\left[ \sup_{r\in \lbrack t,T]}\left \vert \hat{Y} _{r}^{\Delta}\right \vert ^{p}\right] , \label{e2-35} \end{equation} where $C>0$ depends on $p$, $\bar{\sigma}$, $h$ and $T$. By (\ref{e2-27}) and (\ref{e2-35}), we obtain (\ref{e2-29}). By the same method, we obtain (\ref{e2-30}). \end{proof} In the following theorem, we obtain the formula of $\partial_{x} u_{\varepsilon}$ based on $u_{\varepsilon}(t,x)=Y_{t}^{t,x}$. \begin{theorem} \label{th2-4}Suppose that $\varphi \in C_{0}^{\infty}(\mathbb{R})$ and $h\in C_{0}^{\infty}(\mathbb{R}^{2})$. Let $u_{\varepsilon}$ be the solution of PDE (\ref{new-e2-11}). Then, for each $(t,x)\in \lbrack0,T)\times \mathbb{R}$, we have \begin{equation} \partial_{x}u_{\varepsilon}(t,x)=E_{P}\left[ \Gamma_{T}^{t,x}\varphi^{\prime }(B_{T}^{t,x})\right] \text{ for any }P\in \mathcal{P}_{t,x}^{\varepsilon}, \label{e2-36} \end{equation} where $(\Gamma_{s}^{t,x})_{s\in \lbrack t,T]}$ is the solution of the following $G$-SDE: \begin{equation} d\Gamma_{s}^{t,x}=h_{y}^{\prime}(Y_{s}^{t,x},Z_{s}^{t,x})\Gamma_{s} ^{t,x}d\langle B\rangle_{s}+h_{z}^{\prime}(Y_{s}^{t,x},Z_{s}^{t,x})\Gamma _{s}^{t,x}dB_{s},\text{ }\Gamma_{t}^{t,x}=1. \label{e2-37} \end{equation} \end{theorem} \begin{proof} For each $\Delta \in \mathbb{R}$, we use the notations $(\hat{Y}_{s}^{\Delta })_{s\in \lbrack t,T]}$ and $(\hat{Z}_{s}^{\Delta})_{s\in \lbrack t,T]}$ as in the proof of Proposition \ref{pro2-3}. Then, for any given $P\in \mathcal{P}_{t,x}^{\varepsilon}$, we have \[ \hat{Y}_{s}^{\Delta}=\hat{Y}_{T}^{\Delta}+\int_{s}^{T}\hat{h}_{r}d\langle B\rangle_{r}-\int_{s}^{T}\hat{Z}_{r}^{\Delta}dB_{r}-\int_{s}^{T} dK_{r}^{t,x+\Delta},\text{ }P\text{-a.s.,} \] where \begin{align*} \hat{h}_{r} & =h(Y_{r}^{t,x+\Delta},Z_{r}^{t,x+\Delta})-h(Y_{r}^{t,x} ,Z_{r}^{t,x})\\ & =h_{y}^{\prime}(Y_{r}^{t,x},Z_{r}^{t,x})\hat{Y}_{r}^{\Delta}+h_{z}^{\prime }(Y_{r}^{t,x},Z_{r}^{t,x})\hat{Z}_{r}^{\Delta}+I_{r}^{\Delta}. \end{align*} Since $h\in C_{0}^{\infty}(\mathbb{R}^{2})$, we get $|I_{r}^{\Delta}|\leq C(|\hat{Y}_{r}^{\Delta}|^{2}+|\hat{Z}_{r}^{\Delta}|^{2})$, where $C>0$ depends on $h$. Applying It\^{o}'s formula to $\hat{Y}_{s}^{\Delta}\Gamma_{s}^{t,x}$ on $[t,T]$ under $P$, we obtain \begin{equation} \hat{Y}_{t}^{\Delta}=\hat{Y}_{T}^{\Delta}\Gamma_{T}^{t,x}+\int_{t}^{T} \Gamma_{r}^{t,x}I_{r}^{\Delta}d\langle B\rangle_{r}-\int_{t}^{T}(\Gamma _{r}^{t,x}\hat{Z}_{r}^{\Delta}+h_{z}^{\prime}(Y_{r}^{t,x},Z_{r}^{t,x} )\Gamma_{r}^{t,x}\hat{Y}_{r}^{\Delta})dB_{r}-\int_{t}^{T}\Gamma_{r} ^{t,x}dK_{r}^{t,x+\Delta}. \label{e2-38} \end{equation} Noting that $\hat{Y}_{t}^{\Delta}=u_{\varepsilon}(t,x+\Delta)-u_{\varepsilon }(t,x)$, we get \begin{equation} \frac{u_{\varepsilon}(t,x+\Delta)-u_{\varepsilon}(t,x)}{\Delta}=\frac {1}{\Delta}E_{P}\left[ \hat{Y}_{T}^{\Delta}\Gamma_{T}^{t,x}+\int_{t} ^{T}\Gamma_{r}^{t,x}I_{r}^{\Delta}d\langle B\rangle_{r}-\int_{t}^{T}\Gamma _{r}^{t,x}dK_{r}^{t,x+\Delta}\right] . 
\label{e2-39} \end{equation} By (\ref{e2-27}), (\ref{e2-29}), $\varphi \in C_{0}^{\infty}(\mathbb{R})$ and $h\in C_{0}^{\infty}(\mathbb{R}^{2})$, we can easily deduce that \begin{equation} \lim_{\Delta \rightarrow0}\frac{1}{\Delta}E_{P}\left[ \hat{Y}_{T}^{\Delta }\Gamma_{T}^{t,x}+\int_{t}^{T}\Gamma_{r}^{t,x}I_{r}^{\Delta}d\langle B\rangle_{r}\right] =E_{P}\left[ \Gamma_{T}^{t,x}\varphi^{\prime} (B_{T}^{t,x})\right] . \label{e2-40} \end{equation} Since $\Gamma_{r}^{t,x}>0$, $dK_{r}^{t,x+\Delta}\leq0$ and $\partial _{x}u_{\varepsilon}(t,x)$ exists, we obtain by (\ref{e2-39}) and (\ref{e2-40}) that \[ E_{P}\left[ \Gamma_{T}^{t,x}\varphi^{\prime}(B_{T}^{t,x})\right] \leq \partial_{x+}u_{\varepsilon}(t,x)=\partial_{x}u_{\varepsilon }(t,x)=\partial_{x-}u_{\varepsilon}(t,x)\leq E_{P}\left[ \Gamma_{T} ^{t,x}\varphi^{\prime}(B_{T}^{t,x})\right] , \] which implies the desired result. \end{proof} Now we give the estimate for $\partial_{xx}^{2}u_{\varepsilon}$. \begin{theorem} \label{th2-5}Suppose that $\varphi \in C_{0}^{\infty}(\mathbb{R})$ and $h\in C_{0}^{\infty}(\mathbb{R}^{2})$. Let $u_{\varepsilon}$ be the solution of PDE (\ref{new-e2-11}). Then \[ \partial_{xx}^{2}u_{\varepsilon}(t,x)\geq-C\text{ for }(t,x)\in \lbrack 0,T)\times \mathbb{R}, \] where the constant $C>0$ depends on $\bar{\sigma}$, $\varphi$, $h$ and $T$. \end{theorem} \begin{proof} For each $(t,x,\Delta)\in \lbrack0,T)\times \mathbb{R}\times \mathbb{R}$, we use the notations $(\hat{Y}_{s}^{\Delta})_{s\in \lbrack t,T]}$ and $(\hat{Z} _{s}^{\Delta})_{s\in \lbrack t,T]}$ as in the proof of Proposition \ref{pro2-3}. For any given $P\in \mathcal{P}_{t,x}^{\varepsilon}$, we obtain by (\ref{e2-38}) that \[ \hat{Y}_{t}^{\Delta}=E_{P}\left[ \hat{Y}_{T}^{\Delta}\Gamma_{T}^{t,x} +\int_{t}^{T}\Gamma_{r}^{t,x}I_{r}^{\Delta}d\langle B\rangle_{r}-\int_{t} ^{T}\Gamma_{r}^{t,x}dK_{r}^{t,x+\Delta}\right] . \] Since $\Gamma_{r}^{t,x}>0$ and $dK_{r}^{t,x+\Delta}\leq0$, we get \[ \hat{Y}_{t}^{\Delta}\geq E_{P}\left[ \hat{Y}_{T}^{\Delta}\Gamma_{T} ^{t,x}+\int_{t}^{T}\Gamma_{r}^{t,x}I_{r}^{\Delta}d\langle B\rangle_{r}\right] . \] Noting that $|\hat{Y}_{T}^{\Delta}-\varphi^{\prime}(B_{T}^{t,x})\Delta|\leq C\Delta^{2}$ and $|I_{r}^{\Delta}|\leq C(|\hat{Y}_{r}^{\Delta}|^{2}+|\hat {Z}_{r}^{\Delta}|^{2})$, where $C>0$ depends on $\varphi$ and $h$, we deduce by Proposition \ref{pro2-3} that \begin{equation} \hat{Y}_{t}^{\Delta}\geq E_{P}\left[ \hat{Y}_{T}^{\Delta}\Gamma_{T} ^{t,x}+\int_{t}^{T}\Gamma_{r}^{t,x}I_{r}^{\Delta}d\langle B\rangle_{r}\right] \geq E_{P}\left[ \Gamma_{T}^{t,x}\varphi^{\prime}(B_{T}^{t,x})\right] \Delta-C\Delta^{2}, \label{e2-43} \end{equation} where $C>0$ depends on $\bar{\sigma}$, $\varphi$, $h$ and $T$. For any given $P^{\Delta}\in \mathcal{P}_{t,x+\Delta}^{\varepsilon}$, applying It\^{o}'s formula to $\hat{Y}_{s}^{\Delta}\Gamma_{s}^{t,x+\Delta}$ on $[t,T]$ under $P^{\Delta}$, we obtain \[ \hat{Y}_{t}^{\Delta}=E_{P^{\Delta}}\left[ \hat{Y}_{T}^{\Delta}\Gamma _{T}^{t,x+\Delta}+\int_{t}^{T}\Gamma_{r}^{t,x+\Delta}\tilde{I}_{r}^{\Delta }d\langle B\rangle_{r}+\int_{t}^{T}\Gamma_{r}^{t,x+\Delta}dK_{r}^{t,x}\right] , \] where $\tilde{I}_{r}^{\Delta}=h(Y_{r}^{t,x+\Delta},Z_{r}^{t,x+\Delta} )-h(Y_{r}^{t,x},Z_{r}^{t,x})-h_{y}^{\prime}(Y_{r}^{t,x+\Delta},Z_{r} ^{t,x+\Delta})\hat{Y}_{r}^{\Delta}-h_{z}^{\prime}(Y_{r}^{t,x+\Delta} ,Z_{r}^{t,x+\Delta})\hat{Z}_{r}^{\Delta}$. 
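As with $I_{r}^{\Delta}$, since $h\in C_{0}^{\infty}(\mathbb{R}^{2})$, a second-order Taylor expansion of $h$ around $(Y_{r}^{t,x+\Delta},Z_{r}^{t,x+\Delta})$ gives
\[
|\tilde{I}_{r}^{\Delta}|\leq C(|\hat{Y}_{r}^{\Delta}|^{2}+|\hat{Z}_{r}^{\Delta}|^{2}),
\]
where $C>0$ depends only on the second-order derivatives of $h$; this bound, together with (\ref{e2-30}), is used in the derivation of (\ref{e2-44}) below.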
Since $\Gamma_{r}^{t,x+\Delta}>0$ and $dK_{r}^{t,x}\leq0$, we get \[ \hat{Y}_{t}^{\Delta}\leq E_{P^{\Delta}}\left[ \hat{Y}_{T}^{\Delta}\Gamma _{T}^{t,x+\Delta}+\int_{t}^{T}\Gamma_{r}^{t,x+\Delta}\tilde{I}_{r}^{\Delta }d\langle B\rangle_{r}\right] . \] Similar to the proof of (\ref{e2-43}), we have \begin{equation} \hat{Y}_{t}^{\Delta}\leq E_{P^{\Delta}}\left[ \hat{Y}_{T}^{\Delta}\Gamma _{T}^{t,x+\Delta}+\int_{t}^{T}\Gamma_{r}^{t,x+\Delta}\tilde{I}_{r}^{\Delta }d\langle B\rangle_{r}\right] \leq E_{P^{\Delta}}\left[ \Gamma _{T}^{t,x+\Delta}\varphi^{\prime}(B_{T}^{t,x+\Delta})\right] \Delta +C\Delta^{2}, \label{e2-44} \end{equation} where $C>0$ depends on $\bar{\sigma}$, $\varphi$, $h$ and $T$. By Theorem \ref{th2-4}, (\ref{e2-43}) and (\ref{e2-44}), we obtain \[ \frac{\partial_{x}u_{\varepsilon}(t,x+\Delta)-\partial_{x}u_{\varepsilon }(t,x)}{\Delta}=\frac{1}{\Delta^{2}}\left \{ E_{P^{\Delta}}\left[ \Gamma _{T}^{t,x+\Delta}\varphi^{\prime}(B_{T}^{t,x+\Delta})\right] \Delta -E_{P}\left[ \Gamma_{T}^{t,x}\varphi^{\prime}(B_{T}^{t,x})\right] \Delta \right \} \geq-C, \] which implies the desired result. \end{proof} \begin{remark} The constant $C$ in the above theorem is independent of $\varepsilon \in (0,\bar{\sigma})$. \end{remark} \subsection{Existence and uniqueness of $G$-BSDEs} We first give the following existence and uniqueness result of $G$-BSDE (\ref{e2-12}). \begin{lemma} \label{pro2-6}Let $\varphi \in C_{0}^{\infty}(\mathbb{R})$ and $h\in C_{0}^{\infty}(\mathbb{R}^{2})$. Then, for each given $p>1$, $G$-BSDE (\ref{e2-12}) has a unique $L^{p}$-solution $(Y,Z,K)$ in the $G$-expectation space. \end{lemma} \begin{proof} The uniqueness is due to (\ref{e2-2}) and (\ref{e2-8}) in Proposition \ref{pro2-1}. In the following, we give the proof of existence. By Lemma \ref{pro2-2}, for each given $p>1$, $G$-BSDE (\ref{e2-12}) has a unique $L^{p}$-solution $(Y,Z,K)$ in the extended $\tilde{G}$-expectation space, i.e., \begin{equation} Y_{t}=\varphi(B_{T})+\int_{t}^{T}h(Y_{s},Z_{s})d\langle B\rangle_{s}-\int _{t}^{T}Z_{s}dB_{s}-(K_{T}-K_{t}). \label{e2-45} \end{equation} Let $u_{\varepsilon}$ be the solution of PDE (\ref{new-e2-11}) for $\varepsilon \in(0,\bar{\sigma})$. Applying It\^{o}'s formula to $u_{\varepsilon}(t,B_{t})$ under $\tilde{G}$-expectation, we get by (\ref{e2-14}) that \begin{equation} \begin{array} [c]{cl} \tilde{Y}_{t}^{\varepsilon}= & \varphi(B_{T})+\int_{t}^{T}h(\tilde{Y} _{s}^{\varepsilon},\tilde{Z}_{s}^{\varepsilon})d\langle B\rangle_{s}-\int _{t}^{T}\frac{1}{2}\varepsilon^{2}\left( \partial_{xx}^{2}u_{\varepsilon }(s,B_{s})+2h(\tilde{Y}_{s}^{\varepsilon},\tilde{Z}_{s}^{\varepsilon})\right) ^{-}ds\\ & -\int_{t}^{T}\tilde{Z}_{s}^{\varepsilon}dB_{s}-(L_{T}^{\varepsilon} -L_{t}^{\varepsilon}), \end{array} \label{e2-25} \end{equation} where $\tilde{Y}_{t}^{\varepsilon}=u_{\varepsilon}(t,B_{t})$, $\tilde{Z} _{t}^{\varepsilon}=\partial_{x}u_{\varepsilon}(t,B_{t})$ and \[ \begin{array} [c]{cl} L_{t}^{\varepsilon}= & \int_{0}^{t}\frac{1}{2}\left[ \partial_{xx} ^{2}u_{\varepsilon}(s,B_{s})+2h(\tilde{Y}_{s}^{\varepsilon},\tilde{Z} _{s}^{\varepsilon})\right] d\langle B\rangle_{s}-\int_{0}^{t}G\left( \partial_{xx}^{2}u_{\varepsilon}(s,B_{s})+2h(\tilde{Y}_{s}^{\varepsilon },\tilde{Z}_{s}^{\varepsilon})\right) ds\\ & -\int_{0}^{t}\frac{1}{2}\varepsilon^{2}\left( \partial_{xx}^{2} u_{\varepsilon}(s,B_{s})+2h(\tilde{Y}_{s}^{\varepsilon},\tilde{Z} _{s}^{\varepsilon})\right) ^{+}ds. 
\end{array} \] Since $0\leq d\langle B\rangle_{s}\leq \bar{\sigma}^{2}ds$ under $\mathbb{\tilde{E}}$, we deduce that $L^{\varepsilon}$ is non-increasing with $L_{0}^{\varepsilon}=0$ under $\mathbb{\tilde{E}}$. In the proof of Lemma \ref{pro2-2}, we know that, for each given $p>1$, \begin{equation} \mathbb{\tilde{E}}\left[ \sup_{t\leq T}|\tilde{Y}_{t}^{\varepsilon} -Y_{t}|^{p}+\left( \int_{0}^{T}|\partial_{x}u_{\varepsilon}(t,B_{t} +\varepsilon \tilde{B}_{t})-Z_{t}|^{2}d\langle B\rangle_{t}\right) ^{p/2}\right] \rightarrow0\text{ as }\varepsilon \rightarrow0. \label{e2-46} \end{equation} Thus $|Y|+|Z|\leq C$ by (\ref{e2-14}), where $C>0$ depends on $\bar{\sigma}$, $\varphi$, $h$ and $T$. By (\ref{e2-45}), we get \[ \mathbb{\tilde{E}}\left[ |K_{T}|^{2}\right] \leq C\mathbb{\tilde{E}}\left[ \sup_{t\leq T}|Y_{t}|^{2}+1+\int_{0}^{T}|Z_{t}|^{2}d\langle B\rangle _{t}\right] \leq C, \] where $C>0$ depends on $\bar{\sigma}$, $\varphi$, $h$ and $T$. By Theorem \ref{th2-5}, we know $\partial_{xx}^{2}u_{\varepsilon}\geq-C$ for $\varepsilon \in(0,\bar{\sigma})$, where $C>0$ depends on $\bar{\sigma}$, $\varphi$, $h$ and $T$. Thus \[ \left( \partial_{xx}^{2}u_{\varepsilon}(s,B_{s})+2h(\tilde{Y}_{s} ^{\varepsilon},\tilde{Z}_{s}^{\varepsilon})\right) ^{-}\leq C\text{ for } s\in \lbrack0,T]\text{ and }\varepsilon \in(0,\bar{\sigma}), \] where $C>0$ depends on $\bar{\sigma}$, $\varphi$, $h$ and $T$. By (\ref{e2-14}) and (\ref{e2-25}), we have $|\tilde{Y}^{\varepsilon}|+|\tilde {Z}^{\varepsilon}|\leq C$ for $\varepsilon \in(0,\bar{\sigma})$ and \[ \mathbb{\tilde{E}}\left[ |L_{T}^{\varepsilon}|^{2}\right] \leq C\mathbb{\tilde{E}}\left[ \sup_{t\leq T}|\tilde{Y}_{t}^{\varepsilon} |^{2}+1+\int_{0}^{T}|\tilde{Z}_{t}^{\varepsilon}|^{2}d\langle B\rangle _{t}\right] \leq C\text{ for }\varepsilon \in(0,\bar{\sigma}), \] where $C>0$ depends on $\bar{\sigma}$, $\varphi$, $h$ and $T$. Applying It\^{o}'s formula to $|\tilde{Y}_{t}^{\varepsilon}-Y_{t}|^{2}$ on $[0,T]$, we obtain \begin{align*} \mathbb{\tilde{E}}\left[ \int_{0}^{T}|\tilde{Z}_{t}^{\varepsilon}-Z_{t} |^{2}d\langle B\rangle_{t}\right] & \leq C\mathbb{\tilde{E}}\left[ \int_{0}^{T}|\tilde{Y}_{t}^{\varepsilon}-Y_{t}|dt+(|L_{T}^{\varepsilon }|+|K_{T}|)\sup_{t\leq T}|\tilde{Y}_{t}^{\varepsilon}-Y_{t}|\right] \\ & \leq C\left( \mathbb{\tilde{E}}\left[ \sup_{t\leq T}|\tilde{Y} _{t}^{\varepsilon}-Y_{t}|^{2}\right] \right) ^{1/2}, \end{align*} where $C>0$ depends on $\bar{\sigma}$, $\varphi$, $h$ and $T$. By (\ref{e2-46}) and $|Z|+|\tilde{Z}^{\varepsilon}|\leq C$ for $\varepsilon \in(0,\bar{\sigma})$, we get \[ \lim_{\varepsilon \downarrow0}\mathbb{\tilde{E}}\left[ \left( \int_{0} ^{T}|\tilde{Z}_{t}^{\varepsilon}-Z_{t}|^{2}d\langle B\rangle_{t}\right) ^{p/2}\right] =0\text{ for each }p>1. \] Thus $Z\in H_{G}^{2,p}(0,T;\langle B\rangle)$, and then $K_{T}\in L_{G} ^{p}(\Omega_{T})$ by (\ref{e2-45}). \end{proof} Moreover, we extend the above result to the following two lemmas. \begin{lemma} \label{le2-7}Let $t_{1}\in \lbrack0,T)$, $\varphi \in C_{b.Lip}(\mathbb{R})$, $h_{1}\in C_{0}^{\infty}(\mathbb{R})$ and $h_{2}\in C_{0}^{\infty} (\mathbb{R}^{2})$. Then, for each given $p>1$, $G$-BSDE \begin{equation} Y_{t}=\varphi(B_{T}-B_{t_{1}})+\int_{t}^{T}h_{1}(Y_{s})ds+\int_{t}^{T} h_{2}(Y_{s},Z_{s})d\langle B\rangle_{s}-\int_{t}^{T}Z_{s}dB_{s}-(K_{T}-K_{t}) \label{e2-47} \end{equation} has a unique $L^{p}$-solution $(Y,Z,K)$ in the $G$-expectation space. 
Furthermore, $Y_{t}=u(t,B_{t}-B_{t_{1}})$ for $t\in \lbrack t_{1},T]$, where $u(t,x)=Y_{t}^{t,x}$ and $(Y_{s}^{t,x})_{s\in \lbrack t,T]}$ satisfies the following $G$-BSDE: \begin{equation} Y_{s}^{t,x}=\varphi(x+B_{T}-B_{t})+\int_{s}^{T}h_{1}(Y_{r}^{t,x})dr+\int _{s}^{T}h_{2}(Y_{r}^{t,x},Z_{r}^{t,x})d\langle B\rangle_{r}-\int_{s}^{T} Z_{r}^{t,x}dB_{r}-(K_{T}^{t,x}-K_{s}^{t,x}). \label{e2-48} \end{equation} \end{lemma} \begin{proof} The uniqueness is due to (\ref{e2-2}) and (\ref{e2-8}) in Proposition \ref{pro2-1}. For each given $p>1$, we can find a sequence $\varphi_{n}\in C_{0}^{\infty}(\mathbb{R})$, $n\geq1$, such that $\mathbb{\hat{E}}\left[ |\varphi_{n}(B_{T}-B_{t_{1}})-\varphi(B_{T}-B_{t_{1}})|^{p+1}\right] \rightarrow0$ as $n\rightarrow \infty$. Similar to the proof of Lemma \ref{pro2-6}, the following $G$-BSDE \begin{equation} Y_{t}^{n}=\varphi_{n}(B_{T}-B_{t_{1}})+\int_{t}^{T}h_{1}(Y_{s}^{n})ds+\int _{t}^{T}h_{2}(Y_{s}^{n},Z_{s}^{n})d\langle B\rangle_{s}-\int_{t}^{T}Z_{s} ^{n}dB_{s}-(K_{T}^{n}-K_{t}^{n}) \label{e2-49} \end{equation} has a unique $L^{p}$-solution $(Y^{n},Z^{n},K^{n})$ in the $G$-expectation space. By (\ref{e2-2}), (\ref{e2-8}) in Proposition \ref{pro2-1} and Theorem \ref{th1-1}, we can easily deduce \[ \lim_{n,m\rightarrow \infty}\mathbb{\hat{E}}\left[ \sup_{t\leq T}|Y_{t} ^{n}-Y_{t}^{m}|^{p}+\left( \int_{0}^{T}|Z_{t}^{n}-Z_{t}^{m}|^{2}d\langle B\rangle_{t}\right) ^{p/2}+|K_{T}^{n}-K_{T}^{m}|^{p}\right] =0. \] Thus there exist $Y\in S_{G}^{p}(0,T)$, $Z\in H_{G}^{2,p}(0,T;\langle B\rangle)$ and a non-increasing $G$-martingale $K$ with $K_{0}=0$ and $K_{T}\in L_{G}^{p}(\Omega_{T})$ such that \[ \lim_{n\rightarrow \infty}\mathbb{\hat{E}}\left[ \sup_{t\leq T}|Y_{t} ^{n}-Y_{t}|^{p}+\left( \int_{0}^{T}|Z_{t}^{n}-Z_{t}|^{2}d\langle B\rangle _{t}\right) ^{p/2}+|K_{T}^{n}-K_{T}|^{p}\right] =0. \] From this we can easily get \[ \lim_{n\rightarrow \infty}\mathbb{\hat{E}}\left[ \sup_{t\leq T}\left( \int_{t}^{T}|h_{1}(Y_{s}^{n})-h_{1}(Y_{s})|ds+\int_{t}^{T}|h_{2}(Y_{s} ^{n},Z_{s}^{n})-h_{2}(Y_{s},Z_{s})|d\langle B\rangle_{s}+\left \vert \int _{t}^{T}(Z_{s}^{n}-Z_{s})dB_{s}\right \vert \right) ^{p}\right] =0. \] Thus $(Y,Z,K)$ satisfies $G$-BSDE (\ref{e2-47}) by taking $n\rightarrow \infty$ in (\ref{e2-49}). From the above proof, we know that $G$-BSDE (\ref{e2-48}) has a unique $L^{p} $-solution $(Y^{t,x},Z^{t,x},K^{t,x})$ and $Y_{t}^{t,x}\in \mathbb{R}$. By (\ref{e2-2}) in Proposition \ref{pro2-1}, we obtain that \[ |u(t,x)-u(t,x^{\prime})|\leq C|x-x^{\prime}|\text{ and }|Y_{t}-u(t,x)|\leq C|B_{t}-B_{t_{1}}-x| \] where the constant $C>0$ depends on $\bar{\sigma}$, $\varphi$, $h_{1}$, $h_{2}$ and $T$. Thus we get $Y_{t}=u(t,B_{t}-B_{t_{1}})$ for $t\in \lbrack t_{1},T]$. \end{proof} \begin{lemma} \label{le2-8}Let $\xi \in Lip(\Omega_{T})$, $f(t,y)=\sum_{i=1}^{N_{1}}f_{t} ^{i}h_{1}^{i}(y)$ and $g(t,y,z)=\sum_{j=1}^{N_{2}}g_{t}^{j}h_{2}^{j}(y,z)$ with $f^{i}$, $g^{j}\in M^{0}(0,T)$, $h_{1}^{i}\in C_{0}^{\infty}(\mathbb{R} )$, $h_{2}^{j}\in C_{0}^{\infty}(\mathbb{R}^{2})$, $i\leq N_{1}$, $j\leq N_{2}$. Then $G$-BSDE (\ref{e2-1}) has a unique $L^{p}$-solution $(Y,Z,K)$ for each given $p>1$. \end{lemma} \begin{proof} The uniqueness is due to (\ref{e2-2}) and (\ref{e2-8}) in Proposition \ref{pro2-1}. For the existence, we only prove the special case $\xi =\varphi(B_{t_{1}},B_{T}-B_{t_{1}})$, $f(t,y)=0$ and $g(t,y,z)=(I_{[0,t_{1} )}(t)+\psi(B_{t_{1}})I_{[t_{1},T]}(t))h_{2}(y,z)$, the general case is similar. 
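(Roughly speaking, for a general $\xi \in Lip(\Omega_{T})$ and coefficients of the above form, $\xi$ and the processes $f^{i}$, $g^{j}$ depend on the increments of $B$ over finitely many partition points $0=t_{0}<t_{1}<\cdots<t_{N}=T$, and one can repeat the construction below on each subinterval, working backward from $[t_{N-1},T]$ to $[0,t_{1}]$ as in the last step of this proof.)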
By Lemma \ref{le2-7}, $G$-BSDE \begin{equation} Y_{t}^{x}=\varphi(x,B_{T}-B_{t_{1}})+\int_{t}^{T}\psi(x)h_{2}(Y_{s}^{x} ,Z_{s}^{x})d\langle B\rangle_{s}-\int_{t}^{T}Z_{s}^{x}dB_{s}-(K_{T}^{x} -K_{t}^{x}) \label{e2-50} \end{equation} has a unique $L^{p}$-solution $(Y^{x},Z^{x},K^{x})$ for each given $p>1$. Furthermore, $Y_{t}^{x}=u(t,x,B_{t}-B_{t_{1}})$ for $t\in \lbrack t_{1},T]$, where $u(t,x,x^{\prime})=Y_{t}^{t,x,x^{\prime}}$ and $(Y_{s}^{t,x,x^{\prime} })_{s\in \lbrack t,T]}$ satisfies the following $G$-BSDE: \[ Y_{s}^{t,x,x^{\prime}}=\varphi(x,x^{\prime}+B_{T}-B_{t})+\int_{s}^{T} \psi(x)h_{2}(Y_{r}^{t,x,x^{\prime}},Z_{r}^{t,x,x^{\prime}})d\langle B\rangle_{r}-\int_{s}^{T}Z_{r}^{t,x,x^{\prime}}dB_{r}-(K_{T}^{t,x,x^{\prime} }-K_{s}^{t,x,x^{\prime}}). \] By (\ref{e2-2}) in Proposition \ref{pro2-1}, we obtain that, for $t\in \lbrack t_{1},T]$, $x$, $x^{\prime}$, $\tilde{x}$, $\tilde{x}^{\prime}\in \mathbb{R}$, \begin{equation} |u(t,x,x^{\prime})|\leq C\text{ and }|u(t,x,x^{\prime})-u(t,\tilde{x} ,\tilde{x}^{\prime})|\leq C(|x-\tilde{x}|+|x^{\prime}-\tilde{x}^{\prime}|) \label{e2-51} \end{equation} where the constant $C>0$ depends on $\bar{\sigma}$, $\varphi$, $\psi$, $h_{2}$ and $T$. For each positive integer $n$, by partition of unity theorem, we can find $l_{i}^{n}\in C_{0}^{\infty}(\mathbb{R})$, $i=1$,$\ldots$,$k_{n}$, such that \[ 0\leq l_{i}^{n}\leq1\text{ and }\lambda(\text{supp}(l_{i}^{n}))\leq \frac{1} {n}\text{ for }i\leq k_{n}\text{, }I_{[-n,n]}(x)\leq \sum_{i=1}^{k_{n}} l_{i}^{n}(x)\leq1, \] where $\lambda(\cdot)$ is the Lebesgue measure. For $t\in \lbrack t_{1},T]$, set \[ Y_{t}^{n}=\sum_{i=1}^{k_{n}}l_{i}^{n}(B_{t_{1}})Y_{t}^{x_{i}^{n}}\text{, }Z_{t}^{n}=\sum_{i=1}^{k_{n}}l_{i}^{n}(B_{t_{1}})Z_{t}^{x_{i}^{n}}\text{, }K_{t}^{n}=\sum_{i=1}^{k_{n}}l_{i}^{n}(B_{t_{1}})K_{t}^{x_{i}^{n}}\text{,} \] where $l_{i}^{n}(x_{i}^{n})>0$. Then, by (\ref{e2-50}), we get that, for $t\in \lbrack t_{1},T]$, \begin{equation} Y_{t}^{n}=Y_{T}^{n}+\int_{t}^{T}\sum_{i=1}^{k_{n}}l_{i}^{n}(B_{t_{1}} )\psi(x_{i}^{n})h_{2}(Y_{s}^{x_{i}^{n}},Z_{s}^{x_{i}^{n}})d\langle B\rangle_{s}-\int_{t}^{T}Z_{s}^{n}dB_{s}-(K_{T}^{n}-K_{t}^{n}). \label{e2-52} \end{equation} It follows from (\ref{e2-51}) that \begin{align*} & |Y_{t}^{n}-u(t,B_{t_{1}},B_{t}-B_{t_{1}})|\\ & \leq \sum_{i=1}^{k_{n}}l_{i}^{n}(B_{t_{1}})|u(t,x_{i}^{n},B_{t}-B_{t_{1} })-u(t,B_{t_{1}},B_{t}-B_{t_{1}})|+\left( 1-\sum_{i=1}^{k_{n}}l_{i} ^{n}(B_{t_{1}})\right) |u(t,B_{t_{1}},B_{t}-B_{t_{1}})|\\ & \leq \frac{C}{n}+\frac{C}{n}|B_{t_{1}}|, \end{align*} which implies \begin{equation} \lim_{n,m\rightarrow \infty}\mathbb{\hat{E}}\left[ \sup_{t\in \lbrack t_{1} ,T]}|Y_{t}^{n}-Y_{t}^{m}|^{p}\right] =0. \label{e2-53} \end{equation} Noting that $|\sum_{i=1}^{k_{n}}l_{i}^{n}(B_{t_{1}})\psi(x_{i}^{n})h_{2} (Y_{s}^{x_{i}^{n}},Z_{s}^{x_{i}^{n}})|\leq C$, where $C$ depends on $\psi$ and $h_{2}$, we obtain by (\ref{e2-8}) in Proposition \ref{pro2-1} that \begin{equation} \lim_{n,m\rightarrow \infty}\mathbb{\hat{E}}\left[ \left( \int_{t_{1}} ^{T}|Z_{t}^{n}-Z_{t}^{m}|^{2}d\langle B\rangle_{t}\right) ^{p/2}\right] =0. 
\label{e2-54} \end{equation} It is easy to verify that \begin{align*} & \left \vert \sum_{i=1}^{k_{n}}l_{i}^{n}(B_{t_{1}})\psi(x_{i}^{n})h_{2} (Y_{s}^{x_{i}^{n}},Z_{s}^{x_{i}^{n}})-\psi(B_{t_{1}})h_{2}(Y_{s}^{n},Z_{s} ^{n})\right \vert \\ & \leq \frac{C}{n}(1+|B_{t_{1}}|)+C\sum_{i=1}^{k_{n}}l_{i}^{n}(B_{t_{1} })\left \vert h_{2}\left( \sum_{j=1}^{k_{n}}l_{j}^{n}(B_{t_{1}})Y_{s} ^{x_{i}^{n}},\sum_{j=1}^{k_{n}}l_{j}^{n}(B_{t_{1}})Z_{s}^{x_{i}^{n}}\right) -h_{2}(Y_{s}^{n},Z_{s}^{n})\right \vert \\ & \leq \frac{C}{n}(1+|B_{t_{1}}|)+C\sum_{i,j=1}^{k_{n}}l_{i}^{n}(B_{t_{1} })l_{j}^{n}(B_{t_{1}})\left( \left \vert Y_{s}^{x_{i}^{n}}-Y_{s}^{x_{j}^{n} }\right \vert +\left \vert Z_{s}^{x_{i}^{n}}-Z_{s}^{x_{j}^{n}}\right \vert \right) . \end{align*} By (\ref{e2-51}), we know that $\left \vert Y_{s}^{x_{i}^{n}}-Y_{s}^{x_{j}^{n} }\right \vert \leq C|x_{i}^{n}-x_{j}^{n}|$. Similar to the proof of (\ref{e2-8}) in Proposition \ref{pro2-1}, we deduce that, for each $P\in \mathcal{P}$, \begin{align*} & E_{P}\left[ \left( \int_{t_{1}}^{T}\left \vert Z_{t}^{x_{i}^{n}} -Z_{t}^{x_{j}^{n}}\right \vert ^{2}d\langle B\rangle_{t}\right) ^{p/2} \Big{|}\mathcal{B}(\Omega_{t_{1}})\right] \\ & \leq C\left \{ E_{P}\left[ \sup_{t\in \lbrack t_{1},T]}\left \vert Y_{t}^{x_{i}^{n}}-Y_{t}^{x_{j}^{n}}\right \vert ^{p}\Big{|}\mathcal{B} (\Omega_{t_{1}})\right] +\left( E_{P}\left[ \sup_{t\in \lbrack t_{1} ,T]}\left \vert Y_{t}^{x_{i}^{n}}-Y_{t}^{x_{j}^{n}}\right \vert ^{p} \Big{|}\mathcal{B}(\Omega_{t_{1}})\right] \right) ^{1/2}\right \} \\ & \leq C\left( |x_{i}^{n}-x_{j}^{n}|^{p}+|x_{i}^{n}-x_{j}^{n}|^{p/2}\right) . \end{align*} Noting that $l_{i}^{n}(B_{t_{1}})l_{j}^{n}(B_{t_{1}})|x_{i}^{n}-x_{j}^{n}|=0$ if $|x_{i}^{n}-x_{j}^{n}|>\frac{2}{n}$, we obtain \begin{equation} \lim_{n\rightarrow \infty}\sup_{P\in \mathcal{P}}E_{P}\left[ \left( \int_{t_{1}}^{T}\left \vert \sum_{i=1}^{k_{n}}l_{i}^{n}(B_{t_{1}})\psi (x_{i}^{n})h_{2}(Y_{s}^{x_{i}^{n}},Z_{s}^{x_{i}^{n}})-\psi(B_{t_{1}} )h_{2}(Y_{s}^{n},Z_{s}^{n})\right \vert d\langle B\rangle_{s}\right) ^{p}\right] =0. \label{e2-55} \end{equation} By (\ref{e2-52}), (\ref{e2-53}), (\ref{e2-54}) and (\ref{e2-55}), we get $\lim_{n,m\rightarrow \infty}\mathbb{\hat{E}}\left[ |(K_{T}^{n}-K_{t_{1}} ^{n})-(K_{T}^{m}-K_{t_{1}}^{m})|^{p}\right] =0$. Thus there exist $Y\in S_{G}^{p}(t_{1},T)$, $Z\in H_{G}^{2,p}(t_{1},T;\langle B\rangle)$ and a non-increasing $K$ with $K_{t_{1}}=0$ and $K_{T}\in L_{G}^{p}(\Omega_{T})$ such that \[ Y_{t}=\varphi(B_{t_{1}},B_{T}-B_{t_{1}})+\int_{t}^{T}\psi(B_{t_{1}} )h_{2}(Y_{s},Z_{s})d\langle B\rangle_{s}-\int_{t}^{T}Z_{s}dB_{s}-(K_{T} -K_{t})\text{ for }t\in \lbrack t_{1},T]. \] In the following, we prove that $K$ is a $G$-martingale. For each positive integer $n$, set \[ \tilde{l}_{i}^{n}(x)=I_{[-n+\frac{i}{n},-n+\frac{i+1}{n})}(x)\text{ for }i=0,\ldots,2n^{2}-1,\text{ }\tilde{l}_{2n^{2}}^{n}(x)=I_{[-n,n)^{c}}(x) \] and \[ \tilde{Y}_{t}^{n}=\sum_{i=0}^{2n^{2}}\tilde{l}_{i}^{n}(B_{t_{1}} )Y_{t}^{-n+\frac{i}{n}}\text{, }\tilde{Z}_{t}^{n}=\sum_{i=0}^{2n^{2}}\tilde {l}_{i}^{n}(B_{t_{1}})Z_{t}^{-n+\frac{i}{n}}\text{, }\tilde{K}_{t}^{n} =\sum_{i=0}^{2n^{2}}\tilde{l}_{i}^{n}(B_{t_{1}})K_{t}^{-n+\frac{i}{n}}\text{.} \] Then, for $t\in \lbrack t_{1},T]$, \[ \tilde{Y}_{t}^{n}=\tilde{Y}_{T}^{n}+\int_{t}^{T}\sum_{i=0}^{2n^{2}}\tilde {l}_{i}^{n}(B_{t_{1}})\psi \left( -n+\frac{i}{n}\right) h_{2}(\tilde{Y} _{s}^{n},\tilde{Z}_{s}^{n})d\langle B\rangle_{s}-\int_{t}^{T}\tilde{Z}_{s} ^{n}dB_{s}-(\tilde{K}_{T}^{n}-\tilde{K}_{t}^{n}). 
\] Similar to the above proof, we have $\lim_{n\rightarrow \infty}\mathbb{\hat{E} }\left[ \sup_{t\in \lbrack t_{1},T]}|\tilde{Y}_{t}^{n}-Y_{t}|^{p}\right] =0$, which implies \[ \lim_{n\rightarrow \infty}\mathbb{\hat{E}}\left[ |(\tilde{K}_{T}^{n}-\tilde {K}_{t_{1}}^{n})-(K_{T}-K_{t_{1}})|^{p}\right] =0. \] By Proposition 2.5 in \cite{HJPS1}, we know that, for $t\in \lbrack t_{1},T]$, $\mathbb{\hat{E}}_{t}\left[ \tilde{K}_{T}^{n}-\tilde{K}_{t}^{n}\right] =0$ and \[ \mathbb{\hat{E}}\left[ |\mathbb{\hat{E}}_{t}\left[ K_{T}-K_{t}\right] |\right] =\mathbb{\hat{E}}\left[ |\mathbb{\hat{E}}_{t}\left[ K_{T} -K_{t}\right] -\mathbb{\hat{E}}_{t}[\tilde{K}_{T}^{n}-\tilde{K}_{t} ^{n}]|\right] \leq \mathbb{\hat{E}}\left[ |(K_{T}-K_{t})-(\tilde{K}_{T} ^{n}-\tilde{K}_{t}^{n})|\right] , \] which implies $\mathbb{\hat{E}}_{t}\left[ K_{T}\right] =K_{t}$ by letting $n\rightarrow \infty$. Thus we obtain an $L^{p}$-solution $(Y,Z,K)$ on $[t_{1},T]$. Noting that $Y_{t_{1}}=$ $u(t_{1},B_{t_{1}},0)$, we obtain the desired result by applying Lemma \ref{le2-7} to find an $L^{p}$-solution on $[0,t_{1}]$. \end{proof} Now, we give the following existence and uniqueness result of $G$-BSDE (\ref{e2-1}). \begin{theorem} \label{th2-2}Suppose that $\xi$, $f$ and $g$ satisfy (H1) and (H2). Then $G$-BSDE (\ref{e2-1}) has a unique $L^{p}$-solution $(Y,Z,K)$ for each given $p\in(1,\bar{p})$. \end{theorem} \begin{proof} The uniqueness is due to (\ref{e2-2}) and (\ref{e2-8}) in Proposition \ref{pro2-1}. For each positive integer $n$, by partition of unity theorem, we can find $h_{i}^{n}\in C_{0}^{\infty}(\mathbb{R})$, $\tilde{h}_{j}^{n}\in C_{0}^{\infty}(\mathbb{R}^{2})$, $i\leq k_{n}$, $j\leq \tilde{k}_{n}$, such that \[ 0\leq h_{i}^{n}\leq1\text{ and }\lambda(\text{supp}(h_{i}^{n}))\leq \frac{1} {n}\text{ for }i\leq k_{n}\text{, }I_{[-n,n]}(y)\leq \sum_{i=1}^{k_{n}} h_{i}^{n}(y)\leq1, \] \[ 0\leq \tilde{h}_{j}^{n}\leq1\text{ and }\lambda(\text{supp}(\tilde{h}_{j} ^{n}))\leq \frac{1}{n}\text{ for }j\leq \tilde{k}_{n}\text{, }I_{[-n,n]\times \lbrack-n,n]}(y,z)\leq \sum_{j=1}^{\tilde{k}_{n}}\tilde{h}_{j}^{n}(y,z)\leq1. \] For each $N>0$, set $\tilde{f}(t,y)=f(t,y)-f(t,0)$, $\tilde{g} (t,y,z)=g(t,y,z)-g(t,0,0)$, \[ \tilde{f}^{N}(t,y)=(\tilde{f}(t,y)\wedge N)\vee(-N),\text{ }\tilde{g} ^{N}(t,y,z)=(\tilde{g}(t,y,z)\wedge N)\vee(-N), \] \[ f^{N}(t,y)=f(t,0)+\tilde{f}^{N}(t,y)\text{, }g^{N}(t,y,z)=g(t,0,0)+\tilde {g}^{N}(t,y,z), \] \[ f_{n}^{N}(t,y)=f(t,0)+\sum_{i=1}^{k_{n}}\tilde{f}^{N}(t,y_{i}^{n})h_{i} ^{n}(y),\text{ }g_{n}^{N}(t,y,z)=g(t,0,0)+\sum_{j=1}^{\tilde{k}_{n}}\tilde {g}^{N}(t,\tilde{y}_{j}^{n},\tilde{z}_{j}^{n})\tilde{h}_{j}^{n}(y,z), \] where $h_{i}^{n}(y_{i}^{n})>0$, $\tilde{h}_{j}^{n}(\tilde{y}_{j}^{n},\tilde {z}_{j}^{n})>0$ for $i\leq k_{n}$, $j\leq \tilde{k}_{n}$. By Proposition \ref{pro2-1} and Lemma \ref{le2-8}, we can easily deduce that $G$-BSDE \begin{equation} Y_{t}^{N,n}=\xi+\int_{t}^{T}f_{n}^{N}(s,Y_{s}^{N,n})ds+\int_{t}^{T}g_{n} ^{N}(s,Y_{s}^{N,n},Z_{s}^{N,n})d\langle B\rangle_{s}-\int_{t}^{T}Z_{s} ^{N,n}dB_{s}-(K_{T}^{N,n}-K_{t}^{N,n}) \label{e2-56} \end{equation} has a unique $L^{p}$-solution $(Y^{N,n},Z^{N,n},K^{N,n})$ for each given $p\in(1,\bar{p})$. 
Noting that
\[
f_{n}^{N}(s,Y_{s}^{N,n})=f^{N}(s,Y_{s}^{N,n})+\hat{f}_{n}^{N}(s)\text{ and }g_{n}^{N}(s,Y_{s}^{N,n},Z_{s}^{N,n})=g^{N}(s,Y_{s}^{N,n},Z_{s}^{N,n})+\hat{g}_{n}^{N}(s),
\]
where $|\hat{f}_{n}^{N}(s)|=|f_{n}^{N}(s,Y_{s}^{N,n})-f^{N}(s,Y_{s}^{N,n})|\leq(\frac{L}{n}+\frac{N}{n}|Y_{s}^{N,n}|)\wedge(2N)$, $|\hat{g}_{n}^{N}(s)|\leq \lbrack \frac{L}{n}+\frac{N}{n}(|Y_{s}^{N,n}|+|Z_{s}^{N,n}|)]\wedge(2N)$, we can easily deduce by Proposition \ref{pro2-1} that, for each given $p\in(1,\bar{p})$,
\[
\lim_{n,m\rightarrow \infty}\mathbb{\hat{E}}\left[ \sup_{t\in \lbrack 0,T]}|Y_{t}^{N,n}-Y_{t}^{N,m}|^{p}+\left( \int_{0}^{T}|Z_{t}^{N,n}-Z_{t}^{N,m}|^{2}d\langle B\rangle_{t}\right) ^{p/2}+|K_{T}^{N,n}-K_{T}^{N,m}|^{p}\right] =0,
\]
which implies that $G$-BSDE
\begin{equation}
Y_{t}^{N}=\xi+\int_{t}^{T}f^{N}(s,Y_{s}^{N})ds+\int_{t}^{T}g^{N}(s,Y_{s}^{N},Z_{s}^{N})d\langle B\rangle_{s}-\int_{t}^{T}Z_{s}^{N}dB_{s}-(K_{T}^{N}-K_{t}^{N}) \label{e2-57}
\end{equation}
has a unique $L^{p}$-solution $(Y^{N},Z^{N},K^{N})$ for each given $p\in(1,\bar{p})$. By (\ref{e2-6}), (\ref{e2-7}) in Proposition \ref{pro2-1} and Theorem \ref{th1-1}, we obtain that, for each $p\in(1,\bar{p})$,
\begin{equation}
\sup_{N>0}\mathbb{\hat{E}}\left[ \sup_{t\in \lbrack0,T]}|Y_{t}^{N}|^{p}+\left( \int_{0}^{T}|Z_{t}^{N}|^{2}d\langle B\rangle_{t}\right) ^{p/2}+|K_{T}^{N}|^{p}\right] \leq C, \label{e2-58}
\end{equation}
where the constant $C>0$ depends on $p$, $\bar{p}$, $\bar{\sigma}$, $L$ and $T$. For each fixed $p\in(1,\bar{p})$, we have
\[
|f^{N_{1}}(s,Y_{s}^{N_{1}})-f^{N_{2}}(s,Y_{s}^{N_{1}})|\leq(N_{1}\wedge N_{2})^{-\delta}|\tilde{f}(s,Y_{s}^{N_{1}})|^{1+\delta}\leq L^{1+\delta}(N_{1}\wedge N_{2})^{-\delta}|Y_{s}^{N_{1}}|^{1+\delta},
\]
\[
|g^{N_{1}}(s,Y_{s}^{N_{1}},Z_{s}^{N_{1}})-g^{N_{2}}(s,Y_{s}^{N_{1}},Z_{s}^{N_{1}})|\leq L^{1+\delta}(N_{1}\wedge N_{2})^{-\delta}(|Y_{s}^{N_{1}}|+|Z_{s}^{N_{1}}|)^{1+\delta},
\]
where $\delta=[\frac{1}{2}(\frac{\bar{p}}{p}-1)]\wedge1$. Thus, by (\ref{e2-2}), (\ref{e2-8}) in Proposition \ref{pro2-1}, (\ref{e2-58}) and Theorem \ref{th1-1}, we get
\[
\lim_{N_{1},N_{2}\rightarrow \infty}\mathbb{\hat{E}}\left[ \sup_{t\in \lbrack0,T]}|Y_{t}^{N_{1}}-Y_{t}^{N_{2}}|^{p}+\left( \int_{0}^{T}|Z_{t}^{N_{1}}-Z_{t}^{N_{2}}|^{2}d\langle B\rangle_{t}\right) ^{p/2}+|K_{T}^{N_{1}}-K_{T}^{N_{2}}|^{p}\right] =0,
\]
which implies the desired result by letting $N\rightarrow \infty$ in (\ref{e2-57}).
\end{proof}

The following example shows that $f$ cannot contain $z$ in $G$-BSDE (\ref{e2-1}).

\begin{example}
Let $B$ be a $1$-dimensional $G$-Brownian motion with $G(a):=\frac{1}{2}\bar{\sigma}^{2}a^{+}$ for $a\in \mathbb{R}$. For each $n\geq1$, we know that $((n^{-1}+\langle B\rangle_{s})^{-1/5})_{s\in \lbrack0,T]}\in H_{G}^{2,p}(0,T;\langle B\rangle)$ for each $p>1$. Since
\[
\left \vert (n^{-1}+\langle B\rangle_{s})^{-1/5}-(\langle B\rangle_{s})^{-1/5}\right \vert \leq(\langle B\rangle_{s})^{-2/5}\left \vert (n^{-1}+\langle B\rangle_{s})^{1/5}-(\langle B\rangle_{s})^{1/5}\right \vert \leq n^{-1/5}(\langle B\rangle_{s})^{-2/5},
\]
we have
\[
\int_{0}^{T}|(n^{-1}+\langle B\rangle_{s})^{-1/5}-(\langle B\rangle_{s})^{-1/5}|^{2}d\langle B\rangle_{s}\leq n^{-2/5}\int_{0}^{T}(\langle B\rangle_{s})^{-4/5}d\langle B\rangle_{s}=5n^{-2/5}(\langle B\rangle_{T})^{1/5}.
\]
Thus $((\langle B\rangle_{s})^{-1/5})_{s\in \lbrack0,T]}\in H_{G}^{2,p}(0,T;\langle B\rangle)$ for each $p>1$, which implies $\int_{0}^{T}(\langle B\rangle_{s})^{-1/5}dB_{s}\in L_{G}^{p}(\Omega_{T})$ for each $p>1$.
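For the reader's convenience, we note that the two inequalities in the first display of this example follow from elementary estimates only: for $a\geq b>0$,
\[
|a^{-1/5}-b^{-1/5}|=\frac{a^{1/5}-b^{1/5}}{(ab)^{1/5}}\leq b^{-2/5}(a^{1/5}-b^{1/5})\quad \text{and}\quad a^{1/5}-b^{1/5}\leq(a-b)^{1/5},
\]
applied with $a=n^{-1}+\langle B\rangle_{s}$ and $b=\langle B\rangle_{s}$, so that $\left \vert (n^{-1}+\langle B\rangle_{s})^{1/5}-(\langle B\rangle_{s})^{1/5}\right \vert \leq n^{-1/5}$.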
Consider the following linear $G$-BSDE:
\begin{equation}
Y_{t}=\int_{0}^{T}(\langle B\rangle_{s})^{-1/5}dB_{s}+\int_{t}^{T}Z_{s}ds-\int_{t}^{T}Z_{s}dB_{s}-(K_{T}-K_{t}). \label{e2-59}
\end{equation}
We assert that, for each given $p>1$, the above $G$-BSDE has no $L^{p}$-solution $(Y,Z,K)$. Suppose, on the contrary, that there exists an $L^{p}$-solution $(Y,Z,K)$ for some $p>1$.

For each $\varepsilon>0$, we introduce the following $\tilde{G}^{\varepsilon}$-expectation $\mathbb{\tilde{E}}^{\varepsilon}$. Set $\tilde{\Omega}_{T}=C_{0}([0,T];\mathbb{R}^{2})$ and denote the canonical process by $(B,\tilde{B})$. For each $A\in \mathbb{S}_{2}$, define
\[
\tilde{G}^{\varepsilon}\left( A\right) =\frac{1}{2}\sup_{\varepsilon^{2}\leq v\leq \bar{\sigma}^{2}}\mathrm{tr}\left[ A\left(
\begin{array}
[c]{cc}
v & 1\\
1 & \varepsilon^{-2}
\end{array}
\right) \right] .
\]
By Proposition 3.1.5 in Peng \cite{P2019}, we know that $\varepsilon \tilde{B}$ is a classical $1$-dimensional standard Brownian motion under $\mathbb{\tilde{E}}^{\varepsilon}$ and $\mathbb{\tilde{E}}^{\varepsilon}|_{Lip(\Omega_{T})}\leq \mathbb{\hat{E}}$. Thus $G$-BSDE (\ref{e2-59}) still holds under $\mathbb{\tilde{E}}^{\varepsilon}$. Similar to (\ref{new-e2-3}), we know that $\langle B,\tilde{B}\rangle_{t}=t$ and $\langle \tilde{B}\rangle_{t}=\varepsilon^{-2}t$ under $\mathbb{\tilde{E}}^{\varepsilon}$. Consider the following $G$-SDE under $\mathbb{\tilde{E}}^{\varepsilon}$:
\[
dX_{t}=X_{t}d\tilde{B}_{t}\text{, }X_{0}=1.
\]
The solution is $X_{t}=\exp(\tilde{B}_{t}-2^{-1}\varepsilon^{-2}t)>0$. Applying It\^{o}'s formula to $X_{t}Y_{t}$ on $[0,T]$ under $\mathbb{\tilde{E}}^{\varepsilon}$, we get
\[
X_{T}Y_{T}=Y_{0}+\int_{0}^{T}X_{t}Z_{t}dB_{t}+\int_{0}^{T}X_{t}Y_{t}d\tilde{B}_{t}+\int_{0}^{T}X_{t}dK_{t}.
\]
Since $\int_{0}^{T}X_{t}dK_{t}\leq0$, we obtain
\[
Y_{0}\geq \mathbb{\tilde{E}}^{\varepsilon}[X_{T}Y_{T}]=\mathbb{\tilde{E}}^{\varepsilon}\left[ X_{T}\int_{0}^{T}(\langle B\rangle_{s})^{-1/5}dB_{s}\right] \text{ for each }\varepsilon>0.
\]
Applying It\^{o}'s formula to $X_{t}\int_{0}^{t}(\langle B\rangle_{s})^{-1/5}dB_{s}$ on $[0,T]$ under $\mathbb{\tilde{E}}^{\varepsilon}$, we deduce
\[
Y_{0}\geq \mathbb{\tilde{E}}^{\varepsilon}\left[ X_{T}\int_{0}^{T}(\langle B\rangle_{s})^{-1/5}dB_{s}\right] =\mathbb{\tilde{E}}^{\varepsilon}\left[ \int_{0}^{T}X_{s}(\langle B\rangle_{s})^{-1/5}ds\right] \text{ for each }\varepsilon>0.
\]
Let $E^{\varepsilon}$ be the linear $\bar{G}^{\varepsilon}$-expectation with
\[
\bar{G}^{\varepsilon}\left( A\right) =\frac{1}{2}\mathrm{tr}\left[ A\left(
\begin{array}
[c]{cc}
\varepsilon^{2} & 1\\
1 & \varepsilon^{-2}
\end{array}
\right) \right] \text{ for }A\in \mathbb{S}_{2}.
\]
Since $\bar{G}^{\varepsilon}\leq \tilde{G}^{\varepsilon}$, we know that $E^{\varepsilon}\leq \mathbb{\tilde{E}}^{\varepsilon}$. By Proposition 3.1.5 in Peng \cite{P2019}, we know that $\varepsilon^{-1}B$ and $\varepsilon \tilde{B}$ are two classical $1$-dimensional standard Brownian motions under $E^{\varepsilon}$; in particular, $\langle B\rangle_{s}=\varepsilon^{2}s$ and $E^{\varepsilon}[X_{s}]=1$ under $E^{\varepsilon}$. Then we get
\[
Y_{0}\geq E^{\varepsilon}\left[ X_{T}\int_{0}^{T}(\langle B\rangle_{s})^{-1/5}dB_{s}\right] =E^{\varepsilon}\left[ \int_{0}^{T}X_{s}(\varepsilon^{2}s)^{-1/5}ds\right] =\int_{0}^{T}(\varepsilon^{2}s)^{-1/5}ds=\frac{5}{4}T^{4/5}\varepsilon^{-2/5}\text{ for each }\varepsilon>0,
\]
which is impossible, since the right-hand side tends to $\infty$ as $\varepsilon \downarrow0$ while $Y_{0}\in \mathbb{R}$. Thus, for each given $p>1$, $G$-BSDE (\ref{e2-59}) has no $L^{p}$-solution $(Y,Z,K)$.
\end{example}

Finally, we give the following existence and uniqueness result of $G$-BSDE (\ref{new-e2-4}).
\begin{theorem}
Suppose that $\xi$, $f$, $g_{ij}$, $g_{l}$, $i$, $j\leq d^{\prime}$, $d^{\prime}<l\leq d$, satisfy (H1) and (H2). Then $G$-BSDE (\ref{new-e2-4}) has a unique $L^{p}$-solution $(Y,Z,K)$ for each given $p\in(1,\bar{p})$.
\end{theorem}

\begin{proof}
The proof is similar to that of Theorem \ref{th2-2}, so we omit it.
\end{proof}

\section{Application to the regularity of fully nonlinear PDEs}

For simplicity of presentation, we only consider the $1$-dimensional $G$-Brownian motion with $G(a)=\frac{1}{2}\bar{\sigma}^{2}a^{+}$; the methods still hold for the $d$-dimensional $G$-Brownian motion with $G(\cdot)$ given in (\ref{new-e2-2}). For each fixed $t\in \lbrack0,T]$ and $\xi \in \cap_{p\geq 2}L_{G}^{p}(\Omega_{t})$, consider the following $G$-FBSDE:
\begin{equation}
dX_{s}^{t,\xi}=b(s,X_{s}^{t,\xi})ds+h(s,X_{s}^{t,\xi})d\langle B\rangle_{s}+\sigma(s,X_{s}^{t,\xi})dB_{s},\text{ }X_{t}^{t,\xi}=\xi,\text{ }s\in \lbrack t,T], \label{e3-1}
\end{equation}
\begin{equation}
\begin{array}
[c]{rl}
Y_{s}^{t,\xi}= & \varphi(X_{T}^{t,\xi})+\int_{s}^{T}f(r,X_{r}^{t,\xi},Y_{r}^{t,\xi})dr+\int_{s}^{T}g(r,X_{r}^{t,\xi},Y_{r}^{t,\xi},Z_{r}^{t,\xi})d\langle B\rangle_{r}\\
& -\int_{s}^{T}Z_{r}^{t,\xi}dB_{r}-(K_{T}^{t,\xi}-K_{s}^{t,\xi}),
\end{array}
\label{e3-2}
\end{equation}
where $b$, $h$, $\sigma:[0,T]\times \mathbb{R}\rightarrow \mathbb{R}$, $\varphi:\mathbb{R}\rightarrow \mathbb{R}$, $f:[0,T]\times \mathbb{R}^{2}\rightarrow \mathbb{R}$, $g:[0,T]\times \mathbb{R}^{3}\rightarrow \mathbb{R}$ satisfy the following conditions:

\begin{description}
\item[(A1)] $b$, $h$, $\sigma$, $f$, $g$ are continuous in $(s,x,y,z)$.

\item[(A2)] There exist a constant $L_{1}>0$ and a positive integer $m$ such that for any $s\in \lbrack0,T]$, $x$, $x^{\prime}$, $y$, $y^{\prime}$, $z$, $z^{\prime}\in \mathbb{R}$,
\[
\begin{array}
[c]{l}
|b(s,x)-b(s,x^{\prime})|+|h(s,x)-h(s,x^{\prime})|+|\sigma(s,x)-\sigma(s,x^{\prime})|\leq L_{1}|x-x^{\prime}|,\\
|\varphi(x)-\varphi(x^{\prime})|\leq L_{1}(1+|x|^{m}+|x^{\prime}|^{m})|x-x^{\prime}|,\\
|f(s,x,y)-f(s,x^{\prime},y^{\prime})|+|g(s,x,y,z)-g(s,x^{\prime},y^{\prime},z^{\prime})|\\
\leq L_{1}[(1+|x|^{m}+|x^{\prime}|^{m})|x-x^{\prime}|+|y-y^{\prime}|+|z-z^{\prime}|].
\end{array}
\]
\end{description}

Under the assumptions (A1) and (A2), for each $p\geq2$, SDE (\ref{e3-1}) has a unique solution $(X_{s}^{t,\xi})_{s\in \lbrack t,T]}\in S_{G}^{p}(t,T)$ and $G$-BSDE (\ref{e3-2}) has a unique $L^{p}$-solution $(Y_{s}^{t,\xi},Z_{s}^{t,\xi},K_{s}^{t,\xi})_{s\in \lbrack t,T]}$ with $K_{t}^{t,\xi}=0$. The following standard estimates for SDEs can be found in Chapter 5 of Peng \cite{P2019}.

\begin{proposition}
\label{pro3-1}Suppose that (A1) and (A2) hold. Let $\xi$, $\xi^{\prime}\in \cap_{p\geq2}L_{G}^{p}(\Omega_{t})$ with $t<T$. Then, for each $p\geq2$ and $\delta \in \lbrack0,T-t]$, we have
\[
\mathbb{\hat{E}}_{t}\left[ \left \vert X_{t+\delta}^{t,\xi}-X_{t+\delta}^{t,\xi^{\prime}}\right \vert ^{p}\right] \leq C|\xi-\xi^{\prime}|^{p}\text{ and }\mathbb{\hat{E}}_{t}\left[ \left \vert X_{t+\delta}^{t,\xi}\right \vert ^{p}\right] \leq C(1+|\xi|^{p}),
\]
where the constant $C>0$ depends on $L_{1}$, $\bar{\sigma}$, $p$ and $T$.
\end{proposition}

For $\xi=x\in \mathbb{R}$, define
\begin{equation}
u(t,x)=Y_{t}^{t,x}\text{ for }(t,x)\in \lbrack0,T]\times \mathbb{R}. \label{e3-3}
\end{equation}
Since $(B_{t+r}-B_{t})_{r\geq0}$ is still a $G$-Brownian motion, we have $Y_{t}^{t,x}\in \mathbb{R}$.

\begin{proposition}
\label{pro3-2}Suppose that (A1) and (A2) hold.
Then \begin{description} \item[(i)] For each $(t,x)\in \lbrack0,T)\times \mathbb{R}$, we have $Y_{s}^{t,x}=u(s,X_{s}^{t,x})$ for $s\in \lbrack t,T]$. \item[(ii)] $u(\cdot,\cdot)$ is the unique viscosity solution of the following fully nonlinear PDE: \[ \left \{ \begin{array} [c]{l} \partial_{t}u+G(\sigma^{2}(t,x)\partial_{xx}^{2}u+2h(t,x)\partial _{x}u+2g(t,x,u,\sigma(t,x)\partial_{x}u))\\ +b(t,x)\partial_{x}u+f(t,x,u)=0,\\ u(T,x)=\varphi(x). \end{array} \right. \] \end{description} \end{proposition} \begin{proof} The proof is the same as Theorems 4.4 and 4.5 in \cite{HJPS}, we omit it. \end{proof} In the following, we discuss the regularity properties of $u(\cdot,\cdot)$. First, we study $\partial_{x}u(t,x)$. For each $(t,x)\in \lbrack0,T)\times \mathbb{R}$ and $\Delta \in \lbrack-1,1]$, by Proposition \ref{pro3-1}, we have, for each $p\geq2$, \begin{equation} \sup_{s\in \lbrack t,T]}\mathbb{\hat{E}}\left[ \left \vert X_{s}^{t,x+\Delta }-X_{s}^{t,x}\right \vert ^{p}\right] \leq C|\Delta|^{p}\text{ and }\sup _{s\in \lbrack t,T]}\mathbb{\hat{E}}\left[ \left \vert X_{s}^{t,x}\right \vert ^{p}\right] \leq C(1+|x|^{p}), \label{e3-4} \end{equation} where $C>0$ depends on $L_{1}$, $\bar{\sigma}$, $p$ and $T$. It follows from Proposition \ref{pro2-1}, Theorem \ref{th1-1} and (\ref{e3-4}) that, for each $p\geq2$, \begin{equation} \mathbb{\hat{E}}\left[ \sup_{s\in \lbrack t,T]}\left \vert Y_{s}^{t,x+\Delta }-Y_{s}^{t,x}\right \vert ^{p}\right] \leq C(1+|x|^{mp})|\Delta|^{p}, \label{e3-5} \end{equation} where $C>0$ depends on $L_{1}$, $\bar{\sigma}$, $p$ and $T$. Let $\mathcal{P}$ be a weakly compact and convex set of probability measures on $(\Omega_{T},\mathcal{B}(\Omega_{T}))$ satisfying \[ \mathbb{\hat{E}}\left[ \xi \right] =\sup_{P\in \mathcal{P}}E_{P}[\xi]\text{ for }\xi \in L_{G}^{1}(\Omega_{T}). \] For each $(t,x)\in \lbrack0,T)\times \mathbb{R}$, set \[ \mathcal{P}_{t,x}=\{P\in \mathcal{P}:E_{P}[K_{T}^{t,x}]=0\}. \] Similar to the proof of Proposition \ref{pro2-3}, we obtain that, for each $p\geq2$, \begin{equation} E_{P}\left[ \left( \int_{t}^{T}\left \vert Z_{s}^{t,x+\Delta}-Z_{s} ^{t,x}\right \vert ^{2}d\langle B\rangle_{s}\right) ^{p/2}+\left \vert K_{T}^{t,x+\Delta}\right \vert ^{p}\right] \leq C(1+|x|^{mp})|\Delta |^{p}\text{ for }P\in \mathcal{P}_{t,x}, \label{e3-6} \end{equation} \begin{equation} E_{P^{\Delta}}\left[ \left( \int_{t}^{T}\left \vert Z_{s}^{t,x+\Delta} -Z_{s}^{t,x}\right \vert ^{2}d\langle B\rangle_{s}\right) ^{p/2}+\left \vert K_{T}^{t,x}\right \vert ^{p}\right] \leq C(1+|x|^{mp})|\Delta|^{p}\text{ for }P^{\Delta}\in \mathcal{P}_{t,x+\Delta}. \label{e3-7} \end{equation} In order to obtain $\partial_{x}u(t,x)$, we need the following assumption. \begin{description} \item[(A3)] $b_{x}^{\prime}$, $h_{x}^{\prime}$, $\sigma_{x}^{\prime}$, $\varphi^{\prime}$, $f_{x}^{\prime}$, $f_{y}^{\prime}$, $g_{x}^{\prime}$, $g_{y}^{\prime}$, $g_{z}^{\prime}$ are continuous in $(s,x,y,z)$. \end{description} \begin{remark} Under the assumptions (A2) and (A3), we can easily deduce that, for any $s\in \lbrack0,T]$, $x$, $y$, $z\in \mathbb{R}$, \[ \begin{array} [c]{l} |b_{x}^{\prime}(s,x)|+|h_{x}^{\prime}(s,x)|+|\sigma_{x}^{\prime}(s,x)|\leq L_{1}\text{, }|\varphi^{\prime}(x)|\leq L_{1}(1+2|x|^{m})\text{, } |g_{z}^{\prime}(s,x,y,z)|\leq L_{1},\\ |f_{x}^{\prime}(s,x,y)|+|g_{x}^{\prime}(s,x,y,z)|\leq L_{1}(1+2|x|^{m})\text{, }|f_{y}^{\prime}(s,x,y)|+|g_{y}^{\prime}(s,x,y,z)|\leq L_{1}. \end{array} \] \end{remark} \begin{lemma} \label{le3-3}Suppose that (A1)-(A3) hold. 
Then, for each $(t,x)\in \lbrack0,T)\times \mathbb{R}$ and $p\geq2$, we have
\begin{equation}
\lim_{\Delta \rightarrow0}\sup_{s\in \lbrack t,T]}\mathbb{\hat{E}}\left[ \left \vert \frac{X_{s}^{t,x+\Delta}-X_{s}^{t,x}}{\Delta}-\hat{X}_{s}^{t,x}\right \vert ^{p}\right] =0, \label{e3-8}
\end{equation}
where $(\hat{X}_{s}^{t,x})_{s\in \lbrack t,T]}$ is the solution of the following $G$-SDE:
\begin{equation}
d\hat{X}_{s}^{t,x}=b_{x}^{\prime}(s,X_{s}^{t,x})\hat{X}_{s}^{t,x}ds+h_{x}^{\prime}(s,X_{s}^{t,x})\hat{X}_{s}^{t,x}d\langle B\rangle_{s}+\sigma_{x}^{\prime}(s,X_{s}^{t,x})\hat{X}_{s}^{t,x}dB_{s}\text{, }\hat{X}_{t}^{t,x}=1. \label{e3-9}
\end{equation}
\end{lemma}

\begin{proof}
Set $\hat{X}_{s}^{\Delta}=X_{s}^{t,x+\Delta}-X_{s}^{t,x}$ and $\tilde{X}_{s}^{\Delta}=\hat{X}_{s}^{\Delta}-\hat{X}_{s}^{t,x}\Delta$ for $s\in \lbrack t,T]$. Then we have
\[
d\tilde{X}_{s}^{\Delta}=(b_{x}^{\prime}(s)\tilde{X}_{s}^{\Delta}+\tilde{b}(s))ds+(h_{x}^{\prime}(s)\tilde{X}_{s}^{\Delta}+\tilde{h}(s))d\langle B\rangle_{s}+(\sigma_{x}^{\prime}(s)\tilde{X}_{s}^{\Delta}+\tilde{\sigma}(s))dB_{s}\text{, }\tilde{X}_{t}^{\Delta}=0,
\]
where $b_{x}^{\prime}(s)=b_{x}^{\prime}(s,X_{s}^{t,x})$,
\begin{align*}
\tilde{b}(s) & =b(s,X_{s}^{t,x+\Delta})-b(s,X_{s}^{t,x})-b_{x}^{\prime}(s,X_{s}^{t,x})\hat{X}_{s}^{\Delta}\\
& =\hat{X}_{s}^{\Delta}\int_{0}^{1}\left[ b_{x}^{\prime}(s,X_{s}^{t,x}+\theta \hat{X}_{s}^{\Delta})-b_{x}^{\prime}(s,X_{s}^{t,x})\right] d\theta,
\end{align*}
similar for $h_{x}^{\prime}(s)$, $\tilde{h}(s)$, $\sigma_{x}^{\prime}(s)$ and $\tilde{\sigma}(s)$. By standard estimates for SDEs, we get
\begin{align*}
\sup_{s\in \lbrack t,T]}\mathbb{\hat{E}}\left[ \left \vert \tilde{X}_{s}^{\Delta}\right \vert ^{p}\right] & \leq C\mathbb{\hat{E}}\left[ \left( \int_{t}^{T}|\tilde{b}(s)|ds\right) ^{p}+\left( \int_{t}^{T}|\tilde{h}(s)|d\langle B\rangle_{s}\right) ^{p}+\left( \int_{t}^{T}|\tilde{\sigma}(s)|^{2}d\langle B\rangle_{s}\right) ^{p/2}\right] \\
& \leq C\int_{t}^{T}\mathbb{\hat{E}}[|\tilde{b}(s)|^{p}+|\tilde{h}(s)|^{p}+|\tilde{\sigma}(s)|^{p}]ds,
\end{align*}
where $C>0$ depends on $L_{1}$, $\bar{\sigma}$, $p$ and $T$. By (\ref{e3-4}) and H\"{o}lder's inequality, we obtain
\begin{equation}
\mathbb{\hat{E}}[|\tilde{b}(s)|^{p}]\leq C|\Delta|^{p}\left( \mathbb{\hat{E}}\left[ \left( \int_{0}^{1}|b_{x}^{\prime}(s,X_{s}^{t,x}+\theta \hat{X}_{s}^{\Delta})-b_{x}^{\prime}(s,X_{s}^{t,x})|d\theta \right) ^{2p}\right] \right) ^{1/2}, \label{e3-10}
\end{equation}
where $C>0$ depends on $L_{1}$, $\bar{\sigma}$, $p$ and $T$. For each $N>0$ and $\varepsilon>0$, define
\begin{equation}
\omega_{N}(\varepsilon)=\sup \{|b_{x}^{\prime}(r,x_{1})-b_{x}^{\prime}(r,x_{2})|:r\in \lbrack0,T],\text{ }|x_{1}|\leq N,\text{ }|x_{1}-x_{2}|\leq \varepsilon \}. \label{e3-11}
\end{equation}
Under assumption (A3), we know that $\omega_{N}(\varepsilon)\rightarrow0$ as $\varepsilon \downarrow0$.
Noting that \begin{equation} \begin{array} [c]{l} |b_{x}^{\prime}(s,X_{s}^{t,x}+\theta \hat{X}_{s}^{\Delta})-b_{x}^{\prime }(s,X_{s}^{t,x})|\\ \leq|b_{x}^{\prime}(s,X_{s}^{t,x}+\theta \hat{X}_{s}^{\Delta})-b_{x}^{\prime }(s,X_{s}^{t,x})|I_{\{|\hat{X}_{s}^{\Delta}|\leq \varepsilon \}}+2L_{1} I_{\{|\hat{X}_{s}^{\Delta}|>\varepsilon \}}\\ \leq|b_{x}^{\prime}(s,X_{s}^{t,x}+\theta \hat{X}_{s}^{\Delta})-b_{x}^{\prime }(s,X_{s}^{t,x})|I_{\{|\hat{X}_{s}^{\Delta}|\leq \varepsilon,|X_{s}^{t,x}|\leq N\}}+2L_{1}I_{\{|X_{s}^{t,x}|>N\}}+2L_{1}I_{\{|\hat{X}_{s}^{\Delta }|>\varepsilon \}}\\ \leq \omega_{N}(\varepsilon)+2L_{1}(|X_{s}^{t,x}|/N+|\hat{X}_{s}^{\Delta }|/\varepsilon), \end{array} \label{e3-12} \end{equation} we obtain by (\ref{e3-4}), (\ref{e3-10}) and (\ref{e3-12}) that \[ \mathbb{\hat{E}}[|\tilde{b}(s)|^{p}]\leq C|\Delta|^{p}\left( |\omega _{N}(\varepsilon)|^{p}+\frac{1+|x|^{p}}{N^{p}}+\frac{|\Delta|^{p}} {\varepsilon^{p}}\right) , \] where $C>0$ depends on $L_{1}$, $\bar{\sigma}$, $p$ and $T$. Thus \[ \underset{\Delta \rightarrow0}{\lim \sup}\frac{1}{|\Delta|^{p}}\int_{t} ^{T}\mathbb{\hat{E}}[|\tilde{b}(s)|^{p}]ds\leq C\left( |\omega_{N} (\varepsilon)|^{p}+\frac{1+|x|^{p}}{N^{p}}\right) , \] which implies $\lim_{\Delta \rightarrow0}\frac{1}{|\Delta|^{p}}\int_{t} ^{T}\mathbb{\hat{E}}[|\tilde{b}(s)|^{p}]ds=0$ by letting $\varepsilon \downarrow0$ first and then $N\rightarrow \infty$. Similarly, we can obtain \[ \lim_{\Delta \rightarrow0}\frac{1}{|\Delta|^{p}}\int_{t}^{T}\mathbb{\hat{E} }[|\tilde{h}(s)|^{p}+|\tilde{\sigma}(s)|^{p}]ds=0, \] which implies the desired result. \end{proof} \begin{theorem} \label{th3-4}Suppose that (A1)-(A3) hold. Then, for each $(t,x)\in \lbrack0,T)\times \mathbb{R}$, we have \begin{equation} \partial_{x+}u(t,x)=\sup_{P\in \mathcal{P}_{t,x}}E_{P}\left[ \varphi^{\prime }(X_{T}^{t,x})\hat{X}_{T}^{t,x}\Gamma_{T}^{t,x}+\int_{t}^{T}f_{x}^{\prime }(s)\hat{X}_{s}^{t,x}\Gamma_{s}^{t,x}ds+\int_{t}^{T}g_{x}^{\prime}(s)\hat {X}_{s}^{t,x}\Gamma_{s}^{t,x}d\langle B\rangle_{s}\right] , \label{e3-13} \end{equation} \begin{equation} \partial_{x-}u(t,x)=\inf_{P\in \mathcal{P}_{t,x}}E_{P}\left[ \varphi^{\prime }(X_{T}^{t,x})\hat{X}_{T}^{t,x}\Gamma_{T}^{t,x}+\int_{t}^{T}f_{x}^{\prime }(s)\hat{X}_{s}^{t,x}\Gamma_{s}^{t,x}ds+\int_{t}^{T}g_{x}^{\prime}(s)\hat {X}_{s}^{t,x}\Gamma_{s}^{t,x}d\langle B\rangle_{s}\right] , \label{e3-14} \end{equation} where $(\hat{X}_{s}^{t,x})_{s\in \lbrack t,T]}$ satisfies (\ref{e3-9}), $(\Gamma_{s}^{t,x})_{s\in \lbrack t,T]}$ satisfies the following $G$-SDE: \begin{equation} d\Gamma_{s}^{t,x}=f_{y}^{\prime}(s)\Gamma_{s}^{t,x}ds+g_{y}^{\prime} (s)\Gamma_{s}^{t,x}d\langle B\rangle_{s}+g_{z}^{\prime}(s)\Gamma_{s} ^{t,x}dB_{s},\text{ }\Gamma_{t}^{t,x}=1, \label{new-e3-14} \end{equation} $g_{x}^{\prime}(s)=g_{x}^{\prime}(s,X_{s}^{t,x},Y_{s}^{t,x},Z_{s}^{t,x})$, similar for $g_{y}^{\prime}(s)$, $g_{z}^{\prime}(s)$, $f_{x}^{\prime}(s)$ and $f_{y}^{\prime}(s)$. \end{theorem} \begin{proof} Set $\hat{X}_{s}^{\Delta}=$ $X_{s}^{t,x+\Delta}-X_{s}^{t,x}$, $\hat{Y} _{s}^{\Delta}=Y_{s}^{t,x+\Delta}-Y_{s}^{t,x}$, $\hat{Z}_{s}^{\Delta} =Z_{s}^{t,x+\Delta}-Z_{s}^{t,x}$ for $\Delta>0$ and $s\in \lbrack t,T]$. 
For each $P\in \mathcal{P}_{t,x}$, we have \[ \begin{array} [c]{rl} \hat{Y}_{s}^{\Delta}= & \varphi^{\prime}(X_{T}^{t,x})\hat{X}_{T}^{t,x} \Delta+\tilde{\varphi}(T)+\int_{s}^{T}[f_{x}^{\prime}(r)\hat{X}_{r} ^{t,x}\Delta+f_{y}^{\prime}(r)\hat{Y}_{r}^{\Delta}+\tilde{f}(r)]dr\\ & +\int_{s}^{T}[g_{x}^{\prime}(r)\hat{X}_{r}^{t,x}\Delta+g_{y}^{\prime} (r)\hat{Y}_{r}^{\Delta}+g_{z}^{\prime}(r)\hat{Z}_{r}^{\Delta}+\tilde {g}(r)]d\langle B\rangle_{r}-\int_{s}^{T}\hat{Z}_{r}^{\Delta}dB_{r}-\int _{s}^{T}dK_{r}^{t,x+\Delta}, \end{array} \] where $\tilde{g}(r)=g(r,X_{r}^{t,x+\Delta},Y_{r}^{t,x+\Delta},Z_{r} ^{t,x+\Delta})-g(r,X_{r}^{t,x},Y_{r}^{t,x},Z_{r}^{t,x})-g_{x}^{\prime} (r)\hat{X}_{r}^{t,x}\Delta-g_{y}^{\prime}(r)\hat{Y}_{r}^{\Delta}-g_{z} ^{\prime}(r)\hat{Z}_{r}^{\Delta}$, similar for $\tilde{\varphi}(T)$ and $\tilde{f}(r)$. Applying It\^{o}'s formula to $\hat{Y}_{s}^{\Delta}\Gamma _{s}^{t,x}$ on $[t,T]$ under $P$, we obtain \begin{equation} \begin{array} [c]{rl} \Delta^{-1}\hat{Y}_{t}^{\Delta}= & E_{P}\left[ \varphi^{\prime}(X_{T} ^{t,x})\hat{X}_{T}^{t,x}\Gamma_{T}^{t,x}+\int_{t}^{T}f_{x}^{\prime}(s)\hat {X}_{s}^{t,x}\Gamma_{s}^{t,x}ds+\int_{t}^{T}g_{x}^{\prime}(s)\hat{X}_{s} ^{t,x}\Gamma_{s}^{t,x}d\langle B\rangle_{s}\right] \\ & +\Delta^{-1}E_{P}\left[ \tilde{\varphi}(T)\Gamma_{T}^{t,x}+\int_{t} ^{T}\tilde{f}(s)\Gamma_{s}^{t,x}ds+\int_{t}^{T}\tilde{g}(s)\Gamma_{s} ^{t,x}d\langle B\rangle_{s}-\int_{t}^{T}\Gamma_{s}^{t,x}dK_{s}^{t,x+\Delta }\right] . \end{array} \label{e3-15} \end{equation} Noting that $\tilde{\varphi}(T)=\varphi^{\prime}(X_{T}^{t,x})(\hat{X} _{T}^{\Delta}-\hat{X}_{T}^{t,x}\Delta)+\hat{X}_{T}^{\Delta}\int_{0} ^{1}[\varphi^{\prime}(X_{T}^{t,x}+\theta \hat{X}_{T}^{\Delta})-\varphi^{\prime }(X_{T}^{t,x})]d\theta$, similar for $\tilde{f}(s)$ and $\tilde{g}(s)$, by (\ref{e3-4}), (\ref{e3-5}), (\ref{e3-6}), (\ref{e3-8}) and using the method in (\ref{e3-12}), we get \begin{equation} \lim_{\Delta \downarrow0}\Delta^{-1}E_{P}\left[ \tilde{\varphi}(T)\Gamma _{T}^{t,x}+\int_{t}^{T}\tilde{f}(s)\Gamma_{s}^{t,x}ds+\int_{t}^{T}\tilde {g}(s)\Gamma_{s}^{t,x}d\langle B\rangle_{s}\right] =0. \label{e3-16} \end{equation} Since $\Delta>0$, $\Gamma_{s}^{t,x}\geq0$ and $dK_{s}^{t,x+\Delta}\leq0$, we deduce by (\ref{e3-15}) and (\ref{e3-16}) that \begin{equation} \underset{\Delta \downarrow0}{\lim \inf}\frac{\hat{Y}_{t}^{\Delta}}{\Delta} \geq \sup_{P\in \mathcal{P}_{t,x}}E_{P}\left[ \varphi^{\prime}(X_{T}^{t,x} )\hat{X}_{T}^{t,x}\Gamma_{T}^{t,x}+\int_{t}^{T}f_{x}^{\prime}(s)\hat{X} _{s}^{t,x}\Gamma_{s}^{t,x}ds+\int_{t}^{T}g_{x}^{\prime}(s)\hat{X}_{s} ^{t,x}\Gamma_{s}^{t,x}d\langle B\rangle_{s}\right] . \label{e3-17} \end{equation} For each $P^{\Delta}\in \mathcal{P}_{t,x+\Delta}$ for $\Delta>0$, similar to (\ref{e3-15}), we have \begin{equation} \begin{array} [c]{rl} \Delta^{-1}\hat{Y}_{t}^{\Delta}= & E_{P^{\Delta}}\left[ \varphi^{\prime }(X_{T}^{t,x})\hat{X}_{T}^{t,x}\Gamma_{T}^{t,x}+\int_{t}^{T}f_{x}^{\prime }(s)\hat{X}_{s}^{t,x}\Gamma_{s}^{t,x}ds+\int_{t}^{T}g_{x}^{\prime}(s)\hat {X}_{s}^{t,x}\Gamma_{s}^{t,x}d\langle B\rangle_{s}\right] \\ & +\Delta^{-1}E_{P^{\Delta}}\left[ \tilde{\varphi}(T)\Gamma_{T}^{t,x} +\int_{t}^{T}\tilde{f}(s)\Gamma_{s}^{t,x}ds+\int_{t}^{T}\tilde{g}(s)\Gamma _{s}^{t,x}d\langle B\rangle_{s}+\int_{t}^{T}\Gamma_{s}^{t,x}dK_{s} ^{t,x}\right] . 
\end{array} \label{e3-18} \end{equation} Similar to (\ref{e3-16}), we get \begin{equation} \lim_{\Delta \downarrow0}\Delta^{-1}E_{P^{\Delta}}\left[ \tilde{\varphi }(T)\Gamma_{T}^{t,x}+\int_{t}^{T}\tilde{f}(s)\Gamma_{s}^{t,x}ds+\int_{t} ^{T}\tilde{g}(s)\Gamma_{s}^{t,x}d\langle B\rangle_{s}\right] =0. \label{e3-19} \end{equation} Since $\mathcal{P}$ is weakly compact, for any sequence $\Delta_{j} \downarrow0$, we can find a subsequence $\Delta_{i}\downarrow0$ such that $P^{\Delta_{i}}$ converges weakly to $P^{\ast}\in \mathcal{P}$. By Proposition \ref{pro2-1} and (\ref{e3-5}), we have $\mathbb{\hat{E}}\left[ |K_{T} ^{t,x+\Delta}-K_{T}^{t,x}|\right] \rightarrow0$ as $\Delta \downarrow0$. Due to \[ |E_{P^{\ast}}[K_{T}^{t,x}]|=|E_{P^{\ast}}[K_{T}^{t,x}]-E_{P^{\Delta_{i}} }[K_{T}^{t,x+\Delta_{i}}]|\leq|E_{P^{\ast}}[K_{T}^{t,x}]-E_{P^{\Delta_{i}} }[K_{T}^{t,x}]|+\mathbb{\hat{E}}\left[ |K_{T}^{t,x+\Delta_{i}}-K_{T} ^{t,x}|\right] \] and $E_{P^{\Delta_{i}}}[K_{T}^{t,x}]\rightarrow E_{P^{\ast}}[K_{T}^{t,x}]$ as $\Delta_{i}\downarrow0$, we get $E_{P^{\ast}}[K_{T}^{t,x}]=0$, which implies $P^{\ast}\in \mathcal{P}_{t,x}$. Noting that $\int_{t}^{T}\Gamma_{s} ^{t,x}dK_{s}^{t,x}\leq0$ and \[ \varphi^{\prime}(X_{T}^{t,x})\hat{X}_{T}^{t,x}\Gamma_{T}^{t,x}+\int_{t} ^{T}f_{x}^{\prime}(s)\hat{X}_{s}^{t,x}\Gamma_{s}^{t,x}ds+\int_{t}^{T} g_{x}^{\prime}(s)\hat{X}_{s}^{t,x}\Gamma_{s}^{t,x}d\langle B\rangle_{s}\in L_{G}^{1}(\Omega_{T}), \] we deduce by (\ref{e3-18}) and (\ref{e3-19}) that \begin{equation} \underset{\Delta \downarrow0}{\lim \sup}\frac{\hat{Y}_{t}^{\Delta}}{\Delta} \leq \sup_{P\in \mathcal{P}_{t,x}}E_{P}\left[ \varphi^{\prime}(X_{T}^{t,x} )\hat{X}_{T}^{t,x}\Gamma_{T}^{t,x}+\int_{t}^{T}f_{x}^{\prime}(s)\hat{X} _{s}^{t,x}\Gamma_{s}^{t,x}ds+\int_{t}^{T}g_{x}^{\prime}(s)\hat{X}_{s} ^{t,x}\Gamma_{s}^{t,x}d\langle B\rangle_{s}\right] . \label{e3-20} \end{equation} Thus we obtain (\ref{e3-13}) by (\ref{e3-17}) and (\ref{e3-20}). Similarly, we can get (\ref{e3-14}). \end{proof} Now we study $\partial_{t}u(t,x)$. For each $(t,x)\in(0,T)\times \mathbb{R}$ and $|\Delta|<t\wedge(T-t)$, noting that \[ \sqrt{\frac{T-t}{T-t-\Delta}}\left( B_{\frac{T-t-\Delta}{T-t}(s-t)+t+\Delta }-B_{t+\Delta}\right) _{s\in \lbrack t,T]} \] and $(B_{s}-B_{t})_{s\in \lbrack t,T]}$ have the same distribution, we obtain $u(t+\Delta,x)=\bar{Y}_{t}^{t,x,\Delta}$, where $(\bar{X}^{t,x,\Delta}$, $\bar{Y}^{t,x,\Delta}$, $\bar{Z}^{t,x,\Delta}$, $\bar{K}^{t,x,\Delta})$ satisfies the following $G$-FBSDE: \[ \begin{array} [c]{rl} \bar{X}_{s}^{t,x,\Delta}= & x+\frac{T-t-\Delta}{T-t}\left[ \int_{t} ^{s}b(r+\frac{T-r}{T-t}\Delta,\bar{X}_{r}^{t,x,\Delta})dr+\int_{t} ^{s}h(r+\frac{T-r}{T-t}\Delta,\bar{X}_{r}^{t,x,\Delta})d\langle B\rangle _{r}\right] \\ & +\sqrt{\frac{T-t-\Delta}{T-t}}\int_{t}^{s}\sigma(r+\frac{T-r}{T-t} \Delta,\bar{X}_{r}^{t,x,\Delta})dB_{r}, \end{array} \] \[ \begin{array} [c]{rl} \bar{Y}_{s}^{t,x,\Delta}= & \varphi(\bar{X}_{T}^{t,x,\Delta})+\frac {T-t-\Delta}{T-t}\int_{s}^{T}g(r+\frac{T-r}{T-t}\Delta,\bar{X}_{r} ^{t,x,\Delta},\bar{Y}_{r}^{t,x,\Delta},\sqrt{\frac{T-t}{T-t-\Delta}}\bar {Z}_{r}^{t,x,\Delta})d\langle B\rangle_{r}\\ & +\frac{T-t-\Delta}{T-t}\int_{s}^{T}f(r+\frac{T-r}{T-t}\Delta,\bar{X} _{r}^{t,x,\Delta},\bar{Y}_{r}^{t,x,\Delta})dr-\int_{s}^{T}\bar{Z} _{r}^{t,x,\Delta}dB_{r}-(\bar{K}_{T}^{t,x,\Delta}-\bar{K}_{s}^{t,x,\Delta}). \end{array} \] In order to obtain $\partial_{t}u(t,x)$, we need the following assumption. 
\begin{description} \item[(A4)] $b_{t}^{\prime}$, $h_{t}^{\prime}$, $\sigma_{t}^{\prime}$, $f_{t}^{\prime}$, $g_{t}^{\prime}$ are continuous in $(s,x,y,z)$, and there exist a constant $L_{2}>0$ and a positive integer $m_{1}$ such that for any $s\in \lbrack0,T]$, $x$, $y$, $z\in \mathbb{R}$, \[ |b_{t}^{\prime}(s,x)|+|h_{t}^{\prime}(s,x)|+|\sigma_{t}^{\prime} (s,x)|+|f_{t}^{\prime}(s,x,y)|+|g_{t}^{\prime}(s,x,y,z)|\leq L_{2} (1+|x|^{m_{1}}+|y|^{m_{1}}+|z|^{2}). \] \end{description} \begin{lemma} \label{le3-5}Suppose that (A1)-(A4) hold. Then, for each $(t,x)\in (0,T)\times \mathbb{R}$ and $p\geq2$, we have \[ \lim_{\Delta \rightarrow0}\sup_{s\in \lbrack t,T]}\mathbb{\hat{E}}\left[ \left \vert \frac{\bar{X}_{s}^{t,x,\Delta}-X_{s}^{t,x}}{\Delta}-\bar{X} _{s}^{t,x}\right \vert ^{p}\right] =0, \] where $(\bar{X}_{s}^{t,x})_{s\in \lbrack t,T]}$ is the solution of the following $G$-SDE: \begin{equation} \begin{array} [c]{rl} \bar{X}_{s}^{t,x}= & \int_{t}^{s}\left[ b_{x}^{\prime}(r,X_{r}^{t,x})\bar {X}_{r}^{t,x}+\frac{T-r}{T-t}b_{t}^{\prime}(r,X_{r}^{t,x})-\frac{1} {T-t}b(r,X_{r}^{t,x})\right] dr\\ & +\int_{t}^{s}\left[ h_{x}^{\prime}(r,X_{r}^{t,x})\bar{X}_{r}^{t,x} +\frac{T-r}{T-t}h_{t}^{\prime}(r,X_{r}^{t,x})-\frac{1}{T-t}h(r,X_{r} ^{t,x})\right] d\langle B\rangle_{r}\\ & +\int_{t}^{s}\left[ \sigma_{x}^{\prime}(r,X_{r}^{t,x})\bar{X}_{r} ^{t,x}+\frac{T-r}{T-t}\sigma_{t}^{\prime}(r,X_{r}^{t,x})-\frac{1} {2(T-t)}\sigma(r,X_{r}^{t,x})\right] dB_{r}. \end{array} \label{e3-21} \end{equation} \end{lemma} \begin{proof} The proof is similar to Lemma \ref{le3-3}, we omit it. \end{proof} \begin{theorem} \label{th3-6}Suppose that (A1)-(A4) hold. Then, for each $(t,x)\in (0,T)\times \mathbb{R}$, we have \begin{align*} \partial_{t+}u(t,x) & =\sup_{P\in \mathcal{P}_{t,x}}E_{P}\left[ \varphi^{\prime}(X_{T}^{t,x})\bar{X}_{T}^{t,x}\Gamma_{T}^{t,x}+\int_{t} ^{T}\left( f_{x}^{\prime}(s)\bar{X}_{s}^{t,x}+\frac{T-s}{T-t}f_{t}^{\prime }(s)-\frac{1}{T-t}f(s)\right) \Gamma_{s}^{t,x}ds\right. \\ & \left. \text{ \ \ \ \ }+\int_{t}^{T}\left( \frac{g_{z}^{\prime} (s)Z_{s}^{t,x}}{2(T-t)}+g_{x}^{\prime}(s)\bar{X}_{s}^{t,x}+\frac{T-s} {T-t}g_{t}^{\prime}(s)-\frac{1}{T-t}g(s)\right) \Gamma_{s}^{t,x}d\langle B\rangle_{s}\right] , \end{align*} \begin{align*} \partial_{t-}u(t,x) & =\inf_{P\in \mathcal{P}_{t,x}}E_{P}\left[ \varphi^{\prime}(X_{T}^{t,x})\bar{X}_{T}^{t,x}\Gamma_{T}^{t,x}+\int_{t} ^{T}\left( f_{x}^{\prime}(s)\bar{X}_{s}^{t,x}+\frac{T-s}{T-t}f_{t}^{\prime }(s)-\frac{1}{T-t}f(s)\right) \Gamma_{s}^{t,x}ds\right. \\ & \left. \text{ \ \ \ \ }+\int_{t}^{T}\left( \frac{g_{z}^{\prime} (s)Z_{s}^{t,x}}{2(T-t)}+g_{x}^{\prime}(s)\bar{X}_{s}^{t,x}+\frac{T-s} {T-t}g_{t}^{\prime}(s)-\frac{1}{T-t}g(s)\right) \Gamma_{s}^{t,x}d\langle B\rangle_{s}\right] , \end{align*} where $(\bar{X}_{s}^{t,x})_{s\in \lbrack t,T]}$ satisfies (\ref{e3-21}), $(\Gamma_{s}^{t,x})_{s\in \lbrack t,T]}$ satisfies (\ref{new-e3-14}), $f_{t}^{\prime}(s)=f_{t}^{\prime}(s,X_{s}^{t,x},Y_{s}^{t,x})$, similar for $f(s)$, $f_{x}^{\prime}(s)$, $g(s)$, $g_{x}^{\prime}(s)$, $g_{z}^{\prime}(s)$ and $g_{t}^{\prime}(s)$. \end{theorem} \begin{proof} The proof is similar to Theorem \ref{th3-4}, we omit it. \end{proof} The following theorem gives the condition for $\partial_{x+}u(t,x)=\partial _{x-}u(t,x)$. \begin{theorem} \label{th3-7}Suppose that (A1)-(A4) hold. If $\sigma(t,x)\not =0$ for some $(t,x)\in(0,T)\times \mathbb{R}$, then $\partial_{x+}u(t,x)=\partial _{x-}u(t,x)$. 
\end{theorem} \begin{proof} We first sketch the properties of $u$, which is the same as in the proof of Theorem 4.5 in \cite{HJPS}. By Propositions \ref{pro2-1} and \ref{pro3-1}, we can get that, for $s\in \lbrack0,T]$, $x_{1}$, $x_{2}\in \mathbb{R}$, $p\geq2$, \begin{equation} |u(s,x_{1})-u(s,x_{2})|\leq C(1+|x_{1}|^{m}+|x_{2}|^{m})|x_{1}-x_{2}|\text{, }|u(s,x_{1})|\leq C(1+|x_{1}|^{m+1}), \label{e3-23} \end{equation} \begin{equation} \mathbb{\hat{E}}\left[ \sup_{s\leq r\leq T}|Y_{r}^{s,x_{1}}|^{p}+\left( \int_{s}^{T}|Z_{r}^{s,x_{1}}|^{2}d\langle B\rangle_{r}\right) ^{p/2} +|K_{T}^{s,x_{1}}|^{p}\right] \leq C(1+|x_{1}|^{(m+1)p}), \label{e3-24} \end{equation} where $C>0$ depends on $L_{1}$, $\bar{\sigma}$, $p$ and $T$. For each $0\leq t_{1}<t_{2}\leq T$ and $x_{1}\in \mathbb{R}$, by (i) of Proposition \ref{pro3-2}, we know \begin{equation} u(t_{1},x_{1})=\mathbb{\hat{E}}\left[ u(t_{2},X_{t_{2}}^{t_{1},x_{1}} )+\int_{t_{1}}^{t_{2}}f(r,X_{r}^{t_{1},x_{1}},Y_{r}^{t_{1},x_{1}} )dr+\int_{t_{1}}^{t_{2}}g(r,X_{r}^{t_{1},x_{1}},Y_{r}^{t_{1},x_{1}} ,Z_{r}^{t_{1},x_{1}})d\langle B\rangle_{r}\right] . \label{e3-22} \end{equation} It follows from (\ref{e3-4}), (\ref{e3-23}), (\ref{e3-24}), (\ref{e3-22}) and H\"{o}lder's inequality, we obtain \begin{equation} |u(t_{1},x_{1})-u(t_{2},x_{1})|\leq C(1+|x_{1}|^{m+1})\sqrt{t_{2}-t_{1}}, \label{new-e3-25} \end{equation} where $C>0$ depends on $L_{1}$, $\bar{\sigma}$ and $T$. We then take $t_{1}=t-\delta$ with $\delta \in(0,t)$, $t_{2}=t$ and $x_{1}=x$ in (\ref{e3-22}). By Theorem \ref{th3-6}, we know that \begin{equation} \lim_{\delta \downarrow0}\delta^{-1}(u(t-\delta,x)-u(t,x))=-\partial _{t-}u(t,x)\in \mathbb{R}. \label{e3-25} \end{equation} In the following, we will prove that \begin{equation} \mathbb{\hat{E}}\left[ |u(t,X_{t}^{t-\delta,x})-u(t,x+\sigma(t,x)(B_{t} -B_{t-\delta}))|\right] \leq C\delta, \label{e3-26} \end{equation} \begin{equation} \mathbb{\hat{E}}\left[ \int_{t-\delta}^{t}|f(r,X_{r}^{t-\delta,x} ,Y_{r}^{t-\delta,x})|dr+\int_{t-\delta}^{t}|g(r,X_{r}^{t-\delta,x} ,Y_{r}^{t-\delta,x},Z_{r}^{t-\delta,x})|d\langle B\rangle_{r}\right] \leq C\delta, \label{e3-27} \end{equation} \begin{equation} \lim_{\delta \downarrow0}\delta^{-1}\mathbb{\hat{E}}\left[ u(t,x+\sigma (t,x)(B_{t}-B_{t-\delta}))-u(t,x)\right] =\infty \text{ if }\partial _{x+}u(t,x)>\partial_{x-}u(t,x), \label{e3-28} \end{equation} where the constant $C>0$ depends on $x$, $L_{1}$, $L_{2}$, $m$, $m_{1}$, $\bar{\sigma}$ and $T$. If (\ref{e3-26}), (\ref{e3-27}) and (\ref{e3-28}) hold, we can get $\partial_{x+}u(t,x)=\partial_{x-}u(t,x)$ by (\ref{e3-25}). Noting that \[ \mathbb{\hat{E}}\left[ \int_{t-\delta}^{t}|\sigma(r,X_{r}^{t-\delta ,x})-\sigma(t,x)|^{2}d\langle B\rangle_{r}\right] \leq C\int_{t-\delta} ^{t}\mathbb{\hat{E}}[|X_{r}^{t-\delta,x}-x|^{2}]dr+C\delta^{3}\leq C\delta ^{2}, \] we get (\ref{e3-26}) by (\ref{e3-23}). By (i) of Proposition \ref{pro3-2}, we know $Y_{r}^{t-\delta,x}=u(r,X_{r}^{t-\delta,x})$. Then we get \[ \begin{array} [c]{rl} Y_{s}^{t-\delta,x}-u(t,x)= & u(t,X_{t}^{t-\delta,x})-u(t,x)+\int_{s} ^{t}g(r,X_{r}^{t-\delta,x},u(r,X_{r}^{t-\delta,x}),Z_{r}^{t-\delta,x})d\langle B\rangle_{r}\\ & +\int_{s}^{t}f(r,X_{r}^{t-\delta,x},u(r,X_{r}^{t-\delta,x}))dr-\int_{s} ^{t}Z_{r}^{t-\delta,x}dB_{r}-(K_{t}^{t-\delta,x}-K_{s}^{t-\delta,x}). 
\end{array} \] By (\ref{e2-7}) in Proposition \ref{pro2-1}, (\ref{e3-23}) and (\ref{new-e3-25}), we obtain \begin{align*} \mathbb{\hat{E}}\left[ \int_{t-\delta}^{t}|Z_{r}^{t-\delta,x}|^{2}d\langle B\rangle_{r}\right] & \leq C\mathbb{\hat{E}}\left[ \sup_{s\in \lbrack t-\delta,t]}|u(s,X_{s}^{t-\delta,x})-u(t,x)|^{2}\right] +C\delta^{2}\\ & \leq C\mathbb{\hat{E}}\left[ \sup_{s\in \lbrack t-\delta,t]}|u(s,X_{s} ^{t-\delta,x})-u(s,x)|^{2}\right] +C\delta \\ & \leq C\delta. \end{align*} Then we can easily get (\ref{e3-27}) by H\"{o}lder's inequality. Now we prove (\ref{e3-28}). Set $\xi_{\delta}=\sigma(t,x)(B_{t}-B_{t-\delta} )$, we have \begin{align*} \frac{u(t,x+\xi_{\delta})-u(t,x)}{\delta}= & \frac{[u(t,x+\xi_{\delta })-u(t,x)-\partial_{x+}u(t,x)\xi_{\delta}]I_{\{ \xi_{\delta}>0\}} +\partial_{x+}u(t,x)\xi_{\delta}^{+}}{\delta}\\ & +\frac{[u(t,x+\xi_{\delta})-u(t,x)-\partial_{x-}u(t,x)\xi_{\delta}]I_{\{ \xi_{\delta}<0\}}-\partial_{x-}u(t,x)\xi_{\delta}^{-}}{\delta}. \end{align*} If $\partial_{x+}u(t,x)>\partial_{x-}u(t,x)$, then there exists an $l>0$ such that \[ |u(t,x+x^{\prime})-u(t,x)-\partial_{x+}u(t,x)x^{\prime}|\leq \frac{\gamma} {4}x^{\prime}\text{ for }x^{\prime}\in \lbrack0,l], \] \[ |u(t,x+x^{\prime})-u(t,x)-\partial_{x-}u(t,x)x^{\prime}|\leq-\frac{\gamma} {4}x^{\prime}\text{ for }x^{\prime}\in \lbrack-l,0], \] where $\gamma=\partial_{x+}u(t,x)-\partial_{x-}u(t,x)$. Then, by (\ref{e3-23}), we obtain \begin{align*} & \frac{|u(t,x+\xi_{\delta})-u(t,x)-\partial_{x+}u(t,x)\xi_{\delta}|I_{\{ \xi_{\delta}>0\}}}{\delta}\\ & \leq C(1+|\xi_{\delta}|^{m})\frac{|\xi_{\delta}|}{\delta}I_{\{ \xi_{\delta }>l\}}+\frac{\gamma}{4}\frac{\xi_{\delta}}{\delta}I_{\{0<\xi_{\delta}\leq l\}}\\ & \leq C(1+|\xi_{\delta}|^{m})\frac{|\xi_{\delta}|^{3}}{\delta l^{2}} +\frac{\gamma}{4}\frac{\xi_{\delta}^{+}}{\delta}, \end{align*} where the constant $C>0$ depends on $x$, $L_{1}$, $m$, $\bar{\sigma}$ and $T$. Similarly, we have \[ \frac{|u(t,x+\xi_{\delta})-u(t,x)-\partial_{x-}u(t,x)\xi_{\delta}|I_{\{ \xi_{\delta}<0\}}}{\delta}\leq C(1+|\xi_{\delta}|^{m})\frac{|\xi_{\delta} |^{3}}{\delta l^{2}}+\frac{\gamma}{4}\frac{\xi_{\delta}^{-}}{\delta}. \] Noting that $\partial_{x+}u(t,x)\xi_{\delta}^{+}-\partial_{x-}u(t,x)\xi _{\delta}^{-}=\frac{\gamma}{2}|\xi_{\delta}|+\frac{1}{2}[\gamma+2\partial _{x-}u(t,x)]\xi_{\delta}$ we get \[ \frac{u(t,x+\xi_{\delta})-u(t,x)}{\delta}\geq \frac{\gamma}{4}\frac {|\xi_{\delta}|}{\delta}+\frac{1}{2}[\gamma+2\partial_{x-}u(t,x)]\frac {\xi_{\delta}}{\delta}-2C(1+|\xi_{\delta}|^{m})\frac{|\xi_{\delta}|^{3} }{\delta l^{2}}. \] Since $\mathbb{\hat{E}}[\xi_{\delta}]=\mathbb{\hat{E}}[-\xi_{\delta}]=0$, $\mathbb{\hat{E}}[|\xi_{\delta}|]=|\sigma(t,x)|\mathbb{\hat{E}}[|B_{1} |]\sqrt{\delta}$, $\mathbb{\hat{E}}[|\xi_{\delta}|^{6}]=|\sigma(t,x)|^{6} \mathbb{\hat{E}}[|B_{1}|^{6}]\delta^{3}$ and \[ \delta^{-1}\mathbb{\hat{E}}\left[ u(t,x+\xi_{\delta})-u(t,x)\right] \geq \delta^{-1}\left( \frac{\gamma}{4}\mathbb{\hat{E}}\left[ |\xi_{\delta }|\right] -\frac{2C}{l^{2}}\sqrt{\mathbb{\hat{E}}[(1+|\xi_{\delta}|^{m} )^{2}]\mathbb{\hat{E}}[|\xi_{\delta}|^{6}]}\right) , \] we obtain (\ref{e3-28}). The proof is completed. \end{proof} Finally, we study $\partial_{xx}^{2}u(t,x)$. We need the following assumption. 
\begin{description} \item[(A5)] $b_{xx}^{\prime \prime}$, $h_{xx}^{\prime \prime}$, $\sigma _{xx}^{\prime \prime}$, $f_{xx}^{\prime \prime}$, $f_{xy}^{\prime \prime}$, $f_{yy}^{\prime \prime}$, $g_{xx}^{\prime \prime}$, $g_{xy}^{\prime \prime}$, $g_{xz}^{\prime \prime}$, $g_{yy}^{\prime \prime}$, $g_{yz}^{\prime \prime}$, $g_{zz}^{\prime \prime}$ are continuous in $(s,x,y,z)$ and bounded by a constant $L_{3}>0$. \end{description} \begin{theorem} Suppose that (A1)-(A3) and (A5) hold. Then, for each $(t,x)\in \lbrack 0,T)\times \mathbb{R}$, we have \begin{equation} \Delta^{-1}\left[ \partial_{x-}u(t,x+\Delta)-\partial_{x+}u(t,x)\right] \geq-C(1+|x|^{2m})\text{ for }\Delta \in(0,1], \label{e3-29} \end{equation} \begin{equation} \Delta^{-1}\left[ \partial_{x+}u(t,x+\Delta)-\partial_{x-}u(t,x)\right] \geq-C(1+|x|^{2m})\text{ for }\Delta \in \lbrack-1,0), \label{e3-30} \end{equation} where the constant $C>0$ depends on $L_{1}$, $L_{3}$, $\bar{\sigma}$ and $T$. \end{theorem} \begin{proof} By the definition of $\mathcal{P}_{t,x}$, it is easy to verify that $\mathcal{P}_{t,x}$ is weakly compact. Then we can choose a $P\in \mathcal{P}_{t,x}$ such that \[ \partial_{x+}u(t,x)=E_{P}\left[ \varphi^{\prime}(X_{T}^{t,x})\hat{X} _{T}^{t,x}\Gamma_{T}^{t,x}+\int_{t}^{T}f_{x}^{\prime}(s)\hat{X}_{s} ^{t,x}\Gamma_{s}^{t,x}ds+\int_{t}^{T}g_{x}^{\prime}(s)\hat{X}_{s}^{t,x} \Gamma_{s}^{t,x}d\langle B\rangle_{s}\right] \] in (\ref{e3-13}). Using the same notations as in the proof of Theorem \ref{th3-4}, for $\Delta \in(0,1]$, we get by (\ref{e3-15}) that \[ \hat{Y}_{t}^{\Delta}\geq \Delta \partial_{x+}u(t,x)+E_{P}\left[ \tilde{\varphi }(T)\Gamma_{T}^{t,x}+\int_{t}^{T}\tilde{f}(s)\Gamma_{s}^{t,x}ds+\int_{t} ^{T}\tilde{g}(s)\Gamma_{s}^{t,x}d\langle B\rangle_{s}\right] . \] Under the assumption (A5), it is easy to check that \[ |\tilde{\varphi}(T)|\leq C(1+|X_{T}^{t,x}|^{m})|\hat{X}_{T}^{\Delta}-\hat {X}_{T}^{t,x}\Delta|+C|\hat{X}_{T}^{\Delta}|^{2}\text{, }|\tilde{f}(s)|\leq C(1+|X_{s}^{t,x}|^{m})|\hat{X}_{s}^{\Delta}-\hat{X}_{s}^{t,x}\Delta |+C(|\hat{X}_{s}^{\Delta}|^{2}+|\hat{Y}_{s}^{\Delta}|^{2}), \] \[ |\tilde{g}(s)|\leq C(1+|X_{s}^{t,x}|^{m})|\hat{X}_{s}^{\Delta}-\hat{X} _{s}^{t,x}\Delta|+C(|\hat{X}_{s}^{\Delta}|^{2}+|\hat{Y}_{s}^{\Delta} |^{2}+|\hat{Z}_{s}^{\Delta}|^{2}), \] where $C>0$ depends on $L_{1}$ and $L_{3}$. We can also get \[ \sup_{s\in \lbrack t,T]}\mathbb{\hat{E}}\left[ \left \vert \hat{X}_{s}^{\Delta }-\hat{X}_{s}^{t,x}\Delta \right \vert ^{p}\right] \leq C\Delta^{2p} \] for $p\geq2$ in the proof of Lemma \ref{le3-3} by $|\tilde{b}(s)|+|\tilde {h}(s)|+|\tilde{\sigma}(s)|\leq C|\hat{X}_{s}^{\Delta}|^{2}$, where $C>0$ depends on $L_{1}$, $L_{3}$, $\bar{\sigma}$, $p$ and $T$. It follows from (\ref{e3-4}), (\ref{e3-5}) and (\ref{e3-6}) that \begin{equation} \left \vert E_{P}\left[ \tilde{\varphi}(T)\Gamma_{T}^{t,x}+\int_{t}^{T} \tilde{f}(s)\Gamma_{s}^{t,x}ds+\int_{t}^{T}\tilde{g}(s)\Gamma_{s} ^{t,x}d\langle B\rangle_{s}\right] \right \vert \leq C(1+|x|^{2m})\Delta^{2}, \label{e3-32} \end{equation} where $C>0$ depends on $L_{1}$, $L_{3}$, $\bar{\sigma}$ and $T$. Thus \begin{equation} \hat{Y}_{t}^{\Delta}\geq \Delta \partial_{x+}u(t,x)-C(1+|x|^{2m})\Delta^{2}. 
\label{e3-31} \end{equation} We can also choose a $P^{\Delta}\in \mathcal{P}_{t,x+\Delta}$ such that \begin{align*} \partial_{x-}u(t,x+\Delta)= & E_{P^{\Delta}}\left[ \varphi^{\prime} (X_{T}^{t,x+\Delta})\hat{X}_{T}^{t,x+\Delta}\Gamma_{T}^{t,x+\Delta}+\int _{t}^{T}f_{x}^{\prime}(s,X_{s}^{t,x+\Delta},Y_{s}^{t,x+\Delta})\hat{X} _{s}^{t,x+\Delta}\Gamma_{s}^{t,x+\Delta}ds\right. \\ & \left. +\int_{t}^{T}g_{x}^{\prime}(s,X_{s}^{t,x+\Delta},Y_{s}^{t,x+\Delta },Z_{s}^{t,x+\Delta})\hat{X}_{s}^{t,x+\Delta}\Gamma_{s}^{t,x+\Delta}d\langle B\rangle_{s}\right] . \end{align*} Applying It\^{o}'s formula to $\hat{Y}_{s}^{\Delta}\Gamma_{s}^{t,x+\Delta}$ on $[t,T]$ under $P^{\Delta}$, similar to (\ref{e3-15}) and the analysis of (\ref{e3-32}), we can get \begin{equation} \hat{Y}_{t}^{\Delta}\leq \Delta \partial_{x-}u(t,x+\Delta)+C(1+|x|^{2m} )\Delta^{2}, \label{e3-33} \end{equation} where $C>0$ depends on $L_{1}$, $L_{3}$, $\bar{\sigma}$ and $T$. Then we obtain (\ref{e3-29}) by (\ref{e3-31}) and (\ref{e3-33}). Similarly, we can deduce (\ref{e3-30}). \end{proof} \begin{remark} We can get similar estimates under the assumption \[ |b_{xx}^{\prime \prime}(s,x)|\leq L_{4}(1+|x|^{m_{2}}) \] for positive constant $L_{4}$ and positive integer $m_{2}$, similar for the second derivatives of $h$, $\sigma$, $f$ and $g$. \end{remark} \end{document}
arXiv
Influence of boundary conditions on acoustic emission propagation characteristics of Zelkova schneideriana Yue Zhao1, Ming Li2,3, Saiyin Fang1, Shaochun Zhang1, Changlin Huang1, Tingting Deng1, Feilong Mao1, Gezhou Qin1 & Daigen Zhu1 To study the propagation characteristics of acoustic emission signals in Zelkova schneideriana under different boundary conditions, three types of boundary conditions were generated by applying aluminum plates and sound-absorbing cotton to the surface of Zelkova schneideriana specimens. Firstly, sudden and continuous acoustic emission (AE) sources were simulated by PLB (pencil–lead break) tests and a signal generator on the specimen surface, the AE signals were collected by 5 sensors equally spaced on the surface of the specimen, and the sampling frequency was set to 500 kHz. Then, the detail signals of different frequency bands were obtained by wavelet decomposition, and TDOA (the time difference of arrival) and the correlation analysis method were used to calculate the time differences and the corresponding propagation velocities of the longitudinal wave and the surface transverse wave, respectively. Finally, pulse trains with different energy levels generated by the signal generator were used as AE sources to study the attenuation law of AE signal energy with distance under different boundary conditions. The results show that the boundary changes can lead to a significant increase in the surface transverse wave velocity, and have no significant effect on the longitudinal wave velocity. At the same time, the energy of both surface and longitudinal waves attenuates faster after the aluminum plate and sound-absorbing cotton are affixed: the distance over which the surface transverse wave attenuates to 90% is reduced from 186 to 139 mm, and the corresponding propagation distance of the longitudinal wave is reduced from 312 to 226 mm. Acoustic emission (AE) refers to the phenomenon in which strain energy is released in the form of a transient elastic wave when a material is deformed or fractured by external or internal forces [1]. AE technology, as a non-destructive testing method, has been widely used on metals, wood, and composites [2,3,4]. In wood science, AE technology is widely used to monitor processing, drying and fracture damage. For example, Kawamoto et al. used AE to monitor defects during wood drying [5]. Kim et al. used principal component analysis and an artificial neural network to classify AE signals in the oak drying process, and the results showed that AE technology can effectively monitor the drying process of wood [6]. In recent years, AE has also been applied to the monitoring of wood processing. Nasir et al. effectively monitored the sawing process of fir under extreme cutting conditions by extracting AE signal characteristics [7]. Bucur et al. used AE technology to study the relationship between internal crack propagation in wood and AE signal characteristics [8]. Lamy et al. used AE technology to study the failure process of wood under monotonic loading [9]. Li et al. studied the acoustic emission signal propagation characteristics of Pinus massoniana plywood using AE technology [10]. In the current literature, most researchers have used the pencil–lead break (PLB) as the AE source to simulate the damage of materials and to clarify the propagation characteristics of the AE signal generated by the simulated damage source [11,12,13,14,15]. Wang et al. verified that PLB tests can effectively simulate wood damage.
At the same time, the effect of surface cracks on AE signals was studied by PLB tests, and the results showed that surface cracks affect the spectrum characteristics and propagation speed of AE signals [16, 17]. Shen et al. [18] used spectral analysis to study the AE signals generated during wood damage and fracture, and showed that these signals can be divided into three categories according to their characteristic spectra. The propagation of acoustic emission in a structure is accompanied by reflection, attenuation and other phenomena [19]. To study the energy attenuation law of different waveforms in wood, Zhao et al. established an energy attenuation model, simulating the acoustic emission source through PLB tests; the location of the damage source in a wood mortise-tenon structure was then determined using the energy attenuation model together with a two-point localization method, and the results showed that the energy attenuation model is applicable to wood [20]. Li et al. [21] and Ding et al. [22] studied the energy attenuation law of the surface shear wave and the internal longitudinal wave in wood by designing a test that separates the two wave types, and the results showed that the energy of both the surface shear wave and the internal longitudinal wave decreases exponentially. There are two methods for processing and analyzing AE signals: one is parametric analysis and the other is wavelet analysis [23, 24]. Li et al. [25] used the wavelet analysis method to study the AE signal characteristics of a Pseudotsuga menziesii plywood beam, and calculated the propagation velocity of the AE signal along the surface of the beam. Liu et al. [26] carried out noise reduction on acoustic emission signals from particleboard compression using wavelet analysis; the results showed that wavelet analysis can retain the abrupt parts of the AE signal well and effectively reduce the influence of noise. Li et al. [27] used wavelet analysis to extract AE signals in different frequency bands on the surface and inside of camphor pine, and used the AE signals in the effective frequency band to calculate the propagation velocity; the results showed that the accuracy of the calculated propagation velocity of AE signals could be effectively improved. The propagation speed of an AE signal is calculated from the propagation time difference between two sensors and the distance between the sensors, and the most commonly used method to calculate the propagation time difference is signal correlation analysis [28,29,30]. At present, most studies on the AE characteristics of wood focus only on the AE signals of wood cracks and the wood surface. However, in practical engineering applications, wood is often closely connected with other types of materials. To study the influence of different boundary conditions on the propagation characteristics of AE signals in wood, this paper takes Zelkova schneideriana as the experimental material, selects an isotropic metal (aluminum plate) and an anisotropic porous medium (sound-absorbing cotton), and closely adheres them to the wood surface, so that the original boundary conditions of the wood are changed. Lead core fracture and a signal generator are used as simulated AE sources, and on the basis of the lead core fracture tests,
the propagation velocity of the AE longitudinal wave and surface wave is calculated by the time difference of arrival (TDOA) and correlation analysis. Finally, 150 kHz pulse signals with different voltage levels generated by the signal generator are used as AE sources to study the influence of boundary conditions on AE energy attenuation. Experimental materials A Zelkova schneideriana specimen with a smooth surface and no defects was selected; its size was 800 mm × 60 mm × 30 mm, its density was 0.72 g/cm3, and its moisture content (MC) was 11.8%. The test specimens were divided into four groups and the boundary conditions of 3 groups were changed: the specimen without added boundary conditions was recorded as T1, while the specimens with an aluminum plate, with sound-absorbing cotton, and with both aluminum plate and sound-absorbing cotton on the wood surface were recorded as T2, T3 and T4, respectively, as shown in Fig. 1. The 5-channel AE signal acquisition system was built from an NI USB-6366 high-speed acquisition card and LabVIEW 2017 software (National Instruments, USA). The RS-2A single-ended resonant AE sensor was selected, whose bandwidth is 50 to 400 kHz. To realize long-distance transmission of AE signals, a PAI front-end amplifier with a gain of 40 dB was used, as shown in Fig. 2. During the test, the sampling frequency of the system was set to 500 kHz, and the output voltage range was set to (−5 V, 5 V). The signal generator was a SIGLENT-SDG805 with a single channel, a maximum output frequency of 5 MHz, a maximum sampling rate of 125 MSa/s, and an output voltage range of 4 mVpp to 20 Vpp. Fig. 1 Classification of specimens. Fig. 2 Schematic diagram of the test scheme. ① Ruler ② AE source ③ AE signal acquisition computer ④ PAI front-end amplifier ⑤ Sound-absorbing cotton ⑥ Sensors ⑦ Aluminum plate ⑧ Vise ⑨ NI acquisition card ⑩ Arbitrary waveform generator According to the American national standard ASTM E976 [31], an automatic pencil lead with a diameter of 0.5 mm was placed at an angle of 30° to the specimen surface and broken 2.5 mm away from the contact point, producing the sudden (burst) AE source used to calculate the propagation velocity of the AE signal in wood. A signal generator was used to generate a 150 kHz burst to analyze the energy attenuation of the AE signal in the wood. As shown in Fig. 1, the acquisition sensors were placed at equal intervals of 150 mm in the experiment, the distance between the outermost sensors and the left and right end faces was 100 mm, and the sensors were numbered S1 to S5 from right to left. According to mechanical wave vibration theory, the particle vibration direction of a shear wave is perpendicular to the propagation direction of the wave, and the particle vibration direction of a longitudinal wave is parallel to the propagation direction of the wave. Therefore, when the burst and continuous AE sources are generated at the a1 position on the wood surface, the signal detected by sensors S1 to S5 is mainly the shear wave of the wood surface [21], and when the burst and continuous AE sources are generated at the a2 position on the wood end face, the signal detected by sensors S1 to S5 is mainly the longitudinal wave of the wood. The frequency components of AE signals produced by PLB are complex and spread over a wide frequency range, so, to extract clearer AE signals, wavelet analysis is used to decompose them into four layers.
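As an illustration of this decomposition step, a minimal sketch using Python and the PyWavelets package is given below; the package, the db8 mother wavelet and the variable names are assumptions made for the sketch and are not specified by the authors.

import numpy as np
import pywt

FS = 500_000  # sampling frequency of the acquisition system (500 kHz)

def detail_signals(signal, wavelet="db8", levels=4):
    # Four-level discrete wavelet decomposition of one AE waveform.
    # Returns the reconstructed detail signals d1..d4 and their nominal bands (Hz).
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    details = {}
    for i in range(1, levels + 1):            # d1 = finest band, d4 = coarsest
        kept = [np.zeros_like(c) for c in coeffs]
        kept[-i] = coeffs[-i]                 # keep a single detail level
        rec = pywt.waverec(kept, wavelet)[: len(signal)]
        band = (FS / 2 ** (i + 1), FS / 2 ** i)   # e.g. d3 -> (31.25 kHz, 62.5 kHz)
        details["d%d" % i] = (rec, band)
    return details

With fs = 500 kHz, the nominal bands produced in this way coincide with the frequency ranges quoted in the next paragraph.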
According to the multi-resolution analysis theory of the wavelet transform, the frequency ranges of the four-layer detail signals decomposed by wavelet are (125 kHz, 250 kHz), (62.5 kHz, 125 kHz), (31.25 kHz, 62.5 kHz), (15.625 kHz, 31.25 kHz). To study the propagation velocity of AE surface transverse wave and longitudinal wave in wood, the propagation time difference Δt was determined by TDOA. On the basis of wavelet analysis, the propagation velocity of AE surface wave was calculated by signal correlation. According to the elastic wave theory, the propagation velocity of the longitudinal wave in the material is greater than that of the surface transverse wave. The first wave received by the sensor is mainly composed of the longitudinal wave, and the longitudinal wave velocity is calculated by TDOA. The distance Δs of the sensors remains unchanged during the test, and the propagation speed of AE signal can be calculated according to \(v=\mathrm{\Delta s}/\Delta t\). Cross correlation function represents the similarity between two signals, the cross correlation function of signals x(t) and y(t) is defined as $$\begin{array}{c}{R}_{xy}\left(\uptau \right)=\underset{T\to \infty }{\mathrm{lim}}\frac{1}{T}{\int }_{0}^{T}x\left(t\right)y\left(t+\tau \right){d}_{t}\end{array}$$ When τ = τ0, the absolute value of the cross-correlation function | Rxy (τ0) | reaches the maximum, and it means that the signal y (t) has the highest similarity with the signal x (t) after τ0 units are shifted on the time axis. To calculate the energy of the AE signal of the specimen, the AE signal is regarded as the alternating current. The energy of the AE signal is the heat generated by the AE signal through the unit resistance in a certain time: $$\begin{array}{c}W={\int }_{0}^{t}\frac{{U}^{2}}{R}d\tau \end{array}$$ The AE signal collected through the system is discontinuous, Eq. (2) needs to be discretized, and the two data are separated by 1/fs second. Assuming that the discrete process adopts zero-order retainer, that is, the amplitude of the signal remains unchanged during this period, then the AE signal energy can be calculated as Eq. (3): $$\begin{array}{c}W=\sum_{i=1}^{n}\Delta {t}_{i}\cdot{u}_{i}^{2}=T\sum_{i=1}^{n}{u}_{i}^{2}\end{array}$$ \(\Delta {t}_{i}=T=1/{f}_{s}\left(i=\mathrm{1,2},\dots ,n\right)\), where fs is the sampling frequency and n is the data length. Surface transverse wave velocity of AE signal under different boundary conditions Teodorovich et al., Pang et al. and Calvet et al. extracted and analyzed the features of the fading signal using AE signal features, spectrum and pattern recognition [32,33,34]. Figure 3 shows the time-domain signal and standing wave spectrum generated by the PLB, from top to bottom are sensors S1 to S5, respectively. As shown in Fig. 3a, all five sensors have a relatively stable wave in the time domain signal, which is considered as a "standing wave". According to the theory of elastic wave, standing wave refers to two kinds of waves with the same frequency and opposite transmission direction. One wave is generally the reflected wave of another wave, and the standing wave always exists in the propagation process of AE signal. To clarify the frequency domain characteristics of the standing wave, the stable waves corresponding to the five sensors are intercepted and analyzed by fast Fourier transform. As shown in Fig. 3b, the principal components of the five sensors are concentrated around 5 kHz. 
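For reference, the time-difference, velocity and energy computations of Eqs. (1)–(3) above can be sketched in a few lines of Python/NumPy; this is an illustrative reconstruction, and the function names and the use of np.correlate are our own choices rather than the authors' code.

import numpy as np

FS = 500_000        # sampling frequency (Hz)
SPACING = 0.150     # distance between adjacent sensors (m)

def time_difference(x, y, fs=FS):
    # Lag that maximises |R_xy| (Eq. 1), i.e. the arrival-time difference of
    # the same wave at two sensors; positive if y lags x.
    r = np.correlate(y, x, mode="full")
    lag = np.argmax(np.abs(r)) - (len(x) - 1)
    return lag / fs

def velocity(x, y, distance=SPACING, fs=FS):
    # v = delta_s / delta_t, as used for both the TDOA and correlation estimates.
    dt = time_difference(x, y, fs)
    return distance / dt

def energy(u, fs=FS):
    # Discretised Eq. (3): W = T * sum(u_i^2) with T = 1/fs and unit resistance.
    u = np.asarray(u, dtype=float)
    return np.sum(u ** 2) / fs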
When calculating the surface wave velocity, wavelet analysis was used to denoise the AE signal in order to eliminate the influence of the standing wave. Fig. 3 AE waveform and standing wave frequency domain of PLB: a AE waveform produced by lead breaking; b standing wave spectrum. Figure 4 shows the wavelet analysis layering diagram: based on wavelet theory, the AE signal is decomposed by 4 layers of wavelets to remove the standing wave components in the frequency band centered at 5.9 kHz. The AE signals collected by the sensors are mainly distributed in the d1 and d3 frequency bands, and the proportion of d2 and d4 components in the signal is small. d1 is dominated by high frequencies; because the porous structure and viscoelasticity of wood have a filtering effect on AE signals in certain frequency bands [16], and because viscosity is the main factor causing the attenuation of a stress wave propagating in a viscoelastic medium at any time and position, the high-frequency components of the signal attenuate rapidly [34]. Since high-frequency components decay faster in wood specimens and would therefore increase the error of the correlation analysis, the AE signal in the d3 frequency band was used to calculate the surface wave propagation velocity by correlation. Ten independent tests were carried out on the four specimens, where v1 and v2 denote the propagation velocity at a distance of 150 mm and 300 mm, respectively, and the test results are shown in Table 1. Fig. 4 Layering diagram of wavelet analysis. Table 1 Propagation velocity of the third layer surface wave under different boundary conditions Table 1 gives the surface transverse wave velocity corresponding to different boundary conditions. It can be seen from Table 1 that the velocity of the surface transverse waves varies with the boundary conditions. In the 10 independent tests of the T1 specimen without boundary conditions, the average values of v1 and v2 are 938 m/s and 949 m/s, respectively. The average values of v1 and v2 of the sound-absorbing cotton specimen T3 are 938 m/s; the sound-absorbing cotton is loose and porous and can absorb part of the signal, so the reflection is weaker than in T1 and the velocity remains stable. However, in the T2 and T4 specimens with an aluminum plate, the velocity increases obviously. The reason for this change is that, although the same reflection phenomenon exists in the T2 and T4 specimens, the propagation speed of the AE signal in the aluminum plate is greater than that in wood [36], so when the AE signal reaches the aluminum-plate interface the signal is totally reflected, and most of the AE signal propagates inside the wood. According to elastic wave theory, the internal propagation velocity of wood is greater than the surface propagation velocity, which leads to the increase of the surface transverse wave velocity when the aluminum plate is added. Longitudinal velocity of AE signal under different boundary conditions To explore the effect of boundary conditions on the longitudinal wave velocity, the propagation velocity of the AE longitudinal wave was calculated by TDOA. Ten independent experiments were carried out on the four specimens, the experimental results are shown in Table 2, where vi1 and vi2 (i = 1, 2, 3, 4) represent the propagation velocity at a distance of 150 mm and 300 mm, respectively.
Table 2 Propagation velocity of longitudinal waves under different boundary conditions It can be seen from Table 2 that the average propagation velocities of vi1 and vi2 are 4688 m/s and 5000 m/s, respectively, indicating that there is no significant change in the propagation velocity of AE signals by changing the boundary conditions. According to the vibration theory of mechanical waves, longitudinal waves can propagate in solid, liquid and gas media. When AE signal propagates from wood to another medium, there is little effect on longitudinal wave. Energy attenuation law of AE signal under different boundary conditions To study the energy attenuation law of AE signal under different boundary conditions, the pulse signal with the emission frequency of 150 kHz and the cycle number of 15,000 was generated by the signal generator. In this experiment, the position of the sensor and the AE source was kept unchanged, and the initial energy emitted by the AE source changes as the voltage level was set to 20 V, 15 V, 10 V and 5 V, thus, the energy attenuation can be studied under different boundary conditions and different amplitude conditions. To more intuitively reflect the energy attenuation of each specimen at different amplitudes, Figs. 5 and 6 show the energy attenuation curves of surface and longitudinal waves at different source voltages, respectively. Since the AE energy decays too fast in wood, the real energy is taken as the longitudinal coordinate after logarithm. Energy attenuation curve of AE surface transverse wave Energy attenuation curve of AE longitudinal wave The energy attenuation curves of AE surface transverse wave and longitudinal wave under different source voltage levels corresponding to test pieces T1–T4 are shown in (a) to (d) of Figs. 5 and 6, respectively. Although the voltage amplitude is different, the energy attenuation law is basically the same under the same boundary condition. To intuitively express the attenuation trend of energy, the relative distance between each sensor and AE source is taken as the independent variable, and the exponential function is used to numerically fit the energy value of AE signal. Figures 7 and 8 are the fitting curves of energy attenuation under different voltage levels, which clearly characterize the energy attenuation of different specimens. In the fitting process, the energy measured by the sensor closest to the AE source is regarded as 1, and the energy measured by other sensors is normalized to obtain the fitting curve. In the fitting equations shown in Figs. 7 and 8, the coefficient before x is the AE attenuation coefficient under different voltage levels. After calculating the average value, the absolute value is taken, which is called the attenuation rate of this specimen. It is expressed by K. The greater the absolute value of K is, the faster the AE energy attenuation rate is. To show the energy attenuation law of AE signal, the distance from energy attenuation to 50% and 90% and the energy attenuation rate are used to illustrate the energy attenuation. Fitting curve of AE surface transverse wave energy attenuation Fitting curve of AE longitudinal wave energy attenuation When characterizing the attenuation law of energy with distance, due to the large attenuation of AE energy, the AE energy is logarithmically processed. There is a significant difference in the order of magnitude between the energy value after logarithm and the distance, and direct fitting is easy to produce a large fitting error. 
For this reason, the distance is linearly processed in the following: $$\begin{array}{c}x=\frac{D-\mathrm{mean}\left(D\right)}{\mathrm{std}\left(D\right)}\end{array}$$ where D is the actual distance of AE sensor placement, D ∈ [0,600 mm], at this time is the sensor S1 as the origin; x is the transformed equivalent distance, that is, the horizontal axis coordinates in Figs. 7 and 8; mean(D) is the expectation of D; std(D) is the variance of D. In this paper, the expectation and variance are 300 mm and 237.2 mm, respectively. The K value in Figs. 7 and 8 is the average slope of the fitting curve of different voltage levels, which represents the attenuation rate of energy. The greater the absolute value of K, the faster the attenuation rate of energy. Comparing Figs. 7 and 8, it can be seen that the |K| of surface transverse waves is greater than that the |K| of longitudinal waves, which means that the surface transverse wave energy decays faster than the longitudinal wave. This is because the longitudinal wave mainly propagates along the internal texture direction of the wood, the propagation resistance is small, so the energy attenuation is slow. But the surface transverse wave mainly propagates along the surface of the wood, which will produce energy conversion with the surface fluid, and the energy attenuation rate is faster than the longitudinal wave. It can be seen from Figs. 7 and 8 that in the T1 specimen without any added boundary conditions, the energy attenuation rate |K| of both surface and longitudinal waves is smaller than that of other specimens, and as the boundary conditions are changed, |K| increases gradually, which is mainly due to the different reflection and transmission intensities at different boundaries. Compared with the specimen T2 with aluminum plate, the attenuation rate of T2 was greater than that of T1, which was because the surface transverse wave mainly propagated near the specimen surface, but not completely on the specimen surface, and was greatly affected by the medium in the propagation process [37]. When the AE signal propagates to the boundary condition, the reflection will occur. The propagation velocity of AE signal in aluminum plate is greater than that in wood, so the addition of aluminum plates will make the reflection enhanced and the energy decays fast. Compared with T1 and T3, sound-absorbing cotton is a loose porous medium, adding sound-absorbing cotton will make part of the energy transmission, making the energy decay faster. Compared with the propagation in a single wood specimen T1, the specimen T4 with the addition of aluminum plate and sound-absorbing cotton can result in enhanced reflection and transmission and fast energy decay rate. To clearly show the attenuation law of energy, the energy attenuation is expressed by the distance of energy attenuation to 50% and 90%, as shown in Fig. 7, the attenuation rate of surface transverse wave in single wood T1 specimen is the slowest, and the attenuation distances to 50% and 90% are 56 mm and 186 mm, respectively. When the boundary conditions were changed, the surface transverse wave attenuation rate of T4 specimen with aluminum plate and sound-absorbing cotton is the fastest, and the attenuation distances to 50% and 90% are 42 mm and 139 mm, respectively. As shown in Fig. 8, the attenuation law of energy during the propagation of longitudinal waves is similar to that of surface transverse waves. 
The attenuation rate of longitudinal wave in single wood T1 specimen is the slowest, the attenuation distance to 50% and 90% are 94 mm and 312 mm, respectively. When the boundary conditions are changed, the attenuation rate of longitudinal wave in T4 specimen with aluminum plate and sound-absorbing is the fastest, the attenuation distance to 50% and 90% are 68 mm and 226 mm, respectively. According to the attenuation distance in Figs. 7 and 8, the distance from energy attenuation to 50% is shorter than the distance used to increase the attenuation by 40%, indicating that the energy attenuation rate of AE signal is faster in the early stage of propagation, and slower in the late stage. This is mainly because at the initial stage of propagation, AE signals are mainly concentrated in the high frequency band, and the proportion of low-frequency signal components is small, and the high-frequency signal attenuation is obvious in the process of forward propagation. This paper designed the surface and longitudinal wave extraction experiments for wood based on the theory of elastic and mechanical wave vibrations. The propagation rates of different types of AE signals under different boundary conditions were calculated, and the decay characteristics of AE energy were analyzed. In the analysis of the velocity, wavelet analysis was used for the surface transverse waves to perform a 4-layer wavelet decomposition of the original AE signal, and it can be found that the main components of the surface transverse waves were concentrated at 37.7 kHz and 165.3 kHz. Since the high frequency signal decays quickly, the third layer detail signal was used to calculate the surface transverse wave velocity. When the boundary conditions were aluminum plate and aluminum plate with sound-absorbing cotton, the surface transverse wave velocity increased in varying degrees compared with the original wood specimen. Since the longitudinal wave and the surface transverse wave propagation medium are different, the longitudinal wave propagation velocity was calculated based on TDOA, and the results of the study showed that the change of boundary conditions had no significant effect on the longitudinal wave velocity. When exploring the AE energy attenuation law, the signal generator was used as the simulated AE source to study the decay of AE energy under different boundary conditions. The change in the initial energy value did not affect the attenuation law of AE energy, and the change in the boundary conditions caused the change in the energy attenuation rate. Under the four different boundary conditions, the surface transverse waves' energy attenuation rates are 2.94, 3.24, 3.15, and 3.93, respectively, and the longitudinal waves' energy attenuation rates are 1.75, 1.89, 2.12, and 2.24, respectively. The energy attenuation of surface transverse wave is more obvious than that of longitudinal wave. The results of this paper have a great practical significance for AE detection under complex boundary conditions, and provide a basic theoretical basis for how to deal with AE detection data under complex boundary conditions. In the subsequent studies, the propagation and attenuation of AE signals under different boundary conditions can be further quantitatively analyzed. The experimental data used in this study are available on request from the corresponding Author. 
AE: Acoustic emission PLB: Pencil–lead break TDOA: The time difference of arrival Baensch Z, Sanabria SJ, Sause MGR, Pinzer BR, Brunner AJ (2015) Damage evolution in wood: synchrotron radiation micro-computed tomography (SR μ CT) as a complementary tool for interpreting acoustic emission (AE) behavior. Holzforschung 69(8):1015–1025. https://doi.org/10.1515/hf-2014-0152 Bobrov AL (2017) Methodical principles of recognition different source types in an acoustic-emission testing of metal objects. J Phys Conf Ser 881(1):012020. https://doi.org/10.1088/1742-6596/881/1/012020 Diakhate M, Bastidas-Arteaga E, Pitti RM, Schoefs F (2017) Cluster analysis of acoustic emission activity within wood material: towards a real-time monitoring of crack tip propagation. Eng Fract Mech 180:254–267. https://doi.org/10.1016/j.engfracmech.2017.06.006 Kong X, Wang Y, Yang Q, Zhang X, Yang R (2020) Damage identification in fiber reinforced titanium matrix composites using acoustic emission. J Alloy Compd 826:153928. https://doi.org/10.1016/j.jallcom.2020.153928 Kawamoto S, Williams RS (2002) Acoustic emission and acousto-ultrasonic techniques for wood and wood-based composites: a review. For Prod. https://doi.org/10.2737/FPL-GTR-134 Kim KB, Kang HY, Dong JY, Man YC (2005) Pattern classification of acoustic emission signals during wood drying by principal component analysis and artificial neural network. Key Eng Mater 297–300(Pt3):1962–1967. https://doi.org/10.4028/www.scientific.net/KEM.297-300.1962 Nasir V, Cool J, Sassani F (2019) Acoustic emission monitoring of sawing process: artificial intelligence approach for optimal sensory feature selection. Int J Adv Manuf Technol 102(9):4179–4197. https://doi.org/10.1007/s00170-019-03526-3 Bucur V, Declercp NF (2006) The anisotropy of biological composites studied with ultrasonic technique. Ultrasonics 44(4):e829–e831 Lamy F, Takarli M, Angellier N, Dubois F, Pop O (2015) Acoustic emission technique for fracture analysis in wood materials. Int J Fract 192(1):57–70. https://doi.org/10.1007/s10704-014-9985-x Li XC, Ju S, Luo TF, Li M (2019) Influence of adhesive layer at masson pine glulam on acoustic emission signal propagation characteristics. J Northwest For Univ 34(3):185–190. https://doi.org/10.3969/j.issn.1001-7461.2019.03.29 Dong L, Hu Q, Tong X, Liu Y (2020) Velocity-free MS/AE source location method for three-dimensional hole-containing structures. Engineering 6(7):827–834. https://doi.org/10.1016/j.eng.2019.12.016 Jeong K, Park KJ (2019) One sensor source localisation of acoustic emissions in thin plates using mode analysis. Insight 61(5):264–270. https://doi.org/10.1784/insi.2019.61.5.264 Kuwahara R, Ojima H, Matsuo T, Cho H (2013) Development of acoustic emission waveform simulation technique utilizing a sensor response and finite-difference time-domain method. J Solid Mech Mater Eng 7(2):176–186. https://doi.org/10.1299/jmmp.7.176 Markus GR (2011) Investigation of pencil-lead breaks as acoustic emission sources. J Acoust Emiss 29:184–196 Yu H, Xiao D, Ma X, Tian H (2014) Near-field beamforming performance analysis for acoustic emission source localization. J Vibroengineering 158(4):127–139. https://doi.org/10.1007/978-1-4939-1239-1_12 Wang MH, Deng TT, Fang SY, Li XS, Lai F, Li M (2021) Generation and characteristics of simulated acoustic emission source of wood. J Northeast For Univ 49(6):96–101. 
https://doi.org/10.13759/j.cnki.dlxb.2021.06.019 Wang MH, Deng TT, Ju S, Li XC, Li XS, Li M (2020) Effect of wood surface crack on acoustic emission signal propagation characteristics. J Northeast For Univ 48(10):82–88. https://doi.org/10.13759/j.cnki.dlxb.2020.10.015 Shen KN, Zhao HL, Ding XC, Li M (2015) Acoustic emission signal wavelet disjunction in wood damage and fracture process. J Henan Univ Sci Technol 36(3):33–37 Fan X, Hu S, Lu J, Wei C (2016) Acoustic emission properties of concrete on dynamic tensile test. Constr Build Mater 114:66–75. https://doi.org/10.1016/j.conbuildmat.2016.03.065 Zhao XM, Jiao LL, Zhao J, Zhao D (2017) Acoustic emission attenuation and source location on the bending failure of the rectangular mortise-tenon joint for wood structures. J Beijing For Univ 39(1):107–111. https://doi.org/10.13332/j.1000-1522.20160150 Li M, Wang MH, Ding R, Fang SY, Lai F, Luo RH (2021) Study of acoustic emission propagation characteristics and energy attenuation of surface transverse wave and internal longitudinal wave of wood. Wood Sci Technol 55(6):1619–1637. https://doi.org/10.1007/s00226-006-0117-2 Ding R, Fang SY, Luo RH, Lai F, Yang ZL, Huang CL, Li M (2022) Propagation characteristics and energy attenuation law of surface shear waves and internal longitudinal waves in Mongolian Scotch Pine sawn timber based on acoustic emission. Chin J Wood Sci Technol 36(1):36–42. https://doi.org/10.12326/j.2096-9694.2021104 Ebrahimiana Z, Ahmadi M, Sadri S, Li BQ, Moradian O (2019) Wavelet analysis of acoustic emissions associated with cracking in rocks. Eng Fract Mech 217:106516–106526. https://doi.org/10.1016/j.engfracmech.2019.106516 Yu SS, Shen LJ, Li Y, Li M (2017) Acquisition and characteristic analysis of the surface of pinus yunnanensis acoustic emission signal. J Northwest For Univ 32(2):247–251. https://doi.org/10.3969/j.issn.1001-7461.2017.02.42 Li Y, Yu SS, Dai L, Luo TF, Li M (2018) Acoustic emission signal source localization on plywood surface with cross-correlation method. J Wood Sci 64(2):78–84. https://doi.org/10.1007/s10086-017-1672-x Liu YF, Yin DM (2005) The depression of the noise in AE from particleboard based on wavelet analysis. J Nanjing For Univ 29(6):91–94. https://doi.org/10.3969/j.issn.1000-2006.2005.06.023 Li XS, Deng TT, Wang MH, Luo RH, Li M (2021) Frequency domain identification of acoustic emission signals on surfaceand interior of Pinus sylvestris var mongolica based on wavelet analysis. J Northwest For Univ 36(4):209–213. https://doi.org/10.3969/j.issn.1001-7461.2021.04.30 Dou CF, Li M, Zhu DG (2021) The detection of hole defects in the simulation of wood borer based on acoustic emission technology. J Cent South Univ For Technol 41(2):162–170. https://doi.org/10.14067/j.cnki.1673-923x.2021.02.019 Jing ZW, Jiang MS, Sui QM, Sai YZ, Lu SZ, Cao YQ, Jia L (2013) Acoustic emission localization technique based on generalized cross-correlation time difference estimation algorithm. Chin J Sens Actuator 26(11):1513–1518. https://doi.org/10.3969/j.issn.1004-1699.2013.11.009 Shen KN, Ding XC, Zhao HL, Li M (2015) Acoustic emission signal source localization in wood surface with triangle positioning method. J Northeast For Univ 43(4):77–81. https://doi.org/10.13759/j.cnki.dlxb.20150116.029 American National Standard, ASTM-E976 (1993) Standard guide for determining the reproducibility of acoustic emission sensor response Calvet M, Margerin L (2012) Velocity and attenuation of scalar and elastic waves in random media: a spectral function approach. 
J Acoust Soc Am 131(3):1843–1862. https://doi.org/10.1121/1.3682048 Pang HD, Zhang XM, Jiang FX (2004) The spectrum analysis of acoustic emission signal in rock materials. J China Coal Soc 29(5):540–544. https://doi.org/10.1007/BF02911033 Teodorovich SB (2003) Technique of measurements of elastic wave attenuation parameters. Russ J Nondestr Test 39(6):427–435. https://doi.org/10.1023/B:RUNT.0000011623.75582.cc Wang GS, Li CH, Hu SL, Feng C, Li SH (2010) A study of time-and spatial-attenuation of stress wave amplitude in rock mass. Rock Soil Mech 31(11):3487–3492. https://doi.org/10.16285/j.rsm.2010.11.023 Zhou ZG, Feng ZY, Gao YF, Zhu Z (2008) Application of ultrasonic guided waves to defect inspection of large thin aluminum plate. Acta Aeronaut Astronaut Sin 29(4):1044–1048. https://doi.org/10.3321/j.issn:1000-6893.2008.04.045 Qian ZH, Jin F, Hirose S (2011) Dispersion characteristics of transverse surface waves in piezoelectric coupled solid media with hard metal interlayer. Ultrasonics 51(8):853–856. https://doi.org/10.1016/j.ultras.2011.06.005 The authors are grateful for the support of the China Natural Science Foundation (NO: 32160345, 31760182) and Department of Education of Yunnan Provincial (NO: 2021J0156, NO: 2021J0158). Startup fund for introducing talents and scientific research of Anhui University of Engineering (NO: 2021YQQ037). School of Machinery and Transportation, Southwest Forestry University, Kunming, 650224, Yunnan, China Yue Zhao, Saiyin Fang, Shaochun Zhang, Changlin Huang, Tingting Deng, Feilong Mao, Gezhou Qin & Daigen Zhu Key Laboratory of Advanced Perception and Intelligent Control of High-End Equipment of Ministry of Education, Anhui Polytechnic University, Wuhu, 241000, Anhui, China School of Electrical Engineering, Anhui Polytechnic University, Wuhu, 241000, Anhui, China Yue Zhao Saiyin Fang Shaochun Zhang Changlin Huang Tingting Deng Feilong Mao Gezhou Qin Daigen Zhu ZY conceived the study and designed the methodology; HCL, ZSC, MFL and QGZ conducted the lab work; ZY collected the data, conducted the statistical analysis and led the writing of the manuscript; LM, FSY, DTT and ZDG devised conceptual ideas and provided project support and added substantial edits to the manuscript. All authors contributed critically to the drafts and gave final approval for publication. All the authors read and approved the final manuscript. Correspondence to Ming Li. Conflicts of interest the author declares that he has no conflict of interest. Zhao, Y., Li, M., Fang, S. et al. Influence of boundary conditions on acoustic emission propagation characteristics of Zelkova schneideriana. J Wood Sci 68, 62 (2022). https://doi.org/10.1186/s10086-022-02070-1 Wavelet transform Attenuation rate
CommonCrawl
Find the positive integer $n$ such that $$\arctan\frac {1}{3} + \arctan\frac {1}{4} + \arctan\frac {1}{5} + \arctan\frac {1}{n} = \frac {\pi}{4}.$$ Note that $\arctan \frac{1}{3},$ $\arctan \frac{1}{4},$ and $\arctan \frac{1}{5}$ are all less than $\arctan \frac{1}{\sqrt{3}} = \frac{\pi}{6},$ so their sum is acute. By the tangent addition formula, \[\tan (\arctan a + \arctan b) = \frac{a + b}{1 - ab}.\]Then \[\tan \left( \arctan \frac{1}{3} + \arctan \frac{1}{4} \right) = \frac{\frac{1}{3} + \frac{1}{4}}{1 - \frac{1}{3} \cdot \frac{1}{4}} = \frac{7}{11},\]so \[\arctan \frac{1}{3} + \arctan \frac{1}{4} = \arctan \frac{7}{11}.\]Then \[\tan \left( \arctan \frac{1}{3} + \arctan \frac{1}{4} + \arctan \frac{1}{5} \right) = \tan \left( \arctan \frac{7}{11} + \arctan \frac{1}{5} \right) = \frac{\frac{7}{11} + \frac{1}{5}}{1 - \frac{7}{11} \cdot \frac{1}{5}} = \frac{23}{24},\]so \[\arctan \frac{1}{3} + \arctan \frac{1}{4} + \arctan \frac{1}{5} = \arctan \frac{23}{24}.\]Then \begin{align*} \frac{1}{n} &= \tan \left( \frac{\pi}{4} - \arctan \frac{1}{3} - \arctan \frac{1}{4} - \arctan \frac{1}{5} \right) \\ &= \tan \left( \frac{\pi}{4} - \arctan \frac{23}{24} \right) = \frac{1 - \frac{23}{24}}{1 + \frac{23}{24}} = \frac{1}{47}, \end{align*}so $n = \boxed{47}.$
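A quick numerical sanity check of this value (not part of the original solution) can be run in Python:

import math

lhs = sum(math.atan(1 / k) for k in (3, 4, 5, 47))
print(abs(lhs - math.pi / 4) < 1e-12)   # True, confirming n = 47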
Math Dataset
\begin{document} \begin{abstract} In this article, we prove integration by parts formulae (IbPFs) for the laws of Bessel bridges from $0$ to $0$ over the interval $[0,1]$ of dimension smaller than $3$. As an application, we construct a weak version of a stochastic PDE having the law of a one-dimensional Bessel bridge (i.e. the law of a reflected Brownian bridge) as reversible measure, the dimension 1 being particularly relevant in view of applications to scaling limits of dynamical critical pinning models. We also exploit the IbPFs to conjecture the structure of the stochastic PDEs associated with Bessel bridges of all dimensions smaller than $3$. \end{abstract} \title{Bessel SPDEs and renormalised local times} \section{Introduction} The classical stochastic calculus due to Kiyosi It\^o was originally created as a tool to define and solve stochastic differential equations (SDEs). In classical monographs on the subject, see e.g. \cite{revuz2013continuous,KS,RW2}, Bessel processes play a prominent role as a fundamental example on which the extraordinary power of the theory can be tested. Stochastic partial differential equations (SPDEs) were invented around fifty years ago as a natural function-valued analog of SDEs, and are by now a well-established field which is increasingly active and lively. SPDEs driven by a space-time white noise have recently received much attention, because they are naturally associated with {\it ultraviolet divergences} and {\it renormalisation}, phenomena which are now mathematically well-understood in many circumstances using the recent theories of regularity structures \cite{Hairer2014d,BHZ} and of paracontrolled distributions \cite{GIP}. In particular, the classical stochastic calculus for semimartingales and SDEs has no analog for space-time-white-noise driven SPDEs, despite some early and more recent attempts \cite{Zambotti2006,Bellingeri}, because of the divergences created by the white noise. A partial substitute is given by the Fukushima stochastic calculus associated with Dirichlet forms \cite{fukushima2010dirichlet,ma2012introduction}, but the formulae that one obtains are often less explicit than one would hope. The marvellous power of the It\^o calculus for the study of fine properties of semimartingales remains without proper analog in genuinely infinite-dimensional processes. In this paper we discuss a particular class of equations which seems a natural analog of Bessel processes in the context of SPDEs driven by a space-time white noise. As we explain below, the standard approach to Bessel processes does not work at all for these Bessel SPDEs, and we have to apply a different method, with necessarily weaker results, at least in comparison with the finite-dimensional situation. We will rely on Dirichlet forms methods and on integration by parts formulae on path spaces. These will include distributional terms - rather than $\sigma$-finite measures - as in the theory of white noise calculus \cite{hida}. The processes that we consider have interesting path properties, as it is the case for Bessel processes, but with the enhanced richness of infinite-dimensional objects, see e.g. \cite{zambotti2017random} for a recent account. We hope that this work will further motivate the study of infinite-dimensional stochastic calculus, which is still in its infancy. 
\subsection{From Bessel SDEs to Bessel SPDEs} A squared Bessel process of dimension $\delta\geq 0$ is defined as the unique continuous non-negative process $(Y_t)_{t\geq 0}$ solving the SDE \begin{equation}\label{sqB1} Y_t=Y_0+\int_0^t2\sqrt{Y_s} \, \mathrm{d} B_s +\delta \,t, \quad t\geq 0, \qquad (\delta\geq 0) \end{equation} for $Y_0 \geq 0$, where $(B_t)_{t\geq 0}$ is a standard Brownian motion. Squared Bessel processes enjoy a remarkable additivity property (see \cite{shiga1973bessel} and \eqref{additivity_sqred_bes} below), and play a prominent role in several areas of probability theory. For instance, in population dynamics, they arise as the scaling limit of Galton-Watson processes with immigration. On the other hand, they play an important role in the study of the fine properties of Brownian motion, see e.g. the sections VI.3 and XI.2 in \cite{revuz2013continuous}. Moreover their fascinating behavior at the boundary point 0 can be studied in great detail, see e.g. \cite{zambotti2017random} for a recent account. Well-posedness of the SDE \eqref{sqB1} satisfied by $(Y_t)_{t\geq 0}$ follows from the classical Yamada-Watanabe theorem \cite[Theorem IX.3.5]{revuz2013continuous}. If we consider the {\it Bessel process} $X_t:=\sqrt{Y_t}$, $t\geq 0$, the situation is more involved. For $\delta>1$, by the It\^o formula, $X$ is solution to \begin{equation}\label{sde1} X_t=X_0+\frac{\delta-1}2\int_0^t \frac1{X_s}\, \mathrm{d} s+ B_t, \quad t\geq 0, \qquad (\delta>1) \end{equation} and this equation satisfies pathwise uniqueness and existence of strong solutions since the drift is monotone decreasing, see Prop 3.1 in \cite{zambotti2017random} or section V.48 in \cite{RW2}. On the other hand, for $\delta=1$, $X=\sqrt{Y}$ satisfies \[ X_t=X_0+L_t+ B_t, \qquad \quad t\geq 0, \qquad (\delta=1) \] where $(L_t)_{t\geq 0}$ is continuous and monotone non-decreasing, with $L_0=0$ and \begin{equation}\label{sde2} X\geq 0, \qquad \int_0^\infty X_s \, \mathrm{d} L_s=0. \end{equation} In other words $X$ is a reflecting Brownian motion, and the above equation has a unique solution by the Skorokhod Lemma \cite[Lemma VI.2.1]{revuz2013continuous}. For $\delta\in (0,1)$, the situation is substantially more difficult and it turns out that the relation \eqref{sde1} is not valid anymore in this regime. One can show, see e.g. \cite[Proposition 3.12]{zambotti2017random}, that $X$ admits {\it diffusion local times}, namely continuous processes $(\ell^a_t)_{t\geq 0,a\geq 0}$ such that \begin{equation}\label{otfo} \int_0^t \varphi(X_s)\, \mathrm{d} s =\int_0^\infty \varphi(a) \, \ell^a_t \, a^{\delta-1} \, \mathrm{d} a, \end{equation} for all Borel $\varphi:\mathbb{R}_+\to\mathbb{R}_+$, and that $X$ satisfies \begin{equation}\label{sde3} X_t=X_0+\frac{\delta-1}2\int_0^\infty\frac{\ell^a_t-\ell^0_t}a \, a^{\delta-1}\, \mathrm{d} a+B_t, \quad t\geq 0, \qquad (0<\delta<1). 
\end{equation} Note that by the occupation time formula \eqref{otfo} we have \[ \begin{split} &\int_0^\infty\frac{\ell^a_t-\ell^0_t}a \, a^{\delta-1}\, \mathrm{d} a = \lim_{\varepsilon\downarrow 0} \int_\varepsilon^\infty\frac{\ell^a_t-\ell^0_t}a \, a^{\delta-1}\, \mathrm{d} a = \\ & = \lim_{\varepsilon\downarrow 0} \left(\int_0^t \un{(X_s\geq\varepsilon)} \frac1{X_s}\, \mathrm{d} s - \ell^0_t \int_\varepsilon^\infty a^{\delta-2}\, \mathrm{d} a\right) \end{split} \] and in the latter expression both terms diverge as $\varepsilon\downarrow 0$, while the difference converges since $|\ell^a_t-\ell^0_t|\lesssim a^{1-\frac\delta2-\kappa}$ for any $\kappa>0$: this is why we speak of {\it renormalised local times}. The formula \eqref{sde3} is not really an SDE, and to our knowledge one cannot (so far) characterize $X$ as the unique process satisfying this property, unless one manages to prove that $X^2$ is a solution to \eqref{sqB1}. We stress again that the relation between \eqref{sqB1} and \eqref{sde1}-\eqref{sde2}-\eqref{sde3} is based on It\^o's stochastic calculus. In a series of papers \cite{Z01,zambotti2002integration,zambotti2003integration,zambotti2004occupation} the second author of this article studied a class of stochastic partial differential equations (SPDEs) with analogous properties. For a parameter $\delta>3$ the equation, that we call {\it Bessel SPDE}, is \begin{equation}\label{spde>3} \left\{ \begin{array}{ll} {\displaystyle \frac{\partial u}{\partial t}=\frac 12 \frac{\partial^2 u}{\partial x^2} + \frac {\kappa(\delta)}{2 \, u^3} + \xi } \\ \\ u(0,\cdot)=u_0, \ u(t,0)=u(t,1)=0 \end{array} \right. \qquad \qquad (\delta>3) \end{equation} where $u\geq 0$ is continuous and $\xi$ is a space-time white noise on $\mathbb{R}_+\times[0,1]$, and \begin{equation}\label{kappadelta} \kappa(\delta) := \frac{(\delta-3)(\delta-1)}{4}. \end{equation} As $\delta\downarrow 3$, the solution to \eqref{spde>3} converges to the solution of the Nualart-Pardoux equation \cite{nualart1992white}, namely the random obstacle problem \begin{equation}\label{spde=3} \left\{ \begin{array}{ll} {\displaystyle \frac{\partial u}{\partial t}= \frac 12\frac{\partial^2 u}{\partial x^2} + \eta+ \xi } \\ \\ u(0,\cdot)=u_0, \ u(t,0)=u(t,1)=0 \\ \\ u\geq 0, \ d\eta\geq 0, \ \int_{\mathbb{R}_+\times[0,1]} u\, \, \mathrm{d}\eta=0, \end{array} \right. \qquad \qquad (\delta=3) \end{equation} where $\eta$ is a Radon measure on $]0,\infty[\,\times\,]0,1[$. The unique invariant measure of \eqref{spde>3} for $\delta>3$, respectively \eqref{spde=3}, is the Bessel bridge of dimension $\delta$, resp. 3. In other words, the invariant measure has the law of $(X_t)_{t\in[0,1]}$ conditioned to return to 0 at time 1, where $X$ solves \eqref{sde1} with $X_0=0$ and $\delta>3$, respectively $\delta=3$. Equation \eqref{spde=3} also describes the fluctuations of an effective $(1+1)$ interface model near a wall \cite{funakiolla,funakistflour} and also arises as the scaling limit of several weakly asymmetric interface models, see \cite{etheridge2015scaling}. While \eqref{spde>3} for $\delta>3$ is the analog of \eqref{sde1} for $\delta>1$, \eqref{spde=3} is the analog of \eqref{sde2}. 
The analogy can be justified in terms of scaling invariance: the equations \eqref{sde1} and \eqref{sde2} are invariant (in law) under the rescaling $X_t\mapsto \lambda^{-1} X_{\lambda^2t}$ for $\lambda>0$, while \eqref{spde>3} and \eqref{spde=3} are invariant under $u(t,x)\mapsto\lambda^{-1} u(\lambda^4t,\lambda^2x)$ (apart from the fact that the space interval changes from $[0,1]$ to $[0,\lambda^{-2}]$). It has been an open problem for over 15 years to complete the above picture. Namely, what is an SPDE whose invariant measure is the Bessel bridge of dimension $\delta\in\,]0,3[$ ? Is it an SPDE analogue of \eqref{sde3} ? We stress that equations \eqref{spde>3} and \eqref{spde=3} enjoy nice properties (pathwise uniqueness, continuity with respect to initial data, the Strong Feller property) because of the {\it dissipative}, namely monotone non-increasing, character of the drift. This is however true only as long as the coefficient $\kappa(\delta)$ is positive, and fails for $\delta\in\,]1,3[$. In the regime $\delta<3$, we shall see that even the notion of solution becomes highly non-trivial, as for Bessel processes in the regime $\delta<1$. The nice properties mentioned above may still be true but the known techniques become ineffective. This problem is particularly interesting for $\delta=1$, which corresponds to the reflecting Brownian bridge as an invariant measure. Indeed, the reflecting Brownian bridge arises as the scaling limit of critical pinning models, see \cite{dgz}, \cite[Chapter 15.2]{funakistflour} and \cite{fattler2016construction,grothaus18feller}. Dynamical pinning models are believed to have a scaling limit, which would be an infinite-dimensional diffusion having the law of a reflecting Brownian motion as reversible measure. What kind of SPDE that limit should satisfy has however remained a very open question so far. Another application of a Bessel SPDE corresponding to $\delta=1$ could be the description of the scaling limits of the spin flip dynamics considered in \cite{caputo2008approach}. Note that the one-dimensional trick of considering a power of $u$, in this case for instance $v:=u^4$, in order to find a more tractable SPDE fails because one obtains rather frightening equations of the form \[ \frac{\partial v}{\partial t}=\frac 12 \frac{\partial^2 v}{\partial x^2} + 2\,\kappa(\delta) - \frac3{8 \, v} \, : \left(\frac{\partial v}{\partial x}\right)^2: + 4v^{\frac34}\, \xi \] where the $: \ :$ notation denotes a KPZ-type renormalisation. Even the theory of regularity structures \cite{Hairer2014d} does not cover this kind of equations, due to the non-Lipschitz character of the coefficients. One could hope that a Yamada-Watanabe result could be proved for this class of equations; it is an inspiring fact that the exponent $\frac34$ in the noise-term is known to be critical for pathwise uniqueness of parabolic SPDEs (without the KPZ-type term), see \cite{mytnik1,mytnik2}. This approach is, at present, completely out of reach. Therefore, in this paper, rather than tackling these difficulties, we answer the above questions by exploiting the specific, very nice structure underlying Bessel processes. More precisely, we derive integration by parts formulae for the law of Bessel bridges of dimension $\delta<3$. 
These formulae turn out to involve the laws of pinned Bessel bridges (or, more precisely, the measures $\Sigma^\delta_r(\,\cdot\,|\,a)$ defined in \eqref{Sigma} below) which should correspond to the local times of the solution $u$ to our would-be SPDEs, that is the process $(\ell^a_{t,x})_{a \geq 0}$ defined, at least formally, by \begin{equation} \label{otf} \int_0^t \varphi(u(s,x))\, \mathrm{d} s =\int_0^\infty \varphi(a) \, \ell^a_{t,x} \, a^{\delta-1} \, \mathrm{d} a, \end{equation} for all Borel $\varphi:\mathbb{R}_+ \to \mathbb{R}_+$. Some explicit computations on the measures $\Sigma^\delta_r(\,\cdot\,|\, a)$ suggest that this process should moreover have a vanishing first-order derivative at $0$, that is \begin{equation} \label{vanishing_derivative} \frac{\partial}{\partial a} \ell^{a}_{t,x}\, \biggr\rvert_{a=0} = 0, \quad \quad t \geq 0, \quad x \in (0,1). \end{equation} Finally, the integration by parts formulae that we find enable us to identify the structure of the corresponding Bessel SPDEs. Thus, in view of these formulae, for $1<\delta<3$, the SPDE should have the form \begin{equation}\label{1<spde<3} \frac{\partial u}{\partial t}=\frac 12 \frac{\partial^2 u}{\partial x^2} + \frac {\kappa(\delta)}{2}\frac\partial{\partial t}\int_0^\infty \frac1{a^3}\left(\ell^a_{t,x}-\ell^0_{t,x}\right) a^{\delta-1}\, \mathrm{d} a + \xi, \qquad (1<\delta<3). \end{equation} Then \eqref{1<spde<3} is the SPDE analog of \eqref{sde3}. On the other hand, for $\delta=1$, we find that the SPDE should be of the form \begin{equation}\label{spde=1} \frac{\partial u}{\partial t}=\frac 12 \frac{\partial^2 u}{\partial x^2} - \frac{1}{8} \frac\partial{\partial t}\frac{\partial^{2}}{\partial a^{2}} \, \ell^{a}_{t,x}\, \biggr\rvert_{a=0} + \xi, \qquad (\delta=1), \end{equation} while for $0<\delta<1$ \begin{equation}\label{0<spde<1} \begin{split} & \frac{\partial u}{\partial t}=\ \frac 12 \frac{\partial^2 u}{\partial x^2} + \xi \qquad\qquad\qquad\qquad\qquad\qquad\qquad (0<\delta<1) \\ & + \frac {\kappa(\delta)}{2}\frac\partial{\partial t}\int_0^\infty \frac1{a^3}\left(\ell^a_{t,x}-\ell^0_{t,x}-\frac{a^2}2\frac{\partial^{2}}{\partial a^{2}} \, \ell^{a}_{t,x}\, \biggr\rvert_{a=0}\right) a^{\delta-1}\, \mathrm{d} a. \end{split} \end{equation} In \eqref{1<spde<3}, as in \eqref{sde3}, we have a Taylor expansion at order 0 of the local times functions $a\mapsto\ell^a$. By contrast, equations \eqref{spde=1} and \eqref{0<spde<1} have no analog in the context of one-dimensional Bessel processes. In \eqref{0<spde<1} the Taylor expansion is at order 2, while \eqref{spde=1} is a limit case, like \eqref{spde=3}. In all the above, we say "the SPDE should have the form..." since existence and uniqueness of solutions to such equations are still open problems, as we discuss below. In the case $\delta=1$, we show below that our integration by parts formula and Dirichlet forms techniques allow to construct a Markov process $(u_t)_{t \geq 0}$ with the reflected Brownian bridge as reversible measure, and satisfying a modified version of equation \eqref{spde=1} above, namely \begin{equation}\label{formal1} \frac{\partial u}{\partial t}=\frac 12 \frac{\partial^2 u}{\partial x^2} - \frac{1}{4} \, \underset{\epsilon \to 0}{\lim} \, \rho''_{\epsilon}(u) + \xi, \qquad (\delta=1) \end{equation} where $\rho_{\epsilon}(x) = \frac{1}{\epsilon} \rho(\frac{x}{\epsilon})$ is a smooth approximation of the Dirac measure at $0$, see Theorem \ref{fukushima_decomposition} for the precise statements. 
Note that \eqref{formal1} is a weak form of \eqref{spde=1}, since the former does not require the existence of the local time process $\ell^a$. Similar arguments allow us to treat the case $\delta=2$: this will be done in a forthcoming article. In the cases $\delta\in\,]0,3[\,\setminus\{1,2\}$, we do not know how to prove that the associated Dirichlet form is well-defined and associated with a Markov process (namely that it is closable and quasi-regular); once this is done, our integration by parts formulae allow to show that the associated Markov process satisfies \eqref{1<spde<3} or \eqref{0<spde<1} according to the value of $\delta$. Although they seem quite different, all the above SPDEs can be written in a unified way as follows. We introduce for $\alpha\in\mathbb R$ the following distributions on $[0,\infty)$ \begin{itemize} \item if $ \alpha = -k$ with $k \in \mathbb{N}$, then \[ \langle \mu_{\alpha}, \varphi \rangle := (-1)^{k} \varphi^{(k)}(0), \qquad \forall \, \varphi \in C^\infty_0([0,\infty)) \] \item else, \[ \langle \mu_{\alpha} , \varphi \rangle := \int_{0}^{+ \infty} \left( \varphi(a) - \sum_{0\leq j\leq -\alpha} \frac{a^{j}}{j!} \, \varphi^{(j)}(0) \right) \frac{a^{\alpha -1}}{\Gamma(\alpha)} \, \mathrm{d} a, \quad \forall \, \varphi \in C^\infty_0([0,\infty)). \] \end{itemize} Note that, for all $\alpha \in \mathbb{R}$, $\mu_\alpha$ coincides with the distribution $\frac{x_{+}^{\alpha-1}}{\Gamma(\alpha)}$ considered in Section 3.5 of \cite{gelfand1964generalized}. Then for all $\varphi \in C^\infty_0([0,\infty))$, the map $\alpha\mapsto\langle \mu_{\alpha} , \varphi \rangle$ is analytic. Moreover, for $\delta>3$, the non-linearity in \eqref{spde>3} can be expressed by the occupation time formula \eqref{otf} and the definition of $\mu_\alpha$ as \[\frac{\kappa(\delta)}2\int_0^t\frac1{(u(s,x))^3}\, \mathrm{d} s = \frac{\kappa(\delta)}2\int_0^\infty \frac1{a^3} \, \ell^a_{t,x}\, a^{\delta-1}\, \mathrm{d} a = \frac{\kappa(\delta)\,\Gamma(\delta-3)}2\langle \mu_{\delta-3},\ell^{\boldsymbol{\cdot}}_{t,x}\rangle,\] which, by \eqref{kappadelta}, we can in turn rewrite as \begin{equation} \label{unified_expr_nonlinearity} \frac{\Gamma (\delta)}{8(\delta-2)} \langle \mu_{\delta-3},\ell^{\boldsymbol{\cdot}}_{t,x}\rangle, \end{equation} an expression which, at least formally, makes sense for any $\delta \in (0,\infty) \setminus \{2\}$. Note moreover that the singularity at $\delta=2$ is compensated by the cancellation of $\langle \mu_{\delta-3},\ell^{\boldsymbol{\cdot}}_{t,x}\rangle$ at $\delta=2$ as a consequence of \eqref{vanishing_derivative}. Then, the expression \eqref{unified_expr_nonlinearity} encapsulates, in a unified way, the non-linearities of \eqref{spde=3}-\eqref{1<spde<3}-\eqref{spde=1}-\eqref{0<spde<1}. In particular, for $\delta=3$, it equals $\frac{1}{4} \ell^0_{t,x}$, which is consistent with the results about the structure of the reflection measure $\eta$ in \eqref{spde=3} proved in \cite{zambotti2004occupation} and showing that a.s. \[ \eta([0,t]\times{\rm d}x) = \frac14\, \ell^0_{t,x} \, \mathrm{d} x. \] At least formally, the $\delta$-Bessel SPDEs for $\delta <3$ correspond to the unique analytic continuation of the $\delta$-Bessel SPDEs for $\delta \geq 3$. This is justified by considering the corresponding integration by parts formulae on a specific set of test functions, where every term depends in an analytic way on $\delta$, see \eqref{exp_fst_part_ibpf0} below. 
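To make the renormalisation entering the definition of $\mu_\alpha$ concrete, the following purely illustrative Python sketch (it assumes a standard \texttt{numpy}/\texttt{scipy} installation and plays no role in our arguments) evaluates $\langle \mu_{\alpha}, e^{-\lambda \cdot} \rangle$ by numerical quadrature of the renormalised integral above, for non-integer $\alpha$, and compares the result with $\lambda^{-\alpha}$ (see \eqref{laplace0} and Remark \ref{laplace_mu_neg} in Section \ref{sect_prelude} below).
\begin{verbatim}
import numpy as np
from math import floor
from scipy.integrate import quad
from scipy.special import factorial, gamma

def mu_pairing_with_exp(alpha, lam):
    """Evaluate < mu_alpha , exp(-lam .) > for non-integer alpha, using the
    renormalised integral defining mu_alpha (no Taylor term is subtracted
    when alpha > 0, since then floor(-alpha) < 0)."""
    k = floor(-alpha)
    def integrand(a):
        taylor = sum((-lam) ** j * a ** j / factorial(j) for j in range(0, k + 1))
        return (np.exp(-lam * a) - taylor) * a ** (alpha - 1.0) / gamma(alpha)
    value, _ = quad(integrand, 0.0, np.inf)
    return value

for alpha in (0.5, -0.5, -1.5, -2.5):
    print(alpha, mu_pairing_with_exp(alpha, lam=2.0), 2.0 ** (-alpha))
\end{verbatim}
In each case the two printed numbers should agree up to quadrature error; this is the finite-dimensional counterpart of the analytic continuation just described.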
\subsection{Integration by parts formulae for the laws of Bessel bridges} Integration by parts plays a fundamental role in analysis, and most notably in stochastic analysis. For instance, it lies at the core of Malliavin Calculus and the theory of Dirichlet forms, see e.g. \cite{nualart,fukushima2010dirichlet,ma2012introduction}. While it is relatively easy in finite dimension, where the standard rules of calculus apply, obtaining integration by parts formulae (IbPFs for short) for measures on infinite-dimensional spaces can be a difficult task, one of the main reasons being the absence of Lebesgue measure in that context. The most celebrated example is the IbPF associated with Brownian motion, or its corresponding bridge, on the interval $[0,1]$, which reads \[ E \left[\partial_{h} \Phi (B) \right] = - E \left[\langle h'', B \rangle \, \Phi (B) \right], \] for all Fr\'{e}chet differentiable $\Phi : L^{2}(0,1) \to \mathbb{R}$ and all $h \in C^{2}_{c}(0,1)$, where $ \langle \cdot, \cdot \rangle$ denotes the canonical scalar product in $L^2(0,1)$. This formula follows for instance from the quasi-invariance property of the Wiener measure on $[0,1]$ along the Cameron-Martin space, by differentiating at $\varepsilon=0$ the formula \[ E[ \Phi(B+\varepsilon h)] = E\left[\Phi(B)\, \exp\left(-\varepsilon\langle h'',B\rangle -\frac{\varepsilon^2}2\|h'\|_{L^2(0,1)}^2\right) \right]. \] In \cite{zambotti2002integration}, the second author exploited the relation between the law of the Brownian bridge and the law $P^3$ of the $3$-dimensional Bessel bridge (also known as the normalised Brownian excursion) on $[0,1]$ to deduce an IbPF for the latter measure; other proofs were given later, see e.g. \cite{FuIs,zambotti2017random}. In \cite{zambotti2003integration}, exploiting an absolute continuity relation with respect to the $3$-dimensional Bessel bridge, the second author obtained IbPFs for the law $P^\delta$ of Bessel bridges of dimension $\delta>3$. Put in a nutshell, these formulae read as follows: \begin{equation} \label{ibpf_larger_three} E^{\delta} \left[\partial_{h} \Phi (X) \right] + E^{\delta} \left[\langle h'', X \rangle \, \Phi (X) \right] = - \kappa(\delta) \, E^{\delta} \left[\langle h, X^{-3} \rangle \, \Phi (X) \right] \end{equation} for all $\delta >3$, and \begin{equation}\label{ibpf_three} \begin{split} & E^{3} \left[\partial_{h} \Phi (X) \right] + E^{3} \left[\langle h'', X \rangle \, \Phi (X) \right] = \\ & = - \int_{0}^{1} \, \mathrm{d} r \, \frac{h_r}{\sqrt{2\pi r^3(1-r)^3}} \, E^{3} \left[\Phi (X) \, | \, X_{r}=0\right], \end{split} \end{equation} where $\Phi$ and $h$ are as above. Here, for all $\delta >0$, $E^{\delta}$ denotes the expectation with respect to the law $P^{\delta}$, on the space of continuous real-valued functions on $[0,1]$, of the $\delta$-dimensional Bessel bridge from $0$ to $0$ over the interval $[0,1]$, and $\kappa(\delta)$ is defined in \eqref{kappadelta}. Note that while $\kappa(\delta) > 0$ for $\delta > 3$, $\kappa$ vanishes at $\delta=3$, the dimension corresponding to the Brownian excursion. At the same time, the quantity $\langle |h|, X^{-3} \rangle $ is integrable with respect to $P^{\delta}$ for $\delta > 3$, but is non-integrable with respect to $P^{3}$ for $h$ that is not identically $0$. 
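Formulae of this type lend themselves to simple Monte Carlo sanity checks. As a purely illustrative aside (not used in the sequel, and assuming only \texttt{numpy}), the following Python sketch tests \eqref{ibpf_larger_three} for the integer dimension $\delta=7$, using the classical fact that, for integer $\delta$, the $\delta$-dimensional Bessel bridge from $0$ to $0$ is the Euclidean norm of $\delta$ independent standard Brownian bridges. We take $\Phi(X)=\exp(-\int_0^1 X_t^2\,\mathrm{d} t)$, so that $\partial_h \Phi(X)=-2\big(\int_0^1 h_t X_t\,\mathrm{d} t\big)\Phi(X)$, and $h_r=\sin^3(\pi r)$, which vanishes at the endpoints together with its first two derivatives and serves here as a convenient stand-in for a function in $C^2_c(0,1)$; the relatively large value $\delta=7$ is chosen to keep the variance of the term $\langle h, X^{-3}\rangle$ under control.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
delta, n, n_mc = 7, 400, 10000     # dimension, time grid size, Monte Carlo sample size
kappa = (delta - 3) * (delta - 1) / 4.0
dt = 1.0 / n
t = np.linspace(0.0, 1.0, n + 1)
h = np.sin(np.pi * t) ** 3
h2 = 3 * np.pi ** 2 * (2 * np.sin(np.pi * t) * np.cos(np.pi * t) ** 2
                       - np.sin(np.pi * t) ** 3)          # second derivative of h

def integral(f):                   # trapezoidal rule on the grid
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * dt))

lhs, rhs = 0.0, 0.0
for _ in range(n_mc):
    dW = rng.normal(scale=np.sqrt(dt), size=(delta, n))
    W = np.concatenate([np.zeros((delta, 1)), np.cumsum(dW, axis=1)], axis=1)
    B = W - t * W[:, -1:]          # delta independent Brownian bridges on [0,1]
    X = np.sqrt(np.sum(B ** 2, axis=0))       # Bessel bridge of dimension delta
    Phi = np.exp(-integral(X ** 2))           # Phi(X) = exp(-<m, X^2>), m = Lebesgue
    dPhi = -2.0 * integral(h * X) * Phi       # derivative of Phi(X) in the direction h
    Xm3 = np.zeros_like(X)
    Xm3[1:-1] = X[1:-1] ** (-3.0)             # h vanishes at the endpoints, where X = 0
    lhs += dPhi + integral(h2 * X) * Phi
    rhs += -kappa * integral(h * Xm3) * Phi
print(lhs / n_mc, rhs / n_mc)      # the two averages should agree up to Monte Carlo error
\end{verbatim}
Note that the right-hand side of such a check involves the constant $\kappa(\delta)$, which vanishes as $\delta\downarrow 3$.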
Thus, when $\delta \searrow 3$, the right-hand side of \eqref{ibpf_larger_three} is an indeterminate form which turns out to converge to the non-trivial quantity on the right-hand side of \eqref{ibpf_three}; this can be seen, at least for Fr\'echet differentiable $\Phi$, by comparing the left-hand sides of the two formulae and by using continuity of the map $\delta\mapsto P^\delta$. Formula \eqref{ibpf_three} also possesses a geometric-measure theory interpretation as a Gauss-Green formula in an infinite-dimensional space, the second term on the right-hand side corresponding to a boundary term (see Chapter 6.1.2 in \cite{zambotti2017random}). What can we say for Bessel bridges of dimension $\delta <3$? In such a regime, the techniques used in \cite{zambotti2003integration}, based on absolute continuity relations with the Brownian excursion as well as monotonicity arguments, break down. Indeed, when $\delta \in (1,3)$, $\kappa(\delta) <0$, so the required monotonicity properties no longer hold, while for $\delta <2$ the absolute continuity relations are no longer available. Hence, the problem of finding IbPFs for the measures $P^{\delta}$, when $\delta <3$, has remained open until now, except for the value $\delta=1$, corresponding to the reflected Brownian bridge, for which some (strictly weaker) IbPFs have been obtained, see \cite{zambotti2005integration} for the case of the reflected Brownian motion, \cite{grothaus2016integration} for the case of a genuine bridge, and Remark \ref{weaker} below for a discussion. \subsection{Outline of the results} Here and below, let $C([0,1]) := C([0,1], \mathbb{R})$ be the space of continuous real-valued functions on $[0,1]$. In this article, we obtain IbPFs for the laws $P^{\delta}$ of Bessel bridges of dimension $\delta \in (0,3)$ from $0$ to $0$ over $[0,1]$. Our formulae hold for a large class of functionals $\Phi: C([0,1]) \to \mathbb{R}$. More precisely, we consider linear combinations of functionals of the form \begin{equation}\label{suitable} \Phi(\zeta) = \exp(- \langle m, \zeta^{2} \rangle), \quad \zeta \in C([0,1]), \end{equation} with $m$ a finite Borel measure on $[0,1]$, and where $\langle m, \zeta^{2} \rangle := \int_0^1 \zeta_t^{2} \, m({\rm d} t)$. We prove that these functionals satisfy IbPFs for the laws $P^{\delta}$, for all $\delta >0$. Our method is based on deriving semi-explicit expressions for quantities of the form \[ E^{\delta} \left[\Phi (X) \right] \qquad \text{and} \qquad E^{\delta} \left[\Phi (X) \, | \, X_{r} = a\right], \quad a \geq 0, \, r \in (0,1), \] using solutions to some second-order differential equations, and exploiting the nice computations done in Chapter XI of \cite{revuz2013continuous}. The fundamental property enabling these computations is the additivity property of the squared Bessel processes, which in particular implies that both of the quantities above factorize in a very specific way, see the expression \eqref{bridge2} below. As a consequence, for functionals as above, all the IbPFs for $P^{\delta}$, $\delta \geq 3$, are just multiples of a single differential relation which does not depend on $\delta$ (see Lemma \ref{thm} below), the dependence on $\delta$ entering only through the multiplying constant which involves some $\Gamma$ values. When $\delta \geq 3$, expressing these $\Gamma$ values as integrals, and performing a change of variable, we retrieve the formulae already obtained in \cite{zambotti2002integration} and \cite{zambotti2003integration}.
On the other hand, when $\delta <3$, one of the $\Gamma$ values appearing is negative, so we cannot express it using the usual integral formula, but must rather use \textit{renormalised} integrals. As a result, when $\delta \in (1,3)$, the IbPFs can be written \begin{equation} \label{new_ibpf_13} \begin{split} & E^{\delta} (\partial_{h} \Phi (X) ) + E^{\delta} (\langle h '' , X \rangle \, \Phi(X) ) = \\ &=-\kappa(\delta)\int_{0}^{1} h_{r} \int_0^\infty a^{\delta-4} \Big[ \Sigma^\delta_r \left(\Phi (X) \, | \, a\right) - \Sigma^\delta_r \left(\Phi (X) \, | \, 0\right) \Big] \, \mathrm{d} a \, \mathrm{d} r, \end{split} \end{equation} where, for all $a \geq 0$, $\Sigma^\delta_r \left({\rm d}X \, | \, a\right)$ is a measure on $C([0,1])$ proportional to the law of the Bessel bridge conditioned to hit $a$ at $r$, see \eqref{Sigma}. Thus, the left-hand side is the same as for \eqref{ibpf_larger_three} and \eqref{ibpf_three}, but the right-hand side now contains Taylor remainders at order $0$ of the functions $a \mapsto \Sigma^\delta_r \left(\Phi (X) \, | \, a\right)$. When $\delta \in (0,1)$, this renormalisation phenomenon becomes even more acute. Indeed, in that case, the IbPFs are similar to \eqref{new_ibpf_13}, but the right-hand side is replaced by \begin{equation} \label{new_ibpf_01} -\kappa(\delta)\int_{0}^{1} h_{r} \int_0^\infty a^{\delta-4} \left[\varphi(a)-\varphi(0)-\frac{a^2}2\varphi''(0) \right] \, \mathrm{d} a \, \mathrm{d} r, \end{equation} where $\varphi(a):=\Sigma^\delta_r \left(\Phi (X) \, | \, a\right)$, and where Taylor remainders at order 2 now appear. An important remark is that the terms of order 1 vanish \[ \varphi'(0)=\left. \frac{\rm d}{{\rm d} a} \Sigma^\delta_r \left(\Phi (X) \, | \, a \right) \, \right|_{a=0} = 0 , \quad r \in (0,1), \] so they do not appear in the above Taylor remainders. Finally, in the critical case $\delta=1$, we obtain the formula \begin{equation} \label{new_ibpf_1} \begin{split} E^{1} (\partial_{h} \Phi (X) ) + E^{1} (\langle h '' , X \rangle \, \Phi(X) ) = \frac{1}{4} \int_{0}^{1} h_{r} \, \frac{{\rm d}^{2}}{{\rm d} a^{2}} \, \Sigma^1_r (\Phi(X) \, | \, a) \, \biggr\rvert_{a=0} \, \mathrm{d} r. \end{split} \end{equation} The IbPFs are stated in Theorem \ref{statement_ibpf} below. One important, expected feature is the transition that occurs at the critical values $\delta=3$ and $\delta=1$. Another important but less expected feature is the absence of transition at $\delta =2$, as well as the related remarkable fact that the functions $a \mapsto \Sigma^\delta_r \left(\Phi (X) \, | \, a\right)$ are, for all $r \in (0,1)$, smooth functions of $a^{2}$, so that all their odd-order derivatives vanish at $0$. This is the reason why only derivatives of even order appear in our formulae. An objection to this observation might be that the class of functionals \eqref{suitable} is too restrictive. However, in a forthcoming article, we will show that the IbPFs obtained in the present article still hold for a class of very general functionals. In particular, vanishing of first-order derivatives at $a=0$ can be established for $ a \mapsto \Sigma^\delta_r \left(\Phi (X) \, | \, a\right)$ for any $\Phi \in C^{1}_{b}(L^{2}(0,1))$, which confirms the absence of transition at $\delta=2$ observed in this article.
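The renormalisation above is also easy to visualise numerically. By Lemma \ref{lap_cond_bridge} below, for $\Phi$ as in \eqref{suitable} the function $a \mapsto \Sigma^\delta_r \left(\Phi (X) \, | \, a\right)$ is, up to a multiplicative constant, a Gaussian function of $a$. The following purely illustrative Python sketch (assuming a standard \texttt{numpy}/\texttt{scipy} installation, and playing no role in the proofs) therefore takes $\varphi(a)=e^{-a^2/2}$ as a stand-in and computes the renormalised integrals appearing in \eqref{new_ibpf_13} and \eqref{new_ibpf_01}, comparing them with the value $2^{\frac{\delta-5}{2}}\,\Gamma\big(\frac{\delta-3}{2}\big)$ predicted by the change-of-variable identity \eqref{forallx'} below (with $C=1$).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def renormalised_integral(delta):
    """int_0^infty a^(delta-4) * [Taylor remainder of exp(-a^2/2) at 0] da,
    the remainder being taken at order 0 for 1 < delta < 3 and at order 2
    for 0 < delta < 1 (phi(0) = 1, phi'(0) = 0, phi''(0) = -1)."""
    phi = lambda a: np.exp(-0.5 * a * a)
    if 1.0 < delta < 3.0:
        remainder = lambda a: phi(a) - 1.0
    elif 0.0 < delta < 1.0:
        remainder = lambda a: phi(a) - 1.0 + 0.5 * a * a
    else:
        raise ValueError("delta should lie in (0,1) or (1,3)")
    value, _ = quad(lambda a: a ** (delta - 4.0) * remainder(a), 0.0, np.inf)
    return value

for delta in (2.5, 1.5, 0.5):
    x = 0.5 * (delta - 3.0)
    print(delta, renormalised_integral(delta), 2.0 ** (x - 1.0) * gamma(x))
\end{verbatim}
Without the subtracted Taylor terms these integrals would diverge at $a=0$; with them, the two printed values should coincide up to quadrature error.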
Finally, note that all the IbPFs above can be written in a unified way, by re-expressing the last term as \[-\frac{\Gamma (\delta)}{4(\delta-2)} \int_0^1 \langle \mu_{\delta-3} ,\Sigma^\delta_r (\Phi(X) \, | \, \cdot\,) \rangle \] in analogy with \eqref{unified_expr_nonlinearity}. The latter formula bears out the idea that the new IbPFs for Bessel bridges of dimension $\delta<3$ are given by the unique analytic continuation of those for $\delta \geq 3$, at least for suitable test functionals $\Phi$ as in \eqref{suitable}. The IbPFs \eqref{new_ibpf_13}, \eqref{new_ibpf_01} and \eqref{new_ibpf_1} above suggest that the gradient dynamics associated with the laws of Bessel bridges of dimension $\delta <3$ should be given by the SPDEs \eqref{1<spde<3}, \eqref{0<spde<1} and \eqref{spde=1} respectively. Note that, in the case $\delta \geq 3$, the SPDEs had been solved in \cite{nualart1992white,zambotti2003integration} using pathwise techniques, and many fine properties of the solution had been studied, such as their hitting properties (see \cite{dalang2006hitting}), or the existence of occupation densities (see \cite{zambotti2004occupation}). By contrast, in the case $\delta < 3$, the SPDEs \eqref{1<spde<3}, \eqref{spde=1} and \eqref{0<spde<1} do not yet seem to possess any strong notion of solution, and essentially lie outside the scope of any existing theory of SPDEs. However, in this article, for $\delta=1$, using Dirichlet form techniques, and thanks to the IbPF \eqref{new_ibpf_1} for the reflecting Brownian bridge, we are able to construct a weak version of the associated SPDE in the stationary regime. Thus, the dynamics for $\delta=1$ can be described by \eqref{formal1}, which is a weaker version of \eqref{spde=1}. We also prove (see Theorem \ref{dist_res} below) that the corresponding Markov process does not coincide with the process associated with the absolute value of the solution to the stochastic heat equation. A similar construction can be implemented in the case $\delta=2$: this will be done in the forthcoming article \cite{henri2018bessel}. The approach using Dirichlet forms was already used in Robert Vo{\ss}hall's thesis \cite{vosshallthesis}, which provided a construction of the Markov process for $\delta=1$, but not the SPDE. The article is organized as follows: in Section \ref{sect_prelude} we address a toy-model consisting of a family of measures on $\mathbb{R}_{+}$, hence much simpler than the laws of Bessel bridges, but displaying a similar renormalisation phenomenon at the level of the IbPFs. In Section \ref{sect_sqred_bessel} we recall and prove some useful facts on the laws of squared Bessel processes, Bessel processes, and their bridges. In Section \ref{sect_ibpf_exp_func}, we state and prove the IbPFs for the laws of Bessel bridges. The dynamics associated with the law of a reflected Brownian bridge is constructed and studied in Section \ref{sect_Dirichlet}. Finally, in Section \ref{sect_conj_dynamics}, we justify our conjectures \eqref{1<spde<3} \eqref{spde=1} and \eqref{0<spde<1} for the $\delta$-Bessel SPDEs for $\delta<3$, and we formulate some additional related conjectures. {\bf Acknowledgements.} The arguments used in Prop \ref{closability} below to show quasi-regularity of the form associated with the law of a reflected Brownian bridge were communicated to us by Rongchan Zhu and Xiangchan Zhu, whom we warmly thank. 
The first author is very grateful to Jean-Dominique Deuschel, Tal Orenshtein and Nicolas Perkowski for their kind invitation to TU Berlin, and for very interesting discussions. We also thank Giuseppe Da Prato for very useful discussion and for his kindness and patience in answering our questions. The authors would finally like to thank the Isaac Newton Institute for Mathematical Sciences for hospitality and support during the programme "Scaling limits, rough paths, quantum field theory" when work on this paper was undertaken: this work was supported by EPSRC grant numbers EP/K032208/1 and EP/R014604/1. The second author gratefully acknowledges support by the Institut Universitaire de France and the project of the Agence Nationale de la Recherche ANR-15-CE40-0020-01 grant LSD. \section{A prelude} \label{sect_prelude} In this section we consider a toy model consisting of a family of Schwartz distributions on $\mathbb{R}_{+}$ satisfying nice integration by parts formulae. The content of this section is classical (see e.g. Section 3.5 of \cite{gelfand1964generalized}), but it will serve as a useful finite-dimensional example for the theory to come. For $\alpha \geq 0$, we set \[ \mu_{\alpha}({\rm d}x) = \frac{x^{\alpha - 1}}{\Gamma(\alpha)} \, \mathrm{d} x, \quad \alpha>0, \qquad \mu_{0} = \delta_{0}, \] where $\delta_{0}$ denotes the Dirac measure at $0$. A simple change of variable yields the Laplace transform of the measures $\mu_{\alpha}, \, \alpha \geq 0$ \begin{equation} \label{laplace0} \int_0^{+\infty} \exp ( - \lambda x) \, \mu_{\alpha}({\rm d}x) = \lambda^{-\alpha}, \qquad \lambda>0, \ \alpha \geq 0. \end{equation} It turns out that the family of measures $(\mu_{\alpha})_{\alpha \geq 0}$ can be extended in a natural way to a family of \textit{distributions} $(\mu_{\alpha})_{\alpha \in \mathbb{R}}$ . We first define the appropriate space of test functions on $[0,\infty)$. \begin{df} Let $S([0,\infty))$ be the space of $C^\infty$ functions $\varphi: [0,\infty) \to \mathbb{R}$ such that, for all $k, l \geq 0$, there exists $C_{k,\ell} \geq0$ such that \[ | \varphi^{(k)} (x) | \, x^{\ell} \leq C_{k,\ell}, \qquad \forall x \geq 0. \] \end{df} For $\alpha < 0$, we will define $\mu_{\alpha}$ as a distribution, using a \textit{renormalisation} procedure based on Taylor polynomials. To do so, for any smooth function $\varphi: \mathbb{R}_{+} \to \mathbb{R}$, for all $n \in \mathbb{Z}$, and all $x \geq 0$, we set \begin{equation}\label{eq:taylor} \mathcal{T}^{\,n}_{x} \varphi := \varphi(x) - \sum_{0\leq j\leq n} \frac{x^{j}}{j!} \, \varphi^{(j)}(0). \end{equation} In words, if $n \geq 0$ then $\mathcal{T}^{\,n}_{x} \varphi$ is the Taylor remainder based at $0$, of order $n+1$, of the function $\varphi$, evaluated at $x$; if $n<0$ then $\mathcal{T}^{\,n}_{x} \varphi$ is simply the value of $\varphi$ at $x$. \begin{df} \label{def_mu_alpha} For $\alpha < 0$, we define the distribution $\mu_{\alpha}$ as follows \begin{itemize} \item if $ \alpha = -k$ with $k \in \mathbb{N}$, then \begin{equation} \label{mu_neg_int} \langle \mu_{\alpha}, \varphi \rangle := (-1)^{k} \varphi^{(k)}(0), \qquad \forall \, \varphi \in \mathcal{S}([0,\infty)) \end{equation} \item if $ - k - 1 < \alpha < -k$ with $k \in \mathbb{N}$, then \begin{equation} \label{mu_neg_delta} \langle \mu_{\alpha} , \varphi \rangle := \int_{0}^{+ \infty} \mathcal{T}^{\,k}_{x} \varphi \, \frac{x^{\alpha -1}}{\Gamma(\alpha)} \, \mathrm{d} x, \qquad \forall \, \varphi \in \mathcal{S}([0,\infty)). 
\end{equation} \end{itemize} \end{df} Note that formula \eqref{mu_neg_delta} defines a bona fide distribution on $\mathcal{S}([0,\infty))$. Indeed, by Taylor's theorem, the integrand is of order $x^{k+\alpha}$ near $0$, therefore integrable there, while it is dominated by $x^{k+\alpha-1}$ near $+\infty$, so is integrable at infinity as well. We note that $\mu_\alpha$ is equal to the generalized function $\frac{x_+^{\alpha-1}}{\Gamma(\alpha)}$ of Section 3.5 of \cite{gelfand1964generalized}. \begin{rk} Note that for all $\alpha >0$ and every Borel function $\varphi: \mathbb{R}_+ \to \mathbb{R}_+$, the integral $\int_0^\infty \varphi(x) \, \mu_{\alpha}(\mathrm{d} x)$ coincides with $\Gamma(\alpha)^{-1} {\mathcal M}\varphi(\alpha)$, where ${\mathcal M}\varphi(\alpha)$ is the value of the Mellin transform of the function $\varphi$ computed at $\alpha$. Definition \ref{def_mu_alpha} thus provides an extension of the Mellin transform of a function $\varphi \in \mathcal{S}([0,\infty))$ to the whole real line. In particular, equality \eqref{mu_neg_int} is natural in view of Ramanujan's Master Theorem, which allows one to see the successive derivatives at $0$ of an analytic function as the values, for non-positive integers, of the analytic extension of its Mellin transform. We refer to \cite{amdeberhan2012ramanujan} for more details on this theorem. We also stress that the renormalisation procedure used in equation \eqref{mu_neg_delta} to define $\mu_{\alpha}$ for $\alpha <0$ is very natural, and can also be used to extend the domain of validity of Ramanujan's Master Theorem, see Theorem 8.1 in \cite{amdeberhan2012ramanujan}. \end{rk} \begin{rk} For $k \in \mathbb{N}$ and $\alpha$ such that $-k-1 < \alpha < -k$, and for all $\varphi \in \mathcal{S}([0,\infty))$, we obtain after $k+1$ successive integrations by parts the equality: \[ \langle \mu_{\alpha} , \varphi \rangle = (-1)^{k+1} \int_{0}^{+ \infty} \varphi^{(k+1)}(x) \, \mu_{\alpha+k+1} ({\rm d} x), \] which can be interpreted as a variant of the Caputo differential, at order $-\alpha$, of $\varphi$, see e.g. (1.17) in \cite{gorenflo2008fractional}. \end{rk} We recall the following basic fact, which is easily proven (see e.g. (5) in Section 3.5 of \cite{gelfand1964generalized}). It can be seen as a toy version of the integration by parts formulae of Theorem \ref{statement_ibpf} below. \begin{prop} \label{thm_ibpf_mu} For all $\alpha \in \mathbb{R}$ and $\varphi \in \mathcal{S}([0,\infty))$ \[ \langle \mu_{\alpha}, \varphi' \rangle = - \langle \mu_{\alpha-1}, \varphi \rangle. \] \end{prop} In particular, for $\alpha\in(0,1)$ we have the measure $\mu_\alpha$ on the left-hand side of the IbPF and the distribution $\mu_{\alpha-1}$ on the right-hand side. \begin{rk} \label{laplace_mu_neg} As a consequence of Proposition \ref{thm_ibpf_mu}, we deduce that the expression \eqref{laplace0} for the Laplace transform of $\mu_{\alpha}$ remains true also for negative $\alpha$. Indeed, for such $\alpha$, picking $k \in \mathbb{N}$ such that $\alpha + k > 0$, we have, for all $\lambda >0$ \[ \begin{split} \langle \mu_{\alpha} , e^{-\lambda \cdot} \rangle &= (-1)^{k} \, \langle \mu_{\alpha+k} , \frac{\, \mathrm{d}^{k}}{\, \mathrm{d} x^{k}} e^{-\lambda \cdot} \rangle = \lambda^{k} \, \langle \mu_{\alpha+k} , e^{-\lambda \cdot} \rangle = \lambda^{k} \, \lambda^{-\alpha-k} = \lambda^{-\alpha}.
\end{split} \] \end{rk} \section{Bessel processes and associated bridges} \label{sect_sqred_bessel} In this section we recall and prove some useful facts about squared Bessel processes, Bessel processes, and their corresponding bridges. We recall that, for all $\alpha\geq 0$, $\theta>0$, $\Gamma(\alpha,\theta)$ denotes the Gamma probability law on $\mathbb{R}_{+}$ \[ \Gamma(\alpha,\theta) ({\rm d}x) = \frac{\theta^{\alpha}}{\Gamma(\alpha)} \,x^{\alpha-1} \,e^{-\theta x} \, \mathbf{1}_{x > 0}\, \mathrm{d} x, \qquad \Gamma(0,\theta):= \delta_{0}. \] \subsection{Squared Bessel processes and Bessel processes} For all $x, \delta \geq 0$, denote by $Q^{\delta}_{x}$ the law, on $C(\mathbb{R}_{+}, \mathbb{R}_{+})$, of the $\delta$-dimensional squared Bessel process started at $x$, namely the unique solution to the SDE \eqref{sqB1} with $Y_0=x$, see Chapter XI of \cite{revuz2013continuous}. We denote by $(X_t)_{t\geq 0}$ the canonical process \[ X_t:C([0,1])\to\mathbb{R}, \qquad X_t(\omega):=\omega_t, \quad \omega\in C([0,1]). \] \begin{df} For any interval $I \subset \mathbb{R}_{+}$, and any two probability laws $\mu, \nu$ on $C(I, \mathbb{R}_{+})$, let $\mu \ast \nu$ denote the convolution of $\mu$ and $\nu$, i.e. the image of $\mu \otimes \nu$ under the addition map: \[ C(I,\mathbb{R}_{+}) \times C(I, \mathbb{R}_{+}) \to C(I, \mathbb{R}_{+}), \quad (x,y) \mapsto x+y. \] \end{df} The family of probability measures $\left(Q^{\delta}_{x}\right)_{\delta, x \geq 0}$ satisfies the following well-known additivity property, first observed by Shiga and Watanabe in \cite{shiga1973bessel}. \begin{prop} \label{levy} For all $x,x', \delta, \delta'\geq 0$, we have the following equality of laws on $C(\mathbb{R}_{+}, \mathbb{R}_{+}) $ \begin{equation} \label{additivity_sqred_bes} Q^{\delta}_{x} \ast Q^{\delta'}_{x'} = Q^{\delta + \delta'}_{x + x'} \end{equation} \end{prop} We recall that squared Bessel processes are homogeneous Markov processes on $\mathbb{R}_{+}$. Exploiting the additivity property \eqref{additivity_sqred_bes}, Revuz and Yor provided, in section XI of \cite{revuz2013continuous}, explicit expressions for their transition densities $\left( q^{\delta}_{t}(x,y) \right)_{t > 0, x,y \geq 0}$. When $\delta >0$, these are given by \begin{equation} \label{density_besq_x_pos} q^{\delta}_{t}(x,y) = \frac{1}{2t} \left( \frac{y}{x} \right)^{\nu/2} \exp\left( - \frac{x+y}{2t} \right) I_{\nu} \left(\frac{\sqrt{xy}}{t} \right),\quad t >0, \ x>0. \end{equation} Here, $\nu := \delta/2 -1>-1$ and $I_{\nu}$ is the modified Bessel function of index $\nu$ \[ I_\nu(z) := \sum_{k=0}^\infty \frac{\left(z/2\right)^{2k + \nu}}{k! \, \Gamma(k + \nu +1)}, \qquad z > 0. \] For $x=0$, we have \begin{equation} \label{density_besq_x_zero} q^{\delta}_{t}(0,y) = (2t)^{-\frac\delta2} \, \Gamma \left( \delta/2 \right)^{-1} y^{\delta/2-1} \exp\left( - \frac{y}{2t} \right),\quad t >0, \end{equation} that is \[q^{\delta}_{t}(0,y) \, \mathrm{d} y = \Gamma \left(\frac{\delta}{2}, \frac{1}{2t} \right) ({\rm d} y). \] We also denote by $P^\delta_x$ the law of the $\delta$-Bessel process, image of $Q^{\delta}_{x^2}$ under the map \begin{equation} \label{sqrt_map} C(\mathbb{R}_{+}, \mathbb{R}_{+})\ni \omega \mapsto \sqrt{\omega} \in C(\mathbb{R}_{+}, \mathbb{R}_{+}) . \end{equation} We shall denote by $\left( p^{\delta}_{t}(a,b) \right)_{t >0, \, a,b \geq 0}$ the transition densities of a $\delta$-Bessel process. 
They are given in terms of the densities of the squared Bessel process by the relation \begin{equation} \label{relation_denisties_bes_besq} \forall t > 0, \quad \forall a, b \geq 0, \quad p^{\delta}_{t}(a,b) = 2 \, b \, q^{\delta}_{t}(a^{2},b^{2}). \end{equation} In section XI of \cite{revuz2013continuous}, Revuz and Yor provided semi-explicit expressions for the Laplace transforms of squared Bessel processes (and also the corresponding bridges). Their proof is based on the fact that, for all $\delta, x \geq 0$, and all finite Borel measure $m$ on $[0,1]$, the measure $\exp \left( - \langle m , X \rangle \right) Q^{\delta}_{x}$ possesses a nice probabilistic interpretation, where we use the notation \[ \langle m , f \rangle := \int_{0}^{1} f(r) \,m({\rm d}r) \] for any Borel function $f : [0,1] \to \mathbb{R}_+$. This remarkable fact is used implicitly in \cite{revuz2013continuous} (see e.g. the proof of Theorem (3.2) of Chap XI.3), where the authors compute the one-dimensional marginal distributions of this measure. By contrast, in the proof of Lemma \ref{lap_cond_bridge} below, we will need to compute higher-dimensional marginals. As a convenient way to perform such a computation, we will show that the measure $\exp \left( - \langle m , X \rangle \right) Q^{\delta}_{x}$ corresponds (up to a normalisation constant) to the image of the measure $Q^{\delta}_{x}$ under a deterministic time change. To prove this fact, we first introduce some notations. Let $m$ be a finite, Borel measure on $[0,1]$. As in Chap. XI of \cite{revuz2013continuous}, we consider the unique solution $\phi:\mathbb{R}_{+}\to\mathbb{R}$ of the following problem \begin{equation} \label{phi} \begin{cases} \phi''({\rm d} r) = 2 \mathbf{1}_{[0,1]}(r) \, \phi_{r} \, m({\rm d} r) \\ \phi_0=1, \ \phi > 0, \ \phi ' \leq 0 \ \text{on} \ \mathbb{R}_{+}, \end{cases} \end{equation} where the first is an equality of measures (see Appendix 8 of \cite{revuz2013continuous} for existence and uniqueness of solutions to this problem). Note that the above function $\phi$ coincides with the function $\phi_{\mu}$ of Chap XI.1 of \cite{revuz2013continuous}, with $\mu := 2 \mathbf{1}_{[0,1]} \, m$. \begin{lm} \label{measure_change} Let $m$ be a finite, Borel measure on $[0,1]$, and let $\phi$ be the unique solution of \eqref{phi}. Then, for all $x , \delta \geq 0$, the measure $R^{\delta}_{x}$ on $C([0,1])$ defined by \begin{equation} \label{measure_r_delta} R^{\delta}_{x} := \exp \left(- \frac{x}{2} \phi'_0 \right) \phi_1^{-\frac\delta2} \ e^{-\langle m, X\rangle} \ Q^{\delta}_{x} \end{equation} is a probability measure, equal to the law of the process \[ \left( \phi_t^{2} \ Y_{\varrho_t} \right)_{t \in [0,1]}, \] where $Y \overset{(d)}{=} Q^{\delta}_{x}$ and $\varrho$ is the deterministic time change \begin{equation} \label{def_var_rho} \varrho _t = \int_{0}^{t} \phi_u^{-2} \, \mathrm{d} u, \quad t \geq 0. \end{equation} \end{lm} \begin{proof} We proceed as in the proofs of Theorem (1.7) and (3.2) in Chapter XI of \cite{revuz2013continuous}. Let $x, \delta \geq 0$. Under $Q^{\delta}_{x}$, $M_{t} := X_{t} - \delta t$ is a local martingale, so we can define an exponential local martingale by setting \[ Z_{t} = \mathscr{E} \left( \frac{1}{2} \int_{0}^{\cdot} \frac{\phi'_s}{\phi_s} \, \mathrm{d} M_{s} \right)_{t}. 
\] As established in the proof of Theorem (1.7) of \cite{revuz2013continuous}, we have \[ \begin{split} Z_{t} &= \exp \left( \frac{1}{2} \left( \frac{\phi'_t}{\phi_t} X_{t} - \phi'_0 x - \delta \ln \phi_t \right) - \int_{0}^{t} X_{s} \,m({\rm d}s) \right) \\ &= \exp \left(- \frac{x}{2} \phi'_0 \right) \phi_t^{-\frac\delta2} \exp \left( \frac{1}{2} \frac{\phi'_t}{\phi_t} X_{t} - \int_0^t X_s \,m({\rm d}s) \right), \end{split} \] recalling that the measure $\mu$ considered in \cite{revuz2013continuous} is given in our case by $2 \, \mathbf{1}_{[0,1]} \, m$. In particular, we deduce that the measure $R^{\delta}_{x}$ defined by \eqref{measure_r_delta} coincides with $Z_{1} Q^{\delta}_{x}$ (note that $\phi'_1=0$ as a consequence of \eqref{phi}). Moreover, by the above expression, $(Z_{t})_{t\in[0,1]}$ is uniformly bounded by $\exp \left(- \frac{x}{2} \phi'_0 \right) \phi_1^{-\frac\delta2}$, so it is a martingale on $[0,1]$. Hence, $R^{\delta}_{x}$ defines a probability measure. It remains to give a description of $R^{\delta}_{x}$. By Girsanov's theorem, under $R^{1}_{x}$, $\left( X_{t} \right)_{t \in [0,1]}$ solves the following SDE on $[0,1]$ \begin{equation} \label{sde_h_square} X_{t} = x + 2 \int_{0}^{t} \sqrt{X_{s}} \, \mathrm{d} B_{s} + 2\int_{0}^{t} \frac{\phi'_s}{\phi_s} \,X_{s} \, \mathrm{d} s + t. \end{equation} But a weak solution to this SDE is provided by $(H_{t}^{2})_{t \in [0,1]}$, where \[ H_{t} := \left(\sqrt{x} + \int_{0}^{t} \phi_s^{-1} \, \mathrm{d} W_{s} \right) \phi_t , \] where $W$ is a standard Brownian motion. By pathwise uniqueness, and therefore uniqueness in law, of solutions to equation \eqref{sde_h_square}, see \cite[Theorem IX.3.5]{revuz2013continuous}, we deduce that $X$ is equal in law to the process $(H_{t}^{2})_{t \in [0,1]}$. On the other hand, by L\'{e}vy's characterization theorem \cite[IV.3.6]{revuz2013continuous}, we have \[ (H_{t})_{t \in [0,1]} \overset{(d)}{=} \left( \phi_t \,\gamma_{\varrho_t} \right)_{t \in [0,1]},\] where $\gamma$ is a Brownian motion started at $\sqrt{x}$. Hence we deduce that \[ (H_{t}^{2})_{t \in [0,1]} \overset{(d)}{=} \left( \phi_t^{2} \,Y_{\varrho_t} \right)_{t \in [0,1]}, \] where $Y \overset{(d)}{=} Q^{1}_{x}$. Therefore, under $R^{1}_{x}$, we have \[ X \overset{(d)}{=} \left( \phi_t^{2} \,Y_{\varrho_t} \right)_{t \in [0,1]}.\] The claim is thus proven for $\delta=1$ and for any $x \geq 0$. Now, by the additivity property \eqref{additivity_sqred_bes} satisfied by $\left(Q^{\delta}_{x} \right)_{\delta,x \geq 0}$, for every finite Borel measure $\nu$ on $[0,1]$ there exist $A, B>0$ such that, for all $x, \delta \geq 0$, we have \[ Q^{\delta}_{x} \left[ \exp \left(- \int_{0}^{1} \phi_t^{2} \,X_{\varrho_t} \,\nu ({\rm d}t) \right) \right] = A^{x} B^{\delta}, \] which can be proved exactly as Corollary 1.3 in Chapter XI of \cite{revuz2013continuous}. Note now that the family of probability laws $\left(R^{\delta}_{x} \right)_{\delta,x \geq 0}$ satisfies the same additivity property \[ \forall \ \delta, \delta ', x , x' \geq 0, \quad R^{\delta}_{x} \ast R^{\delta'}_{x'} = R^{\delta + \delta'}_{x+x'}. \] Hence, for $\nu$ as above, there also exist $\tilde{A}, \tilde{B} >0$ such that, for all $x, \delta \geq 0$: \[ R^{\delta}_{x} \left[ \exp \left(-\int_{0}^{1} X_{t} \,\nu ({\rm d}t) \right) \right] = {\tilde{A}}^{x} {\tilde{B}}^{\delta}.
\] By the previous point, evaluating at $\delta=1$, we obtain \[ \forall x \geq 0, \quad A^{x} B = {\tilde{A}}^{x} \tilde{B}.\] Hence $A=\tilde{A}$ and $B = \tilde{B}$, whence we deduce that, for all $\delta, x \geq 0$ \[Q^{\delta}_{x} \left[ \exp \left(- \int_{0}^{1} \phi_t^{2} \,X_{\varrho_t} \,\nu ({\rm d}t) \right) \right] = R^{\delta}_{x} \left[ \exp \left(-\int_{0}^{1} X_{t} \,\nu ({\rm d}t) \right) \right]. \] Since this holds for any finite measure $\nu$ on $[0,1]$, by injectivity of the Laplace transform, the claimed equality in law holds for all $\delta, x \geq 0$. \end{proof} \subsection{Squared Bessel bridges and Bessel bridges} For all $\delta>0$ and $x, y \geq 0$, we denote by $Q^{\delta}_{x,y}$ the law, on $C([0,1])$, of the $\delta$-dimensional squared Bessel bridge from $x$ to $y$ over the interval $[0,1]$. In other words, $Q^{\delta}_{x,y}$ is the law of a $\delta$-dimensional squared Bessel process started at $x$ and conditioned to hit $y$ at time $1$. A rigorous construction of these probability laws is provided in Chap. XI.3 of \cite{revuz2013continuous} (see also \cite{pitman1982decomposition} for a discussion of the particular case $\delta=y=0$). In the sequel we shall chiefly consider the case $x=y=0$. We recall that if $X \overset{(d)}{=} Q^{\delta}_{0,0}$, then, for all $r \in (0,1)$, the distribution of the random variable $X_{r}$ is given by $\Gamma(\frac{\delta}{2}, \frac{1}{2r(1-r)})$, so it admits the density $q^{\delta}_{r}$ given by: \begin{equation} \label{one_pt_density_sqred_bridge_00} q^{\delta}_{r}(z) := \frac{z^{\delta/2-1}}{(2r(1-r))^{\frac\delta2} \Gamma(\delta/2)} \exp \left(- \frac{z}{2r(1-r)} \right), \quad z \geq 0, \end{equation} see Chap. XI.3 of \cite{revuz2013continuous}. In the same way as one constructs the laws of squared Bessel bridges $Q^{\delta}_{x,y}$ for $\delta>0$ and $x , y \geq 0$, one can also construct the laws of Bessel bridges. In the following, for any $\delta>0$ and $a, b \geq 0$, we shall denote by $P^{\delta}_{a,b}$ the law, on $C([0,1])$, of the $\delta$-dimensional Bessel bridge from $a$ to $b$ over the time interval $[0,1]$ (that is, the law of a $\delta$-dimensional Bessel process started at $a$ and conditioned to hit $b$ at time $1$). We shall denote by $E^{\delta}_{a,b}$ the expectation operator for $P^{\delta}_{a,b}$. Moreover, when $a=b=0$, we shall drop the subscripts and use the compact notations $P^\delta$ and $E^\delta$. Note that, for all $a,b \geq 0$, $P^{\delta}_{a,b}$ is the image of $Q^{\delta}_{a^{2},b^{2}}$ under the map $\omega\mapsto\sqrt{\omega}$. In particular, under the measure $P^\delta$, for all $r \in (0,1)$, $X_{r}$ admits the density $p^{\delta}_{r}$ on $\mathbb{R}_{+}$, where by \eqref{one_pt_density_sqred_bridge_00} \begin{equation} \label{one_pt_density_bridge_00} p^{\delta}_{r}(a) = 2 a \, q^{\delta}_{r}(a^{2})= \frac{a^{\delta-1}}{2^{\frac\delta2-1}\,\Gamma(\frac{\delta}{2})(r(1-r))^{\delta/2}}\, \exp \left(- \frac{a^{2}}{2r(1-r)} \right), \quad a \geq 0 . \end{equation} \subsection{Pinned bridges} Let $\delta>0$. For all $x \geq 0$ and $r \in (0,1)$, we denote by $Q^{\delta}_{0,0} [\, \cdot \, | \, X_{r} = x]$ the law, on $C([0,1])$, of a $\delta$-dimensional squared Bessel bridge between $0$ and $0$, pinned at $x$ at time $r$ (that is, conditioned to hit $x$ at time $r$). Such a probability law can be constructed using the same procedure as for the construction of squared Bessel bridges.
One similarly defines, for all $a \geq 0$ and $r \in (0,1)$, the law $P^{\delta} [\ \cdot \ \, | \, X_{r} = a]$ of a $\delta$-dimensional Bessel bridge between $0$ and $0$ pinned at $a$ at time $r$. Note that the latter probability measure is the image of $Q^{\delta}_{0,0} [ \ \cdot \ \, | \, X_{r} = a^{2}]$ under the map \eqref{sqrt_map}. With these notations at hand, we now define a family of measures which will play an important role in the IbPF for Bessel bridges. Heuristically, they should be related to the local times of the solution $(u(t,x))_{t\geq 0, \, x \in [0,1]}$ to an SPDE having the law of a Bessel bridge as reversible measure. \begin{df} For all $a \geq 0$ and $r\in(0,1)$, we set \begin{equation}\label{Sigma} \Sigma^\delta_r({\rm d}X \,|\, a) := \frac{p^{\delta}_{r}(a)}{a^{\delta-1}} \, P^{\delta} [ {\rm d} X \,| \, X_{r} = a], \end{equation} where $p^{\delta}_{r}$ is the probability density function of $X_{r}$ under $P^{\delta}:=P^{\delta}_{0,0}$, see \eqref{one_pt_density_bridge_00}. \end{df} The measure $\Sigma^\delta_r(\,\cdot \,|\, a)$ is meant to be the \textit{Revuz measure} of the \textit{additive functional} corresponding to the diffusion local time of $(u(t,r))_{t\geq 0}$ at level $a \geq 0$ (see \cite[Chap. V]{fukushima2010dirichlet} and \cite[Chap. 6]{ma2012introduction} for this terminology). \begin{rk} Note that, for all $r \in (0,1)$, by \eqref{one_pt_density_bridge_00}, we have \[ \frac{p^{\delta}_{r}(a)}{a^{\delta-1}} = \frac1{2^{\frac\delta2-1}\,\Gamma(\frac{\delta}{2})(r(1-r))^{\delta/2}}\, \exp \left(- \frac{a^{2}}{2r(1-r)} \right), \quad a > 0, \] and the right-hand side is well-defined also for $a=0$. It is this quantity that we consider in equality \eqref{Sigma} above. \end{rk} To keep the formulae concise, for all $r \in (0,1)$ and $a \geq 0$, and all Borel function $\Phi : C([0,1]) \to \mathbb{R}_+$, we shall write with a slight abuse of language \[ \Sigma^\delta_r(\Phi(X) \,|\, a) := \int \Phi(X) \ \Sigma^\delta_r({\rm d}X \,|\, a). \] In the sequel we will have to compute quantities of the form \[ \Sigma^\delta_r\left(\exp(- \langle m, X^2 \rangle) \,|\, a\right) \] for $m$ a finite Borel measure on $[0,1]$. In that perspective, we introduce some further notations. Given such a $m$, following the notation used in \cite{pitman1982decomposition} (see also Exercise (1.34), Chap. XI, of \cite{revuz2013continuous}), we denote by $\psi$ the function on $[0,1]$ given by \begin{equation} \label{psi} \psi_r := \phi_r \int_{0}^{r} \phi_u^{-2} \, \mathrm{d} u = \phi_r \varrho_r, \qquad r \in [0,1], \end{equation} where $\varrho$ is as in \eqref{def_var_rho}. Note that $\psi$ is the unique solution on $[0,1]$ of the Cauchy problem \[\begin{cases} \psi''({\rm d} r) = 2 \, \psi_{r} \, m({\rm d} r) \\ \psi_0=0, \quad \psi'_0= 1. \end{cases} \] Moreover, we denote by $\hat{\psi}$ the function on $[0,1]$ given by \begin{equation} \label{psi_hat} \hat{\psi}_r := \phi_1 \phi_r (\varrho_{1} - \varrho_r) = \psi_1 \phi_r - \psi_r \phi_1,\quad r \in [0,1]. \end{equation} Note that $\hat{\psi}$ satisfies the following problem on $[0,1]$ \[\begin{cases} \hat{\psi}''({\rm d} r) = 2 \, \hat{\psi}_{r} \, m({\rm d} r) \\ \hat{\psi}_1=0, \quad \hat{\psi}'_1= -1. \end{cases} \] Note that the functions $\phi$, $\psi$ and $\hat{\psi}$ take positive values on $]0,1[$. 
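\begin{rk} As a simple illustration of these objects (this example is not needed in the sequel), take $m({\rm d}r) = \frac{\lambda^{2}}{2} \, \mathbf{1}_{[0,1]}(r) \, \mathrm{d} r$ with $\lambda>0$. Then the solution of \eqref{phi} is $\phi_r = \frac{\cosh(\lambda(1-r))}{\cosh \lambda}$ for $r \in [0,1]$, with $\phi$ constant equal to $1/\cosh\lambda$ on $[1,\infty)$, and \eqref{psi} and \eqref{psi_hat} yield, for $r \in [0,1]$, \[ \psi_r = \frac{\sinh(\lambda r)}{\lambda}, \qquad \hat{\psi}_r = \frac{\sinh(\lambda (1-r))}{\lambda}, \qquad \psi_1 = \frac{\sinh \lambda}{\lambda}. \] Plugging these expressions into Lemma \ref{lap_cond_bridge} below yields completely explicit formulae for the quantities $\Sigma^\delta_r\big(\exp(-\frac{\lambda^2}{2}\int_0^1 X_t^2 \, \mathrm{d} t) \,\big|\, a\big)$. \end{rk}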
\begin{lm}\label{lap_cond_bridge} For all $r \in (0,1)$, $\delta>0$ and $a \geq 0$, the following holds: \begin{equation}\label{bridge2} \int \exp (- \langle m, X^{2} \rangle) \ \Sigma^\delta_r({\rm d}X \,|\, a) = \frac1{2^{\frac\delta2-1}\,\Gamma(\frac{\delta}{2})} \, \exp \left(-\frac{a^{2}}{2} C_r \right) D_r^{\delta/2}, \end{equation} where \[ C_r = \frac{\psi_1}{\psi_r \hat{\psi}_r}, \qquad D_r = \frac{1}{\psi_r \hat{\psi}_r}. \] \end{lm} \begin{proof} First note that by \eqref{relation_denisties_bes_besq} and \eqref{Sigma}, we have \begin{equation} \label{intermediate_expr} \begin{split} \int \exp (- \langle m, X^{2} \rangle) \ \Sigma^\delta_r({\rm d}X \,|\, a) = 2 \, \frac{q^{\delta}_{r}(a^{2})}{a^{\delta-2}} \, Q^{\delta}_{0,0} [\exp (- \langle m, X \rangle ) \, | \, X_{r} = a^{2}]. \end{split} \end{equation} To obtain the claim, it therefore suffices to compute \[ Q^{\delta}_{0,0} [\exp (- \langle m, X \rangle ) \, | \, X_{r} = a^{2}]. \] Since $Q^{\delta}_{0,0} := Q^\delta_0 [ \, \cdot \, | X_1 = 0]$, one can rewrite the above expression as \[Q^{\delta}_0 [\exp (- \langle m, X \rangle ) \, | \, X_{r} = a^{2}, X_1 = 0].\] Therefore, \eqref{bridge2} follows from the computation of the Laplace transform of the conditional law $ Q^{\delta}_0$ given the value of the pair $(X_r,X_1)$. To this end, consider two Borel functions $f,g: \mathbb{R}_{+} \to \mathbb{R}_{+}$. We have \begin{align*} &\int_{0}^{\infty} \int_{0}^{\infty} Q^{\delta}_{0} [\exp (- \langle m, X \rangle) \, | \, X_{r} = x, X_{1}=y] \, q^{\delta}_{r}(0,x) q^{\delta}_{1-r}(x,y) f(x) g(y) \, \mathrm{d} x \, \mathrm{d} y = \\ &= Q^{\delta}_{0} \left[\exp (- \langle m, X \rangle) f(X_{r}) g(X_{1}) \right] = \phi_1^{\frac\delta2} Q^{\delta}_{0} \left[ f \left( \phi_r^{2} X_{\varrho_r} \right) g \left( \phi_1^{2} X_{\varrho_1} \right) \right] = \\ &= \phi_1^{\delta/2-2} \phi_r^{-2} \int_{0}^{\infty} \int_{0}^{\infty} q^{\delta}_{\varrho_r} \left(0,\frac{x}{\phi_r^{2}}\right) q^{\delta}_{\varrho_1 - \varrho_r}\left(\frac{x}{\phi_r^{2}},\frac{y}{\phi_1^{2}}\right)f(x)g(y) \, \mathrm{d} x \, \mathrm{d} y . \end{align*} Here, we used Lemma \ref{measure_change} to obtain the second equality. Since the functions $f$ and $g$ are arbitrary, we deduce that: \[ \begin{split} & Q^{\delta}_{0} [\exp (- \langle m, X \rangle) \, | \, X_{r} = x, X_{1}=y] \, = \, \phi_1^{\delta/2-2} \phi_r^{-2} \frac{q^{\delta}_{\varrho_r} \left(0,\frac{x}{\phi_r^{2}}\right) q^{\delta}_{\varrho_1 - \varrho_r}\left(\frac{x}{\phi_r^{2}},\frac{y}{\phi_1^{2}}\right)}{q^{\delta}_{r}(0,x) \,q^{\delta}_{1-r}(x,y)} \end{split}\] ${\rm d} x \, \mathrm{d} y$ a.e. on ${\mathbb{R}_{+}^{*}}\times {\mathbb{R}_{+}^{*}}$. Since the family of measures $\left( Q^{\delta}_{x,y} \right)_{x,y \geq 0}$ is continuous in $(x,y) \in \mathbb{R}_{+}^{2}$ for the weak topology on probability measures (see \cite{revuz2013continuous}, Section XI.3), we deduce that, for all $x \geq 0$ \[\begin{split} Q^{\delta}_{0,0} [\exp (- \langle m, X \rangle) \, | \, X_{r} = x] &= \underset{\substack{y \to 0 \\y>0}}{\lim} \, \phi_1^{\delta/2-2} \phi_r^{-2} \frac{q^{\delta}_{\varrho_r} \left(0,\frac{x}{\phi_r^{2}}\right) q^{\delta}_{\varrho_1 - \varrho_r}\left(\frac{x}{\phi_r^{2}},\frac{y}{\phi_1^{2}}\right)}{q^{\delta}_{r}(0,x) \,q^{\delta}_{1-r}(x,y)}.
\end{split}\] But, by \eqref{density_besq_x_pos} and \eqref{density_besq_x_zero}, we have \[\frac{q^{\delta}_{\varrho_r} \left(0,\frac{x}{\phi_r^{2}}\right) }{q^{\delta}_{r}(0,x)} = \left(\frac{r}{\varrho_{r}}\right)^{\frac\delta2} \phi_{r}^{2-\delta} \exp \left(-\frac{x}{2}\left(\frac{1}{\phi_{r}^{2} \varrho_{r}}- \frac{1}{r} \right) \right) \] and \[ \underset{\substack{y \to 0 \\y>0}}{\lim} \, \frac{q^{\delta}_{\varrho_1 - \varrho_r}\left(\frac{x}{\phi_r^{2}},\frac{y}{\phi_1^{2}}\right)}{q^{\delta}_{1-r}(x,y)} = \left(\frac{1-r}{\varrho_{1} -\varrho_{r}}\right)^{\frac\delta2} \phi_{1}^{2-\delta} \exp \left(-\frac{x}{2}\left(\frac{1}{\phi_{r}^{2} (\varrho_{1}-\varrho_{r})}- \frac{1}{1-r} \right) \right). \] We thus obtain \begin{equation} \label{equality_bridge} \begin{split} &Q^{\delta}_{0,0} [\exp (- \langle m, X \rangle) \, | \, X_{r} = x] =\\ &= \phi_1^{-\delta/2} \phi_r^{-\delta} \left(\frac{r(1-r)}{\varrho_{r}(\varrho_{1} -\varrho_{r})}\right)^{\frac\delta2} \exp \left(-\frac{x}{2}\left(\frac{\varrho_{1}}{\phi_{r}^{2} \varrho_{r}(\varrho_{1}-\varrho_{r})}- \frac{1}{r(1-r)} \right) \right) = \\ &= \left(\frac{r(1-r)}{\psi_{r} \hat{\psi}_{r}}\right)^{\frac\delta2} \exp \left(-\frac{x}{2}\left(\frac{\psi_{1}}{\psi_{r} \hat{\psi}_{r}}- \frac{1}{r(1-r)} \right) \right), \end{split} \end{equation} where the second equality follows from the relations \eqref{psi}-\eqref{psi_hat} defining $\psi$ and $\hat{\psi}$. Applying this equality to $x=a^{2}$, and replacing in \eqref{intermediate_expr}, we obtain the claim. \end{proof} \begin{rk} Along the proof of the above Proposition, for $\delta > 0$, $a \geq 0$, $r \in (0,1)$ and $m$ as above, we also obtained from equality \eqref{equality_bridge} the following, useful expression \begin{equation} \label{cond_bridge} \begin{split} & Q^{\delta}_{0,0} \left[\exp (- \langle m, X \rangle) \, | \, X_{r} = a^2\right] = E^{\delta} [\exp (- \langle m, X^{2} \rangle) \, | \, X_{r} = a] \\ & = \exp \left(-\frac{a^{2}}{2} \left( \frac{\psi_1}{\psi_r \hat{\psi}_r} - \frac{1}{r(1-r)} \right)\right) \left(\frac{r(1-r)}{\psi_r \hat{\psi}_r}\right)^{\delta/2}. \end{split} \end{equation} \end{rk} \section{Integration by parts formulae} \label{sect_ibpf_exp_func} Here and in the sequel, we denote by $\mathcal{S}$ the linear span of all functionals on $C([0,1])$ of the form \begin{equation} \label{exp_functional} C([0,1])\ni X \mapsto \exp \left( - \langle m, X^{2} \rangle \right)\in \mathbb{R} \end{equation} where $m$ is a finite Borel measure on $[0,1]$. The elements of $\mathcal{S}$ are the functionals for which we will derive our IbPFs wrt the laws of Bessel bridges. \subsection{The statement} After recalling the definition \eqref{kappadelta} of $\kappa(\delta) = \frac{(\delta-3)(\delta-1)}{4}$, for $\delta\in\mathbb R$, we can now state one of the main results of this article. \begin{thm} \label{statement_ibpf} Let $\delta \in (0,\infty) \setminus \{1,3\}$, and set $k:=\lfloor \frac{3-\delta}{2} \rfloor \leq 1$. Then, for all $\Phi \in \mathcal{S}$ and $h \in C^2_c(0,1)$ \begin{equation} \label{exp_fst_part_ibpf_a_b} \begin{split} & E^{\delta} (\partial_{h} \Phi (X) ) + E^{\delta} (\langle h '' , X \rangle \, \Phi(X) ) = \\ &=-\kappa(\delta)\int_{0}^{1} h_{r} \int_0^\infty a^{\delta-4} \Big[ \mathcal{T}^{\,2k}_{a} \, \Sigma^\delta_r(\Phi (X) \,|\, \cdot\,) \Big] \, \mathrm{d} a \, \mathrm{d} r, \end{split} \end{equation} where $\mathcal{T}^{\,n}_{x}$ is the Taylor remainder defined in \eqref{eq:taylor}. 
On the other hand, when $\delta \in \{1,3\}$, the following formulae hold for all $\Phi \in \mathcal{S}$ and $h \in C^2_c(0,1)$ \begin{equation} \label{exp_fst_part_ibpf_a_b_3} E^{3}(\partial_{h} \Phi (X) ) + E^{3}(\langle h '' , X \rangle \, \Phi(X) ) = -\frac{1}{2} \int_{0}^{1} h_{r} \, \Sigma^3_r(\Phi (X) \,|\, 0) \, \mathrm{d} r, \end{equation} \begin{equation} \label{exp_fst_part_ibpf_a_b_1} \begin{split} E^{1} (\partial_{h} \Phi (X) ) + E^{1} (\langle h '' , X \rangle \, \Phi(X) ) = \frac{1}{4} \int_{0}^{1} h_{r} \, \frac{{\rm d}^{2}}{{\rm d} a^{2}} \, \Sigma^1_r (\Phi(X) \, | \, a) \, \biggr\rvert_{a=0} \, \mathrm{d} r. \end{split} \end{equation} \end{thm} \begin{rk} Note that the last integral in \eqref{exp_fst_part_ibpf_a_b} is indeed convergent. Indeed, by Lemma \ref{lap_cond_bridge}, $\mathcal{T}^{\,2k}_{a} \, \Sigma^\delta_r(\Phi (X) \,|\, \cdot\,)$ is the Taylor remainder of order $2k$ at $0$ of a smooth, even function, see \eqref{eq:taylor} above. Hence, near $0$, the integrand is of order $O(a^{\delta+ 2k - 2})$. Since $\delta + 2k -2 > -1$, the integral is convergent at $0$. On the other hand, near $\infty$, the integrand is of order $O(a^{\delta + 2k -4 })$. Since $\delta + 2k -4 < -1$, integrability also holds at $+ \infty$. \end{rk} \begin{rk} For all $\delta \in (1,3)$ the right-hand side in the IbPF \eqref{exp_fst_part_ibpf_a_b} takes the form \[ -\kappa(\delta)\int_{0}^{1} h_{r} \int_0^\infty a^{\delta-4} \Big[ \Sigma^\delta_r(\Phi (X) \,|\, a) - \Sigma^\delta_r(\Phi (X) \,|\, 0) \Big] \, \mathrm{d} a \, \mathrm{d} r. \] Note that, while there is a transition in the structure of the IbPF at the values $\delta=3$ and $\delta=1$, with the order of the Taylor expansion changing at these critical values, no such transition occurs at $\delta=2$. This might seem surprising given the transition that the Bessel bridges undergo at $\delta=2$, which is the smallest value of $\delta$ satisfying \[ P^{\delta} \left[ \exists r \in \,]0,1[ \ : \, X_{r} = 0 \right] = 0. \] This lack of transition at $\delta=2$ is related to the fact that, as a consequence of Lemma \ref{lap_cond_bridge}, we have for all $\Phi\in\mathcal{S}$: \[ \frac{\rm d}{{\rm d}a} \, \Sigma^\delta_r(\Phi (X) \,|\, a) \biggr\rvert_{a=0} = 0. \] \end{rk} \begin{rk} In the IbPF \eqref{exp_fst_part_ibpf_a_b}, the last term may equivalently be written as \begin{equation} \label{last_term} -\kappa(\delta)\int_{0}^{1} h_{r} \int_0^\infty a^{-3} \Big[ \mathcal{T}^{\,2k}_{a} \, \Sigma^\delta_r(\Phi (X) \,|\, \cdot\,) \Big] m_{\delta}({\rm d}a) \, \mathrm{d} r \end{equation} where $m_{\delta}$ is the measure on $\mathbb{R}_{+}$ defined by \[ m_{\delta}({\rm d} a) = \mathbf{1}_{a>0} \, a^{\delta-1} \, \mathrm{d} a. \] Note that $m_{\delta}$ is a reversible measure for the $\delta$-dimensional Bessel process. Actually, if $(X_{t})_{t \geq 0}$ is a $\delta$-dimensional Bessel process, we can construct a bicontinuous family of \textit{diffusion local times} $\left(\ell^{a}_{t}\right)_{a, t \geq 0}$, satisfying the occupation times formula \[ \int_{0}^{t} f \left( X_{s} \right) {\rm d} s = \int_{0}^{+\infty} f(a) \, \ell^{a}_{t} \, m_{\delta}({\rm d}a), \] for all $f: \mathbb{R}_{+} \to \mathbb{R}_{+}$ bounded and Borel. We expect that such a property also holds for $(u(t,x))_{t \geq 0}$, for all $x \in (0,1)$, where $u$ is the hypothetical solution of the dynamics corresponding to $P^{\delta}$.
In that case the term \eqref{last_term} should correspond, in the dynamics, to a drift in $u^{-3}$ integrated against renormalised local times. We shall develop this idea more in detail in Section \ref{sect_conj_dynamics} below. \end{rk} \subsection{Proof of Theorem \ref{statement_ibpf}} We first state a differential relation satisfied by the product of the functions $\psi$ and $\hat{\psi}$ associated as above with a finite Borel measure $m$ on $[0,1]$. This relation is the skeleton of all the IbPFs for $P^{\delta}$, $\delta >0$ : the latter will all be deduced from the former with a simple multiplication by a constant (depending on the parameter $\delta$). \begin{lm}\label{thm} Let $m$ be a finite Borel measure on $[0,1]$, and consider the functions $\psi$ and $\hat{\psi}$ as in \eqref{psi} and \eqref{psi_hat}. Then, for all $h \in C^{2}_{c}(0,1)$ and $\delta > 0$, the following equality holds \begin{equation}\label{laplace} \int_{0}^{1} \sqrt{\psi_r \hat{\psi}_r} \left( h''_r \, \mathrm{d} r - 2 h_r \, m({\rm d} r) \right) = - \frac{1}{4} \psi_1^{2} \int_{0}^{1} h_r (\psi_r \hat{\psi}_r)^{-\frac{3}{2}} \, \mathrm{d} r. \end{equation} \end{lm} \begin{proof} Performing an integration by parts, we can rewrite the left-hand side as \[ \int_{0}^{1} h_r \left( \frac{\mathrm{d}^{2}}{\mathrm{d} r^{2}} - 2 \, m({\rm d} r) \right) \left( \psi_r \hat{\psi}_r \right)^{\frac{1}{2}}. \] Note that here we are integrating wrt the signed measure \[ \left( \frac{\mathrm{d}^{2}}{\mathrm{d} r^{2}} - 2 \, m({\rm d} r) \right) \left( \psi_r \hat{\psi}_r \right)^{\frac{1}{2}} = \frac{\mathrm{d}^{2}}{\mathrm{d} r^{2}} \left( \psi_r \hat{\psi}_r \right)^{\frac{1}{2}} - 2 \left( \psi_r \hat{\psi}_r \right)^{\frac{1}{2}} \, m({\rm d} r).\] Now, we have \begin{align*} \frac{\mathrm{d}^{2}}{\mathrm{d} r^{2}} \left( \psi \hat{\psi} \right)^{\frac{1}{2}} = \frac{1}{2} \frac{ \psi''\hat{\psi} + 2 \psi'\hat{\psi}' + \psi \hat{\psi}''}{(\psi\hat{\psi})^{\frac12}} - \frac{1}{4} \frac{(\psi'\hat{\psi} + \psi \hat{\psi}')^{2}}{(\psi\hat{\psi})^{3/2}}. \end{align*} Recalling that $\psi''= 2 \psi \, m$ and $\hat{\psi}''=2 \hat{\psi} \, m$, we obtain \begin{align*} \left( \frac{\mathrm{d}^{2}}{\mathrm{d} r^{2}} - 2 \, m ({\rm d} r) \right) \left( \psi \hat{\psi} \right)^{\frac{1}{2}} = & \frac{\psi'\hat{\psi}'\psi \hat{\psi} - \frac{1}{4} (\psi'\hat{\psi} + \psi \hat{\psi}')^{2}}{(\psi\hat{\psi})^{3/2}} \\ = & -\frac{1}{4} \frac{(\psi'\hat{\psi} - \psi \hat{\psi}')^{2}}{(\psi\hat{\psi})^{3/2}}. \end{align*} Using the expressions \eqref{psi} and \eqref{psi_hat} for $\psi$ and $\hat{\psi}$, we easily see that \[ \psi'_r\hat{\psi}_r -\psi \hat{\psi}'_r = \psi_1, \qquad r \in (0,1). \] Hence, we obtain the following equality of signed measures: \[ \left( \frac{\mathrm{d}^{2}}{\mathrm{d} r^{2}} - 2 \, m \right) \left( \psi \hat{\psi} \right)^{\frac{1}{2}} = - \frac{1}{4} \frac{\psi_1^{2}}{(\psi_r\hat{\psi}_r)^{3/2}} \, \mathrm{d} r. \] Consequently, the left-hand side in \eqref{laplace} is equal to \[ - \frac{1}{4} \psi_1^{2} \int_{0}^{1} \, \mathrm{d} r \ h_r \left(\psi_r\hat{\psi}_r\right)^{-3/2}. \] The claim follows. \end{proof} As a consequence, we obtain the following preliminary result. \begin{lm} Let $m$ be a finite measure on $[0,1]$, and let $\Phi:C([0,1]) \to \mathbb{R}$ be the functional thereto associated as in \eqref{exp_functional}. 
Then, for all $\delta>0$ and $h \in C^{2}_{c}(0,1)$, \begin{equation}\label{exp_fst_part_ibpf0} \begin{split} & E^{\delta} (\partial_{h} \Phi (X) ) + E^{\delta} (\langle h '' , X \rangle \, \Phi(X) ) = \\ & =-\frac{\Gamma(\frac{\delta+1}{2})}{2^{\frac32}\,\Gamma(\frac{\delta}{2})} \, \psi_1^{-\frac{\delta-3}{2}}\int_{0}^{1} h_r \left(\psi_r\hat{\psi}_r\right)^{-\frac32} \, \mathrm{d} r, \end{split} \end{equation} where $\psi$ and $\hat{\psi}$ are associated with $m$ as in \eqref{psi} and \eqref{psi_hat}. \end{lm} \begin{proof} By the expression \eqref{exp_functional} for $\Phi$, we have \[ \partial_{h} \Phi (X) = - 2 \langle X h , m \rangle \, \Phi(X).\] Therefore \begin{align*} &E^{\delta} (\partial_{h} \Phi (X) ) + E^{\delta} (\langle h '' , X \rangle \, \Phi(X) ) =Q^{\delta}_{0,0} \left[ \left( \langle h '',\sqrt{X} \rangle - 2 \langle h \sqrt{X}, m\rangle \right) \, e^{- \langle m, X \rangle} \right] = \\ &=\int_{0}^{1} ( h''_{r} \, \mathrm{d} r - 2 h_{r} \, m({\rm d} r) ) \int_{0}^{+ \infty} \! \Gamma\left(\frac{\delta}2,\frac1{2r(1-r)}\right)({\rm d}a) \sqrt{a} \, Q^{\delta}_{0,0} \left[\left.e^{- \langle m, X \rangle} \, \right| \, X_{r} = a \right]. \end{align*} By \eqref{cond_bridge} we obtain: \begin{align*} &E^{\delta} (\partial_{h} \Phi (X) ) + E^{\delta} (\langle h '', X \rangle \, \Phi(X))= \\ &= \int_{0}^{1} \, \left(h''_{r} \, \mathrm{d} r - 2 h_{r} \, m({\rm d} r) \right) \frac{\Gamma(\frac{\delta+1}2)}{\Gamma(\frac{\delta}2)}\left(\frac{C_r}2\,\psi_1^\delta\right)^{-\frac12} \int_{0}^{+ \infty} \, \Gamma\left(\frac{\delta+1}2,\frac{C_r}2\right)({\rm d}a) \\ &= \sqrt{2}\,\frac{\Gamma(\frac{\delta+1}2)}{\Gamma(\frac{\delta}2)} \psi_1^{-\frac{\delta+1}2}\int_{0}^{1} \left(h''_{r} \, \mathrm{d} r - 2 h_{r} \, m({\rm d} r) \right) \sqrt{\psi_r\hat\psi_r} . \end{align*} Finally, by \eqref{laplace}, the latter expression is equal to \[ -\frac{\Gamma(\frac{\delta+1}{2})}{2^{\frac32}\,\Gamma(\frac{\delta}{2})} \, \psi_1^{-\frac{\delta-3}{2}}\int_{0}^{1} h_r \left(\psi_r\hat{\psi}_r\right)^{-\frac32} \, \mathrm{d} r \] and the proof is complete. \end{proof} Apart from the above lemma, the proof of the IbPF for $P^{\delta}$, $\delta > 0$, will require integral expressions for the Gamma function at negative non-integer arguments. For all $x\in \mathbb{R}$ we set $\lfloor x \rfloor:=\sup\{k\in \mathbb{Z}: k \leq x\}$. We also use the notation $\mathbb Z^-:=\{n\in\mathbb Z: n\leq 0\}$. \begin{lm} \label{neg_gamma} For all $x\in \mathbb{R}\setminus\mathbb Z^-$ \[ \Gamma (x) = \int_{0}^{\infty} t^{x-1} \mathcal{T}^{\,\lfloor - x \rfloor}_{t} (e^{- \, \cdot \,}) \, \mathrm{d} t. \] \end{lm} \begin{proof} By Remark \ref{laplace_mu_neg} we have \[ \int_{0}^{\infty} t^{x-1} \mathcal{T}^{\,\lfloor - x \rfloor}_{t} (e^{- \, \cdot \,}) \, \mathrm{d} t =\Gamma(x) \,\langle \mu_{x} , e^{-\cdot } \rangle=\Gamma(x) \,1^{x}=\Gamma(x), \] and the claim follows. \end{proof} From Lemma \ref{neg_gamma} we obtain for all $C>0$, $x\in \mathbb{R}\setminus\mathbb Z^-$ \begin{equation}\label{forallx} \Gamma(x) \, C^{-x} = 2^{1-x}\int_0^{+\infty} a^{2x-1} \left( e^{-C\frac{a^2}2} - \sum_{0\leq j\leq \lfloor -x \rfloor} \frac{(-C)^{j} a^{2j}}{2^jj !} \right) \, \mathrm{d} a \end{equation} by the simple change of variable $t=Ca^2/2$. Then \eqref{forallx} can be rewritten as follows \begin{equation}\label{forallx'} \Gamma(x) \, C^{-x} = 2^{1-x}\int_0^{+\infty} a^{2x-1} \, \mathcal{T}^{\, 2\lfloor - x \rfloor}_{a} \left( e^{-C\frac{(\cdot)^2}2} \right) \, \mathrm{d} a, \quad x\in \mathbb{R}\setminus\mathbb Z^-.
\end{equation} We can finally prove the main statement of this section. \begin{proof}[Proof of Theorem \ref{statement_ibpf}] Let us first consider $\delta>0$ with $\delta\notin\{1,3\}$. Then by \eqref{exp_fst_part_ibpf0} \[ \begin{split} & E^{\delta} (\partial_{h} \Phi (X) ) + E^{\delta} (\langle h '' , X \rangle \, \Phi(X) ) = \\ &= - \frac{\Gamma(\frac{\delta+1}{2})}{2^{3/2} \,\Gamma(\frac{\delta}{2})} \, \int_{0}^{1} h_{r}\, \left(\frac{\psi_1}{\psi_r\hat\psi_r}\right)^{\frac{3-\delta}{2}} \, \left(\psi_r\hat{\psi}_r\right)^{-\frac{\delta}{2}}\, \mathrm{d} r \\ &= - \frac{\Gamma(\frac{\delta+1}{2})}{2^{3/2} \,\Gamma(\frac{\delta}{2})} \, \int_{0}^{1} h_{r}\, C_{r}^{\frac{3-\delta}{2}} \, D_{r}^{\delta/2}\, \mathrm{d} r \\ & = - \frac{\Gamma(\frac{\delta+1}{2})}{2^{3/2} \Gamma(\frac{\delta}{2})} \, \frac{ 2^{\frac{5-\delta}{2}}}{\Gamma\left(\frac{\delta-3}{2}\right)} \int_{0}^{1} h_{r} \,D_{r}^{\delta/2} \int_0^\infty a^{\delta-4} \, \mathcal{T}^{\,2k}_{a} e^{-\frac{C_r}{2} (\cdot)^{2}}\, \mathrm{d} a \, \mathrm{d} r, \end{split} \] where we used \eqref{forallx'} with $C=C_{r}$ and $x = \frac{\delta-3}{2}$ to obtain the last line. Recalling the expression \eqref{bridge2} for $\Sigma^\delta_r(\Phi(X)\,|\,a)$, we thus obtain \[\begin{split} & E^{\delta} (\partial_{h} \Phi (X) ) + E^{\delta} (\langle h '' , X \rangle \, \Phi(X) ) = \\& = - \frac{\Gamma(\frac{\delta+1}{2})}{\Gamma(\frac{\delta-3}{2})} \int_{0}^{1} h_{r} \int_0^\infty a^{\delta-4} \, \mathcal{T}^{\,2k}_{a} \, \Sigma^\delta_r(\Phi(X)\,|\,\cdot\,) \,\, \mathrm{d} a \, \mathrm{d} r. \end{split} \] Now, since $\delta\notin\{1,3\}$, \[ \textstyle{\Gamma(\frac{\delta+1}{2}) = \frac{\delta-1}{2}\, \Gamma(\frac{\delta-1}{2}) = \frac{\delta-1}{2}\, \frac{\delta-3}{2}\, \Gamma(\frac{\delta-3}{2}) = \kappa(\delta)\, \Gamma(\frac{\delta-3}{2}). } \] Therefore $\frac{\Gamma(\frac{\delta+1}{2})}{\Gamma(\frac{\delta-3}{2})}=\kappa(\delta)$ and we obtain the claim. It remains to treat the critical cases $\delta \in \{1,3\}$. By linearity, we may assume that $\Phi$ is of the form \eqref{exp_functional}. For $\delta=3$ we have by \eqref{exp_fst_part_ibpf0} \[ \begin{split} & E^{3} (\partial_{h} \Phi (X) ) + E^{3} (\langle h '' , X \rangle \, \Phi(X) ) = -\frac{1}{2^{\frac32}\,\Gamma(\frac3{2})} \int_{0}^{1} h_{r} \left(\psi_r\hat{\psi}_r\right)^{-\frac32}\, \mathrm{d} r. \end{split} \] By \eqref{bridge2} this equals \[ - \frac{1}{2} \int_{0}^{1} \, \mathrm{d} r \, h_r \, \Sigma^3_r(\Phi(X) \,|\, 0) \] and the proof is complete. For $\delta=1$, by \eqref{exp_fst_part_ibpf0}, we have \[E^{1} (\partial_{h} \Phi (X) + \langle h '' , X \rangle \, \Phi(X) ) = -\frac1{2\sqrt{2\pi}} \, \psi_1\int_{0}^{1} h_r \left(\psi_r\hat{\psi}_r\right)^{-\frac32} \, \mathrm{d} r. \] But by \eqref{bridge2} we have, for all $r \in (0,1)$ \[ \frac{{\rm d}^{2}}{{\rm d} a^{2}} \, \Sigma^1_r (\Phi(X) \, | \, a)\, \biggr\rvert_{a=0} = - \frac{C_r \, D_r^{\frac12}}{2^{-\frac12}\,\Gamma(\frac{1}{2})} = -\sqrt{\frac{2}{\pi}}\, \psi_1 \left(\psi_r\hat{\psi}_r \right)^{-\frac{3}{2}}. \] The claimed IbPF follows. \end{proof} \begin{rk}\label{weaker} In \cite{zambotti2005integration} for the reflecting Brownian motion, and then in \cite{grothaus2016integration} for the reflecting Brownian bridge, a different formula was proved in the case $\delta=1$.
In our present notations, for $(\beta_r)_{r\in[0,1]}$ a Brownian bridge and $X:=|\beta|$, the formula reads \begin{equation}\label{|beta|} \mathbb E(\partial_h\Phi(X)) + \mathbb E(\langle h '' , X \rangle \, \Phi(X) )= \lim_{\epsilon\to 0}2 \, \mathbb E\left(\Phi(X)\int_0^1 h_r\left[ \left(\dot{\beta^\epsilon_r}\right)^2 -c^\epsilon_r\right] {\rm d} L^0_r \right), \end{equation} where $\Phi: H\to\mathbb R$ is any Lipschitz function, $h\in C^2_0(0,1)$, $L^0$ is the standard local time of $\beta$ at $0$ and for some even smooth mollifier $\rho_\epsilon$ we set \[ \beta^\epsilon:=\rho_\epsilon*\beta, \qquad c^\epsilon_r:=\frac{\|\rho\|_{L^{2}(0,1)}^{2}}{\epsilon}. \] The reason why \eqref{|beta|} is strictly weaker than \eqref{exp_fst_part_ibpf_a_b_1}, is that the former depends explicitly on $\beta$, while the latter is written only in terms of $X$. This will become crucial when we compute the SPDE satisfied by $u$ for $\delta=1$ in Theorem \ref{fukushima_decomposition} below. \end{rk} As a consequence of Theorem \ref{statement_ibpf}, we retrieve the following known results, see Chapter 6 of \cite{zambotti2017random} and \eqref{ibpf_larger_three}-\eqref{ibpf_three} above. \begin{prop} \label{already_known_ibpf0} Let $\Phi \in \mathcal{S}$ and $h \in C^{2}_{c}(0,1)$. Then, for all $\delta > 3$, the following IbPF holds \[ E^{\delta}(\partial_{h} \Phi (X) ) + E^{\delta}(\langle h '' , X \rangle \, \Phi(X) ) = - \kappa(\delta) \, E^{\delta} (\langle h , X^{-3} \rangle \, \Phi(X) ). \] Moreover, for $\delta = 3$, the following IbPF holds \[ \begin{split} & E^{3}(\partial_{h} \Phi (X) )+ E^{3}(\langle h '' , X \rangle \, \Phi(X) ) = \\ & = - \int_{0}^{1} \, \mathrm{d} r \, \frac{h_r}{\sqrt{2 \pi r^{3} (1-r)^{3}}} \, E^{3} [\Phi(X) \, | \, X_{r} = 0]. \end{split} \] \end{prop} \begin{proof} For $\delta>3$ we have $k:=\lfloor \frac{3-\delta}{2} \rfloor < 0$, and by \eqref{exp_fst_part_ibpf_a_b} \[ \begin{split} & E^{\delta} (\partial_{h} \Phi (X) ) + E^{\delta} (\langle h '' , X \rangle \, \Phi(X) )= \\ & = -\kappa(\delta)\int_{0}^{1} h_{r} \int_0^\infty a^{\delta-4} \, \Sigma^\delta_r(\Phi (X)\,|\, a) \, \mathrm{d} a \, \mathrm{d} r \\ & = -\kappa(\delta)\int_{0}^{1} h_{r} \int_0^\infty a^{-3} \, p^{\delta}_{r}(a)\, E^{\delta}[\Phi(X) \, | \, X_{r} = a] \, \mathrm{d} a \, \mathrm{d} r \\ & = - \kappa(\delta) \, E^{\delta}(\langle h , X^{-3} \rangle \, \Phi(X) ). \end{split} \] For $\delta=3$, it suffices to note that, for all $r \in (0,1)$ \[ \frac{1}{2} \, \lim_{\epsilon \downarrow0} \, \frac{p^{3}_{r}(\epsilon)}{\epsilon^{2}} = \frac{1}{\sqrt{2 \pi r^{3} (1-r)^{3}}}, \] so that \[\frac{1}{2} \, \Sigma^3_r(\Phi (X) \,|\, 0\,) = \frac{1}{\sqrt{2 \pi r^{3} (1-r)^{3}}} E^{3}[\Phi(X) \, | \, X_{r} = 0], \] and the proof is complete thanks to \eqref{exp_fst_part_ibpf_a_b_3}. \end{proof} \section{The dynamics via Dirichlet forms} \label{sect_Dirichlet} In this section we exploit the IbPF obtained above to construct a weak version of the gradient dynamics associated with $P^{1}$, using the theory of Dirichlet forms. The reason for considering the particular value $\delta=1$ is that we can exploit a representation of the Bessel bridge in terms of a Brownian bridge, for which the corresponding gradient dynamics is well-known and corresponds to a linear stochastic heat equation. 
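Let us make this representation explicit, since it underlies the whole construction of this section: if $(\beta_{r})_{r \in [0,1]}$ is a standard Brownian bridge from $0$ to $0$, then $|\beta| := (|\beta_{r}|)_{r \in [0,1]}$ is a $1$-Bessel bridge from $0$ to $0$, that is
\[
P^{1} = \text{law of } |\beta| \quad \text{on } C([0,1]).
\]
In other words, the law of the $1$-Bessel bridge is the image of the law of the Brownian bridge under the map $z \mapsto |z|$; it is through this map that the gradient dynamics associated with $P^{1}$ will be compared with the well-understood dynamics of the linear stochastic heat equation.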
This representation was already used in \cite{vosshallthesis}, which constructed a quasi-regular Dirichlet form associated with $P^1$, a construction which does not follow from the IbPF \eqref{exp_fst_part_ibpf_a_b_1} due to the distributional character of its last term. Using this construction, we exploit the IbPF \eqref{exp_fst_part_ibpf_a_b_1} to prove that the associated Markov process, at equilibrium, satisfies \eqref{formal1}. The treatment of the particular value $\delta=1$ is also motivated by potential applications to scaling limits of dynamical critical pinning models, see e.g. \cite{vosshallthesis} and \cite{deuschel2018scaling}. For the sake of our analysis, instead of working on the Banach space $C([0,1])$, it will be more convenient to work on the Hilbert space $H:=L^{2}(0,1)$ endowed with the $L^2$ inner product \[ \langle f, g \rangle = \int_0^1 f_r\, g_r \, \mathrm{d} r, \quad f,g \in H. \] We shall denote by $\| \cdot \|$ the corresponding norm on $H$. Moreover we denote by $\mu$ the law of $\beta$ on $H$, where $\beta$ is a Brownian bridge from $0$ to $0$ over the interval $[0,1]$. We shall use the shorthand notation $L^{2} (\mu)$ for the space $L^2(H,\mu)$. \subsection{The one-dimensional random string} Consider the Ornstein-Uhlenbeck semigroup $(\mathbf{Q}_{t})_{t \geq 0}$ on $H$ defined, for all $F \in L^{2} (\mu)$ and $z \in H$, by \[ \mathbf{Q}_{t} F (z) := \mathbb{E} \left[ F(v_{t}(z)) \right], \quad t \geq 0, \] where $(v_{t}(z))_{t \geq 0}$ is the solution to the stochastic heat equation on $[0,1]$ with initial condition $z$, and with homogeneous Dirichlet boundary conditions \begin{equation} \label{solution_she} \begin{split} \begin{cases} \frac{\partial v}{\partial t} = \frac{1}{2} \frac{\partial^{2} v}{\partial x^{2}} + \xi \\ v(0,x) = z(x), \qquad & x\in[0,1] \\ v(t,0)= v(t,1)= 0, \qquad & t > 0 \end{cases} \end{split} \end{equation} with $\xi$ a space-time white noise on $\mathbb{R}_{+} \times [0,1]$. Recall that $v$ can be written explicitly in terms of the fundamental solution $(g_{t}(x,x'))_{t \geq 0, \, x,x' \in (0,1)}$ of the heat equation with homogeneous Dirichlet boundary conditions on $[0,1]$, which by definition is the unique solution to \[ \begin{split} \begin{cases} \frac{\partial g}{\partial t} = \frac{1}{2} \frac{\partial^{2} g}{\partial x^{2}} \\ g_{0}(x,x') = \delta_{x}(x') \\ g_{t}(x,0)= g_{t}(x,1)= 0. \end{cases} \end{split}\] Recall further that $g$ can be represented as follows: \[ \forall t >0, \quad \forall x, x' \in [0,1], \quad g_{t}(x,x') = \sum_{k=1}^{\infty} e^{-\frac{\lambda_{k}}{2}t} e_{k}(x) e_{k}(x'), \] where $(e_{k})_{k \geq 1}$ is the complete orthonormal system of $H$ given by \[e_{k}(x) := \sqrt{2} \sin(k \pi x), \quad x \in [0,1], \quad k \geq 1\] and $\lambda_{k} := k^{2} \pi^{2}$, $k \geq 1$. We can then represent $v$ as follows: \begin{equation} \label{expr_solution_she} v(t,x) = z(t,x) + \int_{0}^{t} \int_{0}^{1} g_{t-s}(x,x') \, \xi ({\rm d} s, \, \mathrm{d} x'), \end{equation} where \begin{equation} \label{expr_solution_he} z(t,x) := \int_{0}^{1} g_{t}(x,x') z(x') \, \mathrm{d} x', \end{equation} and where the double integral is a stochastic convolution. In particular, it follows from this formula that $v$ is a Gaussian process. An important role will be played by its covariance function. Namely, for all $t \geq 0$ and $x,x' \in (0,1)$, we set \[ q_{t}(x,x') := \text{Cov}(v(t,x) , v(t,x')) = \int_{0}^{t} g_{2 \tau}(x,x') \, \mathrm{d} \tau.
\] We also set \[ q_{\infty}(x,x') := \int_{0}^{\infty} g_{2 \tau}(x,x') \, \mathrm{d} \tau=\mathbb{E}[\beta_x\beta_{x'}] = x \wedge x' - x x' . \] For all $t \geq 0$, we set moreover \[ q^{t}(x,x') := q_{\infty}(x,x') - q_{t}(x,x') = \int_{t}^{\infty} g_{2 \tau}(x,x') \, \mathrm{d} \tau.\] When $x=x'$, we will use the shorthand notations $q_{t}(x), q_{\infty}(x)$ and $q^{t}(x)$ instead of $q_{t}(x,x), q_{\infty}(x,x)$ and $q^{t}(x,x)$ respectively. Finally, we denote by $(\Lambda,D(\Lambda))$ the Dirichlet form associated with $(\mathbf{Q}_{t})_{t \geq 0}$ in $L^{2} (H,\mu)$, and which is given by \[ \Lambda(F,G) = \frac{1}{2} \int_{H} \langle \nabla F, \nabla G \rangle \, \mathrm{d} \mu, \quad F,G \in D(\Lambda) = W^{1,2}(\mu), \] where we recall that $\mu$ denotes the law of a standard Brownian bridge on $[0,1]$. Here, for all $F \in W^{1,2}(\mu)$, $\nabla F: H \to H$ is the gradient of $F$, see \cite{dpz3}. The corresponding family of resolvents $(\mathbf{R}_\lambda)_{\lambda>0}$ is then given by \[ \mathbf{R}_\lambda F (z) = \int_0^\infty e^{-\lambda t} \mathbf{Q}_{t} F(z) \, \mathrm{d} t, \quad z \in H, \, \lambda >0, \qquad F \in L^{2}(\mu). \] \subsection{Dirichlet form} In this section we introduce the Dirichlet form associated with our equation \eqref{spde=1} and the associated Markov process $(u_t)_{t\geq 0}$. We stress that these objects were already constructed in \cite[Chap. 5]{vosshallthesis}. Let $\mathcal{F} \mathcal{C}^{\infty}_{b}(H)$ denote the space of all functionals $F:H \to \mathbb{R}$ of the form \begin{equation}\label{Fexp} F (z) = \psi(\langle l_{1}, z \rangle, \ldots, \langle l_{m}, z \rangle ), \quad z \in H, \end{equation} with $m \in \mathbb{N}$, $\psi \in C^{\infty}_{b}(\mathbb{R}^{m})$, and $l_{1}, \ldots, l_{m} \in \text{Span} \{ e_{k}, k \geq 1 \}$. Since Bessel bridges are \textit{nonnegative} processes, we are led to also introduce the closed subset $K \subset H$ of nonnegative functions \[K:= \{ z \in H, \, \, z \geq 0 \, \, \text{a.e.} \}. \] Note that $K$ is a Polish space. We also define: \[\mathcal{F} \mathcal{C}^{\infty}_{b}(K) := \left\{ F \big \rvert_{K} \, , \ F \in \mathcal{F} \mathcal{C}^{\infty}_{b}(H) \right\}. \] Moreover, for $f \in \mathcal{F} \mathcal{C}^{\infty}_{b}(K)$ of the form $f=F \big \rvert_{K}$, with $F \in \mathcal{F} \mathcal{C}^{\infty}_{b}(H)$, we define $\nabla f : K \to H$ by \[ \nabla f (z) = \nabla F(z), \quad z \in K, \] where this definition does not depend on the choice of $F\in \mathcal{F} \mathcal{C}^{\infty}_{b}(H)$ such that $f=F\big \rvert_{K}$. We further denote by $\nu$ the law, on $K$, of the $1$-Bessel bridge from $0$ to $0$ on $[0,1]$ (so that $P^1$ is then the restriction of $\nu$ to $C([0,1])$). We shall use the shorthand $L^2(\nu)$ to denote the space $L^2(K,\nu)$. Denoting by $j : H \to K$ the absolute value map \begin{equation} \label{absolute_value_map} j(z) := |z|, \quad z \in H, \end{equation} we remark that the map $L^{2}(\nu)\ni\varphi\mapsto \varphi \circ j\in L^{2}(\mu)$ is an isometry. Let us finally denote by $\mathcal{E}$ the bilinear form defined on $\mathcal{F} \mathcal{C}^{\infty}_{b}(K)$ by \[ \mathcal{E}(f,g) := \frac{1}{2} \int_{K} \langle \nabla f , \nabla g \rangle \, \mathrm{d} \nu, \qquad f,g \in \mathcal{F} \mathcal{C}^{\infty}_{b}(K). \] \begin{prop} \label{closability} The form $(\mathcal{E},\mathcal{F} \mathcal{C}^{\infty}_{b}(K))$ is closable. Its closure $(\mathcal{E},D (\mathcal{E}))$ is a local, quasi-regular Dirichlet form on $L^{2}(\nu)$. 
In addition, for all $f \in D (\mathcal{E})$, $f \circ j \in D(\Lambda)$, and we have \begin{equation} \label{isometry} \forall f,g \in D(\mathcal{E}), \quad \mathcal{E}(f,g) = \Lambda(f \circ j, g \circ j). \end{equation} \end{prop} The proof of Proposition \ref{closability} is postponed to Appendix \ref{Proofs}. Let $(Q_t)_{t \geq 0}$ be the contraction semigroup on $L^{2}(K,\nu)$ associated with the Dirichlet form $(\mathcal{E}, D(\mathcal{E}))$, and let $(R_\lambda)_{\lambda>0} $ be the associated family of resolvents. Let also $\mathcal{B}_{b}(K)$ denote the set of Borel and bounded functions on $K$. As a consequence of Prop. \ref{closability}, by virtue of Thm IV.3.5 and Thm V.1.5 in \cite{ma2012introduction}, we obtain the following result. \begin{cor} There exists a diffusion process $M=\{\Omega, \mathcal{F}, (u_t)_{t \geq 0}, (\mathbb{P}_x)_{x \in K} \}$ properly associated with $(\mathcal{E},D(\mathcal{E}))$, i.e. for all $\varphi \in L^{2}(\nu) \cap \mathcal{B}_b(K)$, and for all $t > 0$, $E_{x}(\varphi(u_{t})), \, x \in K,$ defines an $\mathcal{E}$-quasi-continuous version of $Q_{t} \varphi$. Moreover, the process $M$ admits the following continuity property \[\mathbb{P}_{x}[t \mapsto u_{t} \, \, \text{is continuous on} \, \, \mathbb{R}_{+}] = 1,\quad \text{for} \, \, \mathcal{E}-{\rm q.e.} \, x \in K. \] \end{cor} The rest of this section will be devoted to showing that for $\mathcal{E}$-q.e. $x \in K$, under $\mathbb{P}_{x}$, $(u_t)_{t\geq 0}$ solves \eqref{spde=1}, or rather its weaker form \eqref{formal1}. In the sequel, we set $\Lambda_{1} := \Lambda + (\cdot,\cdot)_{L^{2}(\mu)}$ and $\mathcal{E}_{1} := \mathcal{E} + (\cdot,\cdot)_{L^{2}(\nu)}$, which are inner products for the Hilbert spaces $D(\Lambda)$ and $D(\mathcal{E})$ respectively. With a slight abuse of notation, we shall also write, for any $\Phi \in C^{1}(H)$, \[ \mathcal{E}_{1}(\Phi, \Phi) := \int_{K} \Phi^{2} \, \mathrm{d} \nu + \frac{1}{2} \int_{K} \| \nabla \Phi \|_{H}^{2} \, \mathrm{d} \nu. \] Since the Dirichlet form $(\mathcal{E},D(\mathcal{E}))$ is quasi-regular, by the transfer method stated in VI.2 of \cite{ma2012introduction}, we can apply several results of \cite{fukushima2010dirichlet} in our setting. An important technical point is the density of the space $\mathcal{S}$ introduced in Section \ref{sect_ibpf_exp_func} above in the domain $D(\mathcal{E})$ of this Dirichlet form. To state this precisely, we define $\mathscr{S}$ to be the vector space generated by functionals $F:H \to \mathbb{R}$ of the form \[ F(\zeta) = \exp(- \langle \theta,\zeta^{2} \rangle), \quad \zeta \in H, \] for some $\theta : [0,1] \to \mathbb{R}_{+}$ Borel and bounded. Note that $\mathscr{S}$ may be seen as a subspace of the space $\mathcal{S}$ of Section \ref{sect_ibpf_exp_func} in the following sense: for any $F \in \mathscr{S}$, $F \rvert_{C([0,1])} \in \mathcal{S}$. We also set: \[ \mathscr{S}_{K} := \{ F \big \rvert_{K}, \ F \in \mathscr{S} \}. \] \begin{lm}\label{density} $\mathscr{S}_{K}$ is dense in $D(\mathcal{E})$. \end{lm} The proof of Lemma \ref{density} is postponed to Appendix \ref{Proofs}. \subsection{Convergence of one-potentials} The key tool in showing that the Markov process constructed above defines a solution of \eqref{formal1} is the IbPF \eqref{exp_fst_part_ibpf_a_b_1}. The rule of thumb is that the last term in the IbPF yields the expression of the drift in the SPDE.
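To fix ideas, the following elementary finite-dimensional analogue of this rule of thumb may be kept in mind; it is only a heuristic sketch (with notation local to this paragraph) and is not used in the proofs below. If $\nu_{0}({\rm d}x) = \mathrm{p}(x) \, {\rm d}x$ is a probability measure on $\mathbb{R}^{n}$ with a smooth, positive density $\mathrm{p}$ decaying suitably at infinity, then the integration by parts formula
\[
\int_{\mathbb{R}^{n}} \partial_{h} \Phi \, \mathrm{d} \nu_{0} = - \int_{\mathbb{R}^{n}} \Phi \, \partial_{h} \log \mathrm{p} \, \mathrm{d} \nu_{0}, \qquad h \in \mathbb{R}^{n},
\]
identifies $\frac{1}{2} \nabla \log \mathrm{p}$ as the drift of the Langevin dynamics $\mathrm{d} X_{t} = \frac{1}{2} \nabla \log \mathrm{p}(X_{t}) \, \mathrm{d} t + \mathrm{d} B_{t}$, which is reversible with respect to $\nu_{0}$. The IbPF \eqref{exp_fst_part_ibpf_a_b_1} should play the same role for $P^{1}$.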
Recall however that, for any fixed $h \in C^{2}_{c}(0,1)$, the last term in \eqref{exp_fst_part_ibpf_a_b_1} is given by \[\frac14\int_{0}^{1} \, \mathrm{d} r \, h_r\, \frac{{\rm d}^{2}}{{\rm d} a^{2}} \, \Sigma^1_r (\Phi(X) \, | \, a)\, \biggr\rvert_{a=0}, \quad \Phi \in \mathcal{S}, \] which defines a generalized functional in the sense of Schwartz, rather than a genuine measure, on $C([0,1])$. It is therefore not immediate to translate the IbPF in terms of the corresponding dynamics. The strategy we follow to handle this difficulty relies on Dirichlet form techniques: we approximate the above generalized functional by a sequence of measures admitting a smooth density w.r.t. the law of the reflecting Brownian bridge, and show that the corresponding one-potentials converge in the domain $D(\mathcal{E})$ of the Dirichlet form (see Section 5 of \cite{fukushima2010dirichlet} for the definition of one-potentials). This will imply that the associated additive functionals converge to the functional describing the drift in the SPDE. More precisely, let $\rho$ be a smooth function supported on $[-1,1]$ such that \[\rho \geq 0, \quad \int_{-1}^{1} \rho = 1, \quad \rho(y) = \rho(-y), \quad y \in \mathbb{R}.\] For all $\epsilon>0$, let \begin{equation} \label{def_mollifier} \rho_{\epsilon}(y) := \frac{1}{\epsilon} \, \rho \left( \frac{y}{\epsilon} \right), \quad y \in \mathbb{R}. \end{equation} Then, for all $\Phi \in \mathcal{S}$ and $h \in C^{2}_{c}(0,1)$, the right-hand side of the IbPF \eqref{exp_fst_part_ibpf_a_b_1} can be rewritten as follows \begin{equation} \label{relation_cond_lt} \frac{1}{4} \int_{0}^{1} h_{r} \, \frac{ \, \mathrm{d}^{2}}{\, \mathrm{d} a^{2}} \, \Sigma^1_r (\Phi(X) \, | \, a)\, \biggr\rvert_{a=0} \, \mathrm{d} r = \frac{1}{2} \, \lim_{\epsilon \to 0} \mathbb{E} \left[ \Phi(|\beta|) \int_{0}^{1} h_{r} \, \rho_{\epsilon}''(\beta_{r}) \, \, \mathrm{d} r \right]. \end{equation} Indeed, starting from the right-hand side, by conditioning on the value of $|\beta_{r}|$, and recalling that $|\beta| \overset{(d)}{=} \nu$, the equality follows at once. We will now show that the convergence of measures \eqref{relation_cond_lt} can be enhanced to a convergence in the space $D(\Lambda)$ of the associated one-potentials. We henceforth fix a function $h \in C^{2}_{c}(0,1)$. Then there exists $\delta \in (0,1)$ such that $h$ is supported in $[\delta, 1-\delta]$. For all $\epsilon > 0 $, let $G_{\epsilon}:H\to{\mathbb R}$ be defined by \begin{equation}\label{Geps} G_{\epsilon}(z) := \frac{1}{2} \int_{0}^{1} h_{r} \, \rho_{\epsilon}''(z_{r}) \, \mathrm{d} r, \quad z \in H. \end{equation} For all $t > 0$ and $z \in H$, we have \[ \mathbf{Q}_{t} G_{\epsilon}(z) = \int_{0}^{1} \frac{h_{r}}{2\sqrt{2 \pi q_{t}(r)}} \int_{\mathbb{R}} \rho_{\epsilon}''(a) \exp \left( - \frac{(a-z(t,r))^{2}}{2 q_{t}(r)} \right) \, \mathrm{d} a \, \mathrm{d} r, \] which, after two successive integration by parts, can be also written \[ \int_{0}^{1} \frac{h_{r}}{2\sqrt{2 \pi q_{t}(r)}} \int_{\mathbb{R}} \rho_{\epsilon}(b) \left[ \left( \frac{b-z(t,r)}{q_t(r)} \right)^2 - \frac{1}{q_t(r)} \right] \exp \left( - \frac{(b-z(t,r))^{2}}{2 q_{t}(r)} \right) \, \mathrm{d} b \, \mathrm{d} r, \] where $z(t,\cdot)$ depends on $z$ via \eqref{expr_solution_he}. 
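The two integrations by parts above simply amount to the following elementary Gaussian identity, where $p_{m,q}(a) := (2 \pi q)^{-1/2} \exp \left( -\frac{(a-m)^{2}}{2q} \right)$ denotes the density of $\mathcal{N}(m,q)$ (a notation used only in this remark):
\[
\int_{\mathbb{R}} \rho_{\epsilon}''(a) \, p_{m,q}(a) \, \mathrm{d} a = \int_{\mathbb{R}} \rho_{\epsilon}(a) \, p_{m,q}''(a) \, \mathrm{d} a, \qquad p_{m,q}''(a) = \left[ \left( \frac{a-m}{q} \right)^{2} - \frac{1}{q} \right] p_{m,q}(a),
\]
applied with $m = z(t,r)$ and $q = q_{t}(r)$; the boundary terms vanish since $\rho_{\epsilon}$ has compact support.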
For all $\epsilon >0$, we define the functional $U_{\epsilon}: H \to \mathbb{R}$ by \[ U_{\epsilon}(z) = \int_{0}^{\infty} e^{-t} \, \mathbf{Q}_{t}G_{\epsilon}(z) \, \mathrm{d} t, \quad z \in H.\] Note that $U_{\epsilon}$ is the one-potential of the additive functional \[ \int_{0}^{t} G_{\epsilon}(v(s,\cdot)) \, \mathrm{d} s, \qquad t \geq 0, \] associated with the Markov process $(v(t,\cdot))_{t\geq 0}$ in $H$ defined in \eqref{solution_she} (see Section 5 of \cite{fukushima2010dirichlet} for this terminology). In particular, $U_{\epsilon} \in D(\Lambda)$. For all $t>0$, let $G^{(t)}: H \to \mathbb{R}$ be the functional defined by \[ G^{(t)}(z) := \int_{0}^{1} \frac{h_{r}}{2 \sqrt{2 \pi q_{t}(r)}} \left[ \left( \frac{z(t,r)}{q_t(r)} \right)^2 - \frac{1}{q_t(r)} \right] \exp \left( - \frac{z(t,r)^{2}}{2 q_{t}(r)} \right) \, \mathrm{d} r, \qquad z \in H. \] We claim that the following holds: \begin{prop} \label{conv_one_pot} The functional $U : H \to \mathbb{R}$ defined by \begin{equation} \label{limiting_one_pt} U(z) := \int_0^\infty e^{-t} \, G^{(t)}(z) \, \mathrm{d} t, \qquad z \in H, \end{equation} belongs to $D(\Lambda)$. Moreover, $U_{\epsilon} \underset{\epsilon \to 0}{\longrightarrow} U$ in $D(\Lambda)$. \end{prop} \begin{proof} First note that $U_\epsilon \underset{\epsilon \to 0}{\longrightarrow} U$ in $L^2(\mu)$. Indeed, for all fixed $t>0$ and $z \in H$, we have \[ \begin{split} & |\mathbf{Q}_{t} G_\epsilon (z) - G^{(t)}(z)| \leq \\ & \int_{0}^{1} \frac{|h_{r}|}{2 \sqrt{2 \pi q_{t}(r)^{3}}} \int_{\mathbb{R}} \rho(x) \left| F\left( \frac{\epsilon x - z(t,r)}{\sqrt{q_t(r)}} \right) - F \left( \frac{z(t,r)}{\sqrt{q_t(r)}} \right) \right| \, \mathrm{d} x \, \mathrm{d} r, \end{split} \] where the function $F: \mathbb{R} \to \mathbb{R}$ is defined by \[F(y) = (y^2-1) \exp(-y^2/2), \qquad y \in \mathbb{R}. \] Since $F$ is continuous and bounded, by dominated convergence, we deduce that, for all $r \in (0,1)$ and $x \in \mathbb{R}$ \[ \left \| F\left( \frac{\epsilon x - z(t,r)}{\sqrt{q_t(r)}} \right) - F \left( \frac{z(t,r)}{\sqrt{q_t(r)}} \right) \right \|_{L^2(\mu)} \underset{\epsilon \to 0}{\longrightarrow} 0. \] Therefore, again by dominated convergence, we have \[\begin{split} &\|\mathbf{Q}_{t} G_\epsilon - G^{(t)}\|_{L^2(\mu)} \\ &\leq \int_{0}^{1} \frac{|h_{r}|}{2 \sqrt{2 \pi q_{t}(r)^{3}}} \int_{\mathbb{R}} \rho(x) \left\| F\left( \frac{\epsilon x - z(t,r)}{\sqrt{q_t(r)}} \right) - F \left( \frac{z(t,r)}{\sqrt{q_t(r)}} \right) \right\|_{L^2(\mu)} \, \mathrm{d} x \, \mathrm{d} r\\ & \underset{\epsilon \to 0}{\longrightarrow} 0. \end{split}\] Recall that we have fixed $\delta \in (0,1)$ such that $h$ is supported in $[\delta,1-\delta]$. As shown in the proof of Proposition 1 in \cite{zambotti2004occupation}, there exists $C_\delta>0$ such that, for all $r \in (\delta, 1-\delta)$ and $t>0$ \begin{equation} \label{lower_bound_qt} q_t(r) \geq C_\delta (\sqrt{t} \wedge 1). \end{equation} In the following, we will denote by $C_\delta$ any constant depending only on $\delta$, and whose value may change from line to line. Thanks to \eqref{lower_bound_qt}, we obtain the bound \[\|\mathbf{Q}_{t} G_\epsilon - G^{(t)}\|_{L^2(\mu)} \leq C_\delta \|F \|_{\infty} \frac{\|h\|_\infty}{t^{3/4} \wedge 1}, \] where the right-hand side is integrable w.r.t. the measure $e^{-t} \, \mathrm{d} t$ on $\mathbb{R}_+$.
Hence, by dominated convergence, \[\begin{split} \| U_{\epsilon} - U \|_{L^2(\mu)} & \leq \int_0^\infty e^{-t} \, \|\mathbf{Q}_{t} G_\epsilon - G^{(t)}\|_{L^2(\mu)} \, \mathrm{d} t \\ &\underset{\epsilon \to 0}{\longrightarrow} 0, \end{split} \] whence the claim. Now, we show that $U \in D(\Lambda)$. Note that, for all $t>0$ and $\epsilon >0$, we have \[ \nabla \mathbf{Q}_{t} G_\epsilon (z) = \frac{1}{2} \int_{0}^{1} h_{r} \, g_t(r,\cdot) \, \mathbb{E}[ \rho_{\epsilon}^{(3)}(v(t,r))] \, \mathrm{d} r, \qquad z \in H, \] where $v$ is given by \eqref{expr_solution_she} and where we are taking expectation with respect to the white noise $\xi$. Therefore, denoting by $\| \cdot \|_{L^2}$ the norm in $L^2(H,\mu; H)$, we have \[\begin{split} &\| \nabla \mathbf{Q}_{t} G_\epsilon \|^2_{L^2} = \\ & \frac{1}{4} \int_{[0,1]^2} h_{r} h_s \langle g_t(r,\cdot), g_t(s, \cdot) \rangle \, \int_H \mathbb{E}[ \rho_{\epsilon}^{(3)}(v(t,r))] \, \mathbb{E}[ \rho_{\epsilon}^{(3)}(v(t,s))] \, \, \mathrm{d} \mu(z) \, \, \mathrm{d} r \, \mathrm{d} s \end{split} \] where the integral in $\, \mathrm{d} \mu(z)$ is taken with respect to $v(0,\cdot) = z$. Hence \[ \begin{split} &\| \nabla \mathbf{Q}_{t} G_\epsilon \|^2_{L^2} = \\ & \int_{[0,1]^2} \frac{h_r h_s \langle g_t(r,\cdot), g_t(s, \cdot) \rangle}{4} \, \int_{\mathbb{R}^2} \rho^{(3)}_{\epsilon}(x) \, \rho^{(3)}_{\epsilon}(y) \, \Gamma_{r,s} (x,y) \, \, \mathrm{d} x \, \mathrm{d} y \, \, \mathrm{d} r \, \mathrm{d} s = \\ & \int_{[0,1]^2} \frac{h_r h_s \langle g_t(r,\cdot), g_t(s, \cdot) \rangle}{4} \, \int_{\mathbb{R}^2} \rho_{\epsilon}(x) \, \rho_{\epsilon}(y) \, \frac{\partial^6 \Gamma_{r,s}}{\partial x^3 \partial y^3} (x,y) \, \, \mathrm{d} x \, \mathrm{d} y \, \, \mathrm{d} r \, \mathrm{d} s, \end{split}\] where, for all $(r,s) \in [0,1]^2$ and $(x,y) \in \mathbb{R}^{2}$ \[ \Gamma_{r,s}(x,y) := \mathbb{E} \left[ \frac{1}{2 \pi \sqrt{q_{t}(r) q_{t}(s)}} \exp \left(- \frac{(x-z(t,r))^{2}}{2q_{t}(r)} - \frac{(y-z(t,s))^{2}}{2q_{t}(s)} \right) \right], \] where $z(t,\cdot)$ is given by \eqref{expr_solution_he}, and where we are taking expectation with respect to $z \sim \mu$. Reasoning as in Section 6 of \cite{zambotti2005integration}, we see that $\Gamma_{r,s}$ is the density of the centered Gaussian law on $\mathbb{R}^{2}$ with covariance matrix \[ M = \begin{pmatrix} q_{\infty}(r) & q^{t}(r,s) \\ q^{t}(r,s) & q_{\infty}(s) \end{pmatrix}. \] Similarly, we have \[\begin{split} \| \nabla G^{(t)}\|^2_{L^2} = \int_{0}^{1} \int_{0}^{1} \frac{h_{r} \, h_s \, \langle g_t(r,\cdot), g_t(s, \cdot) \rangle}{4} \, \frac{\partial^6 \Gamma_{r,s}}{\partial x^3 \, \partial y^3} (0,0) \, \, \mathrm{d} r \, \mathrm{d} s. \end{split}\] It therefore remains to obtain a bound on \[ \underset{\mathbb{R}^{2}}{\sup} \ \left| \frac{\partial^{6}\Gamma_{r,s}}{\partial x^{3} \, \partial y^{3}} \right|, \] for all $(r,s) \in [0,1]^{2}$. To do so, we use the following lemma: \begin{lm} \label{bound_derivative_density} Let $f: \mathbb{R}^{2} \to \mathbb{R}$ be the density of a centered Gaussian law on $\mathbb{R}^{2}$ with non-degenerate covariance matrix $M$ satisfying $|M_{i,j}| \leq 1$ for all $i,j \in \{1,2\}$. Then, for all $k,\ell \in \mathbb{N}$ and $(x,y) \in \mathbb{R}^{2}$ \[ \left| \frac{\partial^{k+\ell} f}{\partial x^{k} \, \partial y^{\ell}} \right| \leq A_{k,\ell} \ \det(M)^{-\frac{1+k+\ell}{2}}\] where $A_{k, \ell} >0$ is a constant depending only on $k$ and $\ell$.
\end{lm} \begin{proof} Setting \[ M = \begin{pmatrix} a & b \\ b & c \end{pmatrix}, \] we can express the eigenvalues $\lambda$ and $\mu$ of $M$ as \[ \lambda = \frac{a+c}{2} + \sqrt{\left(\frac{a-c}{2} \right)^2 + b^2} \] and \[ \mu = \frac{a+c}{2} - \sqrt{\left(\frac{a-c}{2} \right)^2 + b^2}. \] Hence, since $a$, $b$ and $c$ are bounded by $1$, we deduce that $\lambda$ and $\mu$ are bounded by some universal constant $C>0$. Let now $P$ be an orthogonal matrix such that $M = P^{T} D P$, where $P^{T}$ denotes the transpose of the matrix $P$, and where \[ D = \begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}.\] Then, for all $u \in \mathbb{R}^2$ \begin{equation} \label{expre_density} f(u) = \frac{1}{2 \pi \sqrt{\det(M)}} \, g(Pu) \end{equation} where \[g(v) := \exp \left(-\frac{1}{2} v^T D^{-1} v \right) = \exp \left(-\frac{x^2}{2 \lambda} - \frac{y^2}{2 \mu} \right)\] for all $v = (x,y) \in \mathbb{R}^2$. Since the function $u \mapsto e^{-\frac{u^2}{2}}$ is bounded on $\mathbb{R}$ with all its derivatives, we deduce that for all $k, \ell \in \mathbb{N}$, there exists $C_{k, \ell}>0$ depending only on $k$ and $\ell$ such that \[\left| \frac{\partial^{k+\ell} g}{\partial x^{k} \partial y^{\ell}} \right| \leq C_{k,\ell} \, \lambda^{-k/2} \mu^{-\ell/2}. \] Therefore, since $\lambda$ and $\mu$ are bounded by $C$, and noting that $\det(M) = \lambda \, \mu$, setting $C'_{k,\ell} := C_{k,\ell} \, C^{\frac{k+\ell}{2}}$ we have \[\left| \frac{\partial^{k+\ell} g}{\partial x^{k} \partial y^{\ell}} \right| \leq C'_{k,\ell} \, \det(M)^{-\frac{k+\ell}{2}}. \] Hence, by the relation \eqref{expre_density} and the chain rule, and since the coefficients of the orthogonal matrix $P$ are all bounded by $1$, we obtain the claim. \end{proof} We now apply the Lemma to the Gaussian density function $\Gamma_{r,s}$ for all $(r,s) \in (0,1)^{2}$. Note that $q_\infty(r) \leq 1$ and $q_{\infty}(s) \leq 1$, so all coefficients of its covariance matrix $M$ are indeed bounded by $1$ as requested. Therefore \[ \underset{\mathbb{R}^{2}}{\sup} \ \left| \frac{\partial^{6}\Gamma_{r,s}}{\partial x^{3} \partial y^{3}} \right| \leq A \, \det(M)^{-7/2}, \] where $A \in (0, \infty)$ is a universal constant. Now \[ \det(M) = q_{\infty}(r) q_{\infty}(s) - q^{t}(r,s)^{2}. \] Since $0 \leq q^{t}(r,s) \leq q_{\infty}(r,s)$, we have \[\begin{split} q_{\infty}(r) q_{\infty}(s) - q^{t}(r,s)^{2} &\geq q_{\infty}(r) q_{\infty}(s) - q_{\infty}(r,s)^{2} \\ &= r(1-r)s(1-s) - (r \wedge s - rs)^{2} \\ &= (s \wedge r) (1- s \vee r) |s-r|, \end{split}\] so we obtain the lower bound \begin{equation} \label{lower_bound_det_trivial} \det(M) \geq \delta^{2} |r-s| \end{equation} for all $r,s \in [\delta,1-\delta]$. On the other hand, reasoning as in Section 6 of \cite{zambotti2005integration}, we can show that there exists $c_{\delta}>0$ depending only on $\delta$ such that, for all $r,s \in [\delta,1-\delta]$ \[q_{\infty}(r) q_{\infty}(s) - q^{t}(r,s)^{2} \geq c_{\delta} \, (t \wedge 1)^{1/2},\] which yields the lower bound \begin{equation} \label{lower_bound_det} \det(M) \geq c_{\delta} \, (t \wedge 1) ^{1/2}. \end{equation} As a consequence, for all $r,s \in [\delta,1-\delta]$, interpolating \eqref{lower_bound_det_trivial} and \eqref{lower_bound_det}, we thus obtain \begin{equation} \label{bound_derivative_gamma} \left| \frac{\partial^{6} \Gamma_{r,s}}{\partial x^{3} \, \partial y^{3}} \right| \leq C_\delta \, (t \wedge 1)^{-\gamma/2} |r-s|^{-(7/2-\gamma)}, \end{equation} for any $\gamma \in (5/2,3)$, where $C_\delta>0$ is a constant depending only on $\delta$.
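Let us spell out, for the reader's convenience, the elementary interpolation used in the last step: for any $\gamma \in [0, 7/2]$ we may write $\det(M)^{-7/2} = \det(M)^{-(7/2-\gamma)} \, \det(M)^{-\gamma}$, and applying \eqref{lower_bound_det_trivial} to the first factor and \eqref{lower_bound_det} to the second yields
\[
\det(M)^{-7/2} \leq \left( \delta^{2} |r-s| \right)^{-(7/2-\gamma)} \left( c_{\delta} \, (t \wedge 1)^{1/2} \right)^{-\gamma},
\]
which, combined with the bound $\sup_{\mathbb{R}^{2}} \left| \frac{\partial^{6}\Gamma_{r,s}}{\partial x^{3} \partial y^{3}} \right| \leq A \, \det(M)^{-7/2}$, gives \eqref{bound_derivative_gamma}. The restriction $\gamma \in (5/2,3)$ guarantees both that $|r-s|^{-(7/2-\gamma)}$ is integrable on $[0,1]^{2}$ and that the resulting power of $t \wedge 1$ remains integrable against $e^{-t} \, \mathrm{d} t$ on $\mathbb{R}_{+}$, as used below.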
Note also that, for some universal constant $C>0$, we have \begin{equation} \label{bound_green} \forall r,s \in [\delta, 1-\delta], \qquad \langle g_t(r,\cdot), g_t(s, \cdot) \rangle = g_{2t}(r,s) \leq C \, t^{-1/2}, \end{equation} see e.g. Exercise 4.16 in \cite{zambotti2017random}. Therefore, \[\| \nabla G^{(t)}\|^2_{L^2} \leq C_\delta \, \|h\|_{\infty}^2(t \wedge 1)^{-(1+\gamma)/2} \, \int_{0}^{1} \int_{0}^{1} |r-s|^{-(7/2-\gamma)} \, \, \mathrm{d} r \, \mathrm{d} s, \] and the last integral is finite due to the choice of $\gamma$. Therefore, we deduce that $\| \nabla G^{(t)}\|_{L^2} \leq C(\delta,h,\gamma) \, (t \wedge 1)^{-(1+\gamma)/4}$, where the constant $C(\delta,h,\gamma)$ does not depend on $t$. Since $(1+\gamma)/4<1$, it follows that \[ \int_0^\infty e^{-t} \, \| \nabla G^{(t)}\|_{L^2} \, \mathrm{d} t < \infty, \] so that $\nabla U \in L^2(H,\mu; H)$. Therefore $U \in D(\Lambda)$ as claimed. It remains to prove that $U_\epsilon \underset{\epsilon \to 0}{\longrightarrow} U$ in $D(\Lambda)$. Note that, for all $t>0$ and $\epsilon > 0$, \[\begin{split} &\|\nabla \mathbf{Q}_{t} G_\epsilon - \nabla G^{(t)}\|^2_{L^2} \\ &= \int_{[0,1]^2} \, \mathrm{d} r \, \mathrm{d} s \, \frac{h_{r} h_s \langle g_t(r,\cdot), g_t(s, \cdot) \rangle}{4} \int_{\mathbb{R}^2} \, \mathrm{d} x \, \mathrm{d} y \, \rho(x) \, \rho(y) \, \Gamma^{(3;3)}_{r,s}(\epsilon x , \epsilon y ), \end{split}\] where for all $(u,v) \in \mathbb{R}^2$, \[\Gamma^{(3;3)}_{r,s}(u,v) := \frac{\partial^{6} \Gamma_{r,s}}{\partial x^{3} \, \partial y^{3}} (u,v) - \frac{\partial^{6} \Gamma_{r,s}}{\partial x^{3} \, \partial y^{3}} (u,0) - \frac{\partial^{6} \Gamma_{r,s}}{\partial x^{3} \, \partial y^{3}} (0,v) + \frac{\partial^{6} \Gamma_{r,s}}{\partial x^{3} \, \partial y^{3}} (0,0).\] By \eqref{bound_derivative_gamma} and \eqref{bound_green} we deduce that \[ \|\nabla \mathbf{Q}_{t} G_\epsilon - \nabla G^{(t)}\|^2_{L^2} \leq C_\delta \, \|h\|_{\infty}^2 \, (t \wedge 1)^{-(1+\gamma)/2} \, \int_{0}^{1} \int_{0}^{1} |r-s|^{-(7/2-\gamma)} \, \, \mathrm{d} r \, \mathrm{d} s,\] so that \[ \|\nabla \mathbf{Q}_{t} G_\epsilon - \nabla G^{(t)}\|_{L^2} \leq C(\delta, h, \gamma) \, (t \wedge 1)^{-(1+\gamma)/4},\] where $C(\delta,h, \gamma)>0$ is independent of $\epsilon $ and $t$. Recall that the right-hand side above is integrable with respect to $ e^{-t} \, \mathrm{d} t$. Moreover, since $\frac{\partial^{6} \Gamma_{r,s}}{\partial x^{3} \, \partial y^{3}}$ is continuous, it follows that for all $t>0$, \[ \|\nabla \mathbf{Q}_{t} G_\epsilon - \nabla G^{(t)}\|_{L^2} \underset{\epsilon \to 0}{\longrightarrow} 0. \] Hence, by dominated convergence, we deduce that \[ \| \nabla U_\epsilon - \nabla U \|_{L^2} \leq \int_0^\infty e^{-t} \, \| \nabla \mathbf{Q}_{t} G_\epsilon - \nabla G^{(t)}\|_{L^2} \, \, \mathrm{d} t \underset{\epsilon \to 0}{\longrightarrow} 0. \] Hence $U_{\epsilon} \underset{\epsilon \to 0}{\longrightarrow} U$ in $D(\Lambda)$, and the Proposition is proved. \end{proof} \subsection{A projection principle} Note that in the previous subsection we worked in the domain $D(\Lambda)$ of the Dirichlet form associated with the Brownian bridge. For our dynamical problem, we shall however need to transfer the above results to the domain $D(\mathcal{E})$ of the Dirichlet form associated with the Bessel bridge. To do so, we invoke the following projection principle, which was first used in \cite{zambotti2004occupation} for the case of a $3$-Bessel bridge (see Lemma 2.2 therein). Recall the notations $\Lambda_{1} := \Lambda + (\cdot,\cdot)_{L^{2}(\mu)}$ and $\mathcal{E}_{1} := \mathcal{E} + (\cdot,\cdot)_{L^{2}(\nu)}$.
\begin{lm} \label{projection} There exists a unique bounded linear operator $\Pi: D(\Lambda) \to D(\mathcal{E})$ such that, for all $F,G \in D(\Lambda)$ and $f \in D(\mathcal{E})$ \[ \Lambda_{1}(F,f \circ j) = \mathcal{E}_{1}(\Pi F,f), \] where $j$ is as in \eqref{absolute_value_map}. Moreover, we have \[\mathcal{E}_{1}(\Pi F, \Pi F) \leq \Lambda_{1}(F,F). \] \end{lm} \begin{proof} We use the same arguments as in the proof of Lemma 2 in \cite{zambotti2004occupation}. Let $\mathcal{D} := \{ \varphi \circ j, \quad \varphi \in D(\mathcal{E}) \}$. By Proposition \ref{closability}, $\mathcal{D}$ is a linear subspace of $D(\Lambda)$ which is isometric to $D(\mathcal{E})$. In particular, it is a closed subspace of the Hilbert space $D(\Lambda)$. Hence, we may consider the orthogonal projection operator $\hat{\Pi}$ onto $\mathcal{D}$. Then, for all $F \in D(\Lambda)$, let $\Pi F $ be the unique element of $D(\mathcal{E})$ such that $\hat{\Pi} F = (\Pi F) \circ j$. It then follows that $\Pi$ possesses the required properties. \end{proof} We obtain the following refinement of the IbPF \eqref{exp_fst_part_ibpf_a_b_1} for $P^1$. \begin{cor} Let $U$ be as in \eqref{limiting_one_pt}. For all $f \in D(\mathcal{E})$ and $h \in C^{2}_{c}(0,1)$, we have \begin{equation} \label{IbPF_Dirichlet} \mathcal{E}\left(\langle h, \cdot \rangle - \frac{1}{2} \Pi U \, ,\, f\right) = - \frac{1}{2} \int_{K} \left(\langle h'', \zeta \rangle - \Pi U(\zeta)\right) f(\zeta) \, \mathrm{d} \nu(\zeta). \end{equation} \end{cor} \begin{proof} By the density of $\mathscr{S}_{K}$ in $D(\mathcal{E})$ proved in Lemma \ref{density}, it is enough to consider $f \in \mathscr{S}_{K}$. By \eqref{relation_cond_lt} \[ \begin{split} &\frac{1}{4} \int_{0}^{1} {\rm d} r\, h_{r} \frac{\, \mathrm{d}^{2}}{\, \mathrm{d} a^{2}} \Sigma^1_r\left(f(X) \,|\, a\right) \, \biggr \rvert_{a=0} = \frac{1}{2} \, \lim_{\epsilon \to 0} \mathbb{E} \left[ f(|\beta|) \int_{0}^{1} h_{r} \, \rho_{\epsilon}''(\beta_{r}) \, \, \mathrm{d} r \right] \\ &= \lim_{\epsilon \to 0} \int (f\circ j) \, G_{\epsilon} \, \mathrm{d} \mu = \lim_{\epsilon \to 0} \, \Lambda_{1} ( f \circ j , \, U_{\epsilon}) = \, \Lambda_{1} ( f \circ j, \, U) = \, \mathcal{E}_{1} (f , \, \Pi U). \end{split} \] Therefore, for all $f \in \mathscr{S}_{K}$, the IbPF \eqref{exp_fst_part_ibpf_a_b_1} can be rewritten \[ 2 \mathcal{E} (\langle h, \cdot \rangle, f ) = - \int_{K} \langle h'', \zeta \rangle \, f(\zeta) \, \mathrm{d} \nu(\zeta) + \, \mathcal{E}_{1} (f , \, \Pi U), \] that is \[ \mathcal{E} \left(\langle h, \cdot \rangle - \frac{1}{2} \Pi U, f \right) = - \frac{1}{2} \int_{K} (\langle h'', \zeta \rangle - \Pi U(\zeta)) \, f(\zeta) \, \mathrm{d} \nu(\zeta). \] The proof is complete. \end{proof} Recall that $M=(\Omega, \mathcal{F}, (u_t)_{t \geq 0}, (\mathbb{P}_x)_{x \in K})$ denotes the Markov process properly associated with the Dirichlet form $(\mathcal{E},D (\mathcal{E}))$ constructed above. Note that, by Theorem 5.2.2 in \cite{fukushima2010dirichlet}, for all $F \in D(\mathcal{E})$, we can write in a unique way \[ F(u_{t}) - F(u_{0}) = M^{[F]}_{t} + N^{[F]}_{t}, \quad t \geq 0, \] $\mathbb{P}_{\nu}$ a.s., where $M^{[F]}$ is a martingale additive functional, and $N^{[F]}$ is an additive functional of zero energy. Using this fact we can thus write $u$ as the weak solution to some SPDE, but with coefficients that are not explicit. However the formula \eqref{IbPF_Dirichlet} above will allow us to identify these coefficients. 
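Before proceeding, let us recall informally the terminology of Section 5.2 in \cite{fukushima2010dirichlet} that we will use: the energy of a continuous additive functional $(A_{t})_{t \geq 0}$ of $M$ is defined as $e(A) := \lim_{t \downarrow 0} \frac{1}{2t} \, \mathbb{E}_{\nu}[A_{t}^{2}]$ whenever the limit exists, a martingale additive functional is an additive functional which, under $\mathbb{P}_{x}$ for q.e. $x \in K$, is a square-integrable martingale with zero mean, and, for $F \in D(\mathcal{E})$, $M^{[F]}$ and $N^{[F]}$ denote respectively the martingale part and the zero-energy part in the Fukushima decomposition of $F(u_{t}) - F(u_{0})$ recalled above.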
We can now finally state the result showing that the Markov process constructed above satisfies the SPDE \eqref{formal1}. \begin{thm}\label{fukushima_decomposition} For all $h \in C^{2}_{c}(0,1)$, we have \[ \langle u_{t}, h \rangle - \langle u_0, h \rangle = M_{t} + N_{t}, \qquad \mathbb{P}_{u_0}-\text{a.s.}, \quad \text{q.e.} \ u_0 \in K. \] Here $(N_{t})_{t \geq 0}$ is a continuous additive functional of zero energy satisfying \[N_{t} - \frac{1}{2} \int_{0}^{t} \langle h'', u_{s} \rangle \, \mathrm{d} s = \underset{\epsilon \to 0}{\lim} \, N^{\epsilon}_{t}, \qquad N^{\epsilon}_{t} := -\frac{1}{4} \int_{0}^{t} \langle \rho''_{\epsilon}(u_{s}), h \rangle \, \mathrm{d} s, \] in $\mathbb{P}_{\nu}$-probability, uniformly in $t$ on finite intervals. Moreover, $(M_{t})_{t \geq 0}$ is a martingale additive functional whose sharp bracket has the Revuz measure $\|h\|_{H}^{2} \, \nu$. Finally we also have \[N_{t} - \frac{1}{2} \int_{0}^{t} \langle h'', u_{s} \rangle \, \mathrm{d} s = \underset{k \to \infty}{\lim} \, N^{\epsilon_k}_{t} \] along a subsequence $\epsilon_k\to 0$ in $\mathbb{P}_{u_0}$-probability, for q.e. $u_0 \in K$. \end{thm} \begin{proof} On the one hand, by \eqref{IbPF_Dirichlet}, we can write \begin{equation} \label{fukushima_decomposition_one} \langle u_{t}, h \rangle - \frac{1}{2} \Pi U (u_{t}) - \left(\langle u_0, h \rangle - \frac{1}{2} \Pi U (u_0) \right) = N^{(1)}_{t} + M^{(1)}_{t}, \end{equation} where $N^{(1)}$ is the continuous additive functional of zero energy given by \[ N^{(1)}_{t} = \frac{1}{2} \int_{0}^{t} \left(\langle h'', u_{s} \rangle - \Pi U(u_{s}) \right) \, \mathrm{d} s, \quad t \geq 0 \] and $M^{(1)}$ is defined by \eqref{fukushima_decomposition_one}. On the other hand, for all $\epsilon >0$, by definition of $U_{\epsilon}$, we have, for $G_\epsilon$ as in \eqref{Geps}, \[ \Lambda_{1} ( U_{\epsilon}, \Phi) = \int_{H} G_{\epsilon} \,\Phi \, \mathrm{d}\mu, \quad \Phi \in D( \Lambda ). \] Hence, remarking that $G_{\epsilon} = g_{\epsilon} \circ j$, where $g_{\epsilon}: K \to \mathbb{R}$ is the functional defined by \[ g_{\epsilon}(z) := \frac12\int_{0}^{1} h_{r} \, \rho_{\epsilon}''(z_{r}) \, \mathrm{d} r = \frac12\langle \rho''_{\epsilon}(z), h \rangle, \] by Lemma \ref{projection}, we obtain for all $f \in D( \mathcal{E} )$ \begin{equation} \label{u_epsilon} \mathcal{E}_{1} ( \Pi U_{\epsilon}, f) = \int_{K} f(z) \, g_{\epsilon}(z) \, \mathrm{d} \nu(z), \quad \text{that is,} \quad \mathcal{E} ( \Pi U_{\epsilon}, f) = - \int_{K} f(z)\left(\Pi U_{\epsilon}(z) - g_{\epsilon}(z)\right) \, \mathrm{d} \nu(z). \end{equation} As a consequence, we have the decomposition \begin{equation} \label{fukushima_decomposition_two} \frac{1}{2} \Pi U_{\epsilon} (u_{t}) - \frac{1}{2} \Pi U_{\epsilon} (u_0) = N^{(2,\epsilon)}_{t} + M^{(2,\epsilon)}_{t}, \end{equation} where $N^{(2, \epsilon)}$ is the continuous additive functional of zero energy given by \[N^{(2, \epsilon)}_{t} = \frac{1}{2} \int_{0}^{t} \left( \Pi U_{\epsilon} (u_{s})- g_{\epsilon}(u_{s})\right) \, \mathrm{d} s, \quad t \geq 0\] and $M^{(2,\epsilon)}$ is defined by \eqref{fukushima_decomposition_two}. Since $U_{\epsilon} \underset{\epsilon \to 0}{\longrightarrow} U$ in $D(\Lambda)$ by Proposition \ref{conv_one_pot}, by the continuity of $\Pi:D(\Lambda) \to D(\mathcal{E})$, we have the convergence $\Pi U_{\epsilon} \underset{\epsilon \to 0}{\longrightarrow} \Pi U$ in $D(\mathcal{E})$.
Therefore, setting \[ M^{(2)}_{t} = M^{[\Pi U]}_{t}, \qquad N^{(2)}_{t} := N^{[\Pi U]}_{t}, \] then, by (5.1.1), (5.2.22) and (5.2.25) in \cite{fukushima2010dirichlet}, we have \[ \Pi U_{\epsilon}(u_{t}) - \Pi U_{\epsilon}(u_{0}) \underset{\epsilon \to0}{\longrightarrow} \Pi U(u_{t}) - \Pi U(u_{0}), \quad M^{(2, \epsilon)}_{t} \underset{\epsilon \to0}{\longrightarrow} M^{(2)}_{t}, \quad N^{(2, \epsilon)}_{t} \underset{\epsilon \to 0}{\longrightarrow} N^{(2)}_{t} \] in $\mathbb{P}_{\nu}$-probability, for the topology of uniform convergence on finite intervals of $t \in \mathbb{R}_{+}$. Letting $\epsilon \to 0$ in \eqref{fukushima_decomposition_two} and adding the resulting identity to \eqref{fukushima_decomposition_one} yields \[ \langle u_{t}, h \rangle - \langle u_0, h \rangle = M_{t} + N_{t}, \] with $M_{t} = M^{(1)}_{t} + M^{(2)}_{t}$ and \[\begin{split} N_{t} &= N^{(1)}_{t} + N^{(2)}_{t} = \frac{1}{2} \int_{0}^{t} \left(\langle h'', u_{s} \rangle - \Pi U(u_{s}) \right) \, \mathrm{d} s + \underset{\epsilon \to 0}{\lim} \, \frac{1}{2} \int_{0}^{t} \left( \Pi U_{\epsilon} (u_{s})- g_{\epsilon}(u_{s})\right) \, \mathrm{d} s \\ &= \frac{1}{2} \int_{0}^{t} \langle h'', u_{s} \rangle \, \mathrm{d} s - \underset{\epsilon \to 0}{\lim} \, \frac{1}{2} \int_{0}^{t} g_{\epsilon}(u_{s}) \, \mathrm{d} s. \end{split}\] Moreover, note that $M=M^{[F_h]}$, where $F_h \in D(\mathcal{E})$ is given by \[ F_h(z) := \langle z, h \rangle, \quad z \in K. \] Hence, by Theorem 5.2.3 in \cite{fukushima2010dirichlet}, $\mu_{<M>}$ is given by $\|h\|_{L^{2}(0,1)}^{2} \cdot \nu$. For the last statement, we apply \cite[Corollary 5.2.1]{fukushima2010dirichlet}. \end{proof} \subsection{A distinction result} As a consequence of our IbPFs and the above constructions, we can prove that the Markov process $(u_t)_{t\geq 0}$ constructed above is not identically equal in law to the process corresponding to the modulus of the solution $(v_t)_{t\geq 0}$ to the stochastic heat equation, as one could be tempted to infer in analogy with the relation between the invariant measures $\mu$ and $\nu$. Let $K^{\mathbb{R}_+}$ denote the space of functions from $\mathbb{R}_{+}$ to $K$, endowed with the product $\sigma$-algebra. For all $x \in K$, let $P_{x}$ be the law, on $K^{\mathbb{R}_+}$, of the Markov process $(u_t)_{t \geq 0}$ associated with $\mathcal{E}$, started from $x$. Similarly, for all $z \in H$, let $\mathbf{P}_{z}$ be the law, on $K^{\mathbb{R}_+}$, of $(|v_{t}|)_{t \geq 0}$, where $(v_{t})_{t \geq 0}$ is the solution of the stochastic heat equation \eqref{solution_she}, with $v_{0}=z$. \begin{thm} \label{dist_res} \[\mu \left( \{ z \in H : \, P_{|z|} \neq \mathbf{P}_{z} \} \right) > 0. \] \end{thm} \begin{proof} Assume by contradiction that $P_{|z|} = \mathbf{P}_{z}$ for $\mu$-a.e. $z \in H$. Then, recalling that $(\mathbf{Q}_{t})_{t \geq 0}$ denotes the semigroup associated with $\Lambda$, and $(Q_{t})_{t \geq 0}$ the semigroup associated with $\mathcal{E}$, we would have \[ \mathbf{Q}_{t}(f \circ j) = (Q_{t} f) \circ j, \quad \mu - \text{a.e.}, \] for all $t \geq 0$ and $f \in L^{2}(\nu)$. Therefore, the corresponding families of resolvents $(\mathbf{R}_{\lambda})_{\lambda >0}$ and $(R_{\lambda})_{\lambda >0}$ would satisfy, for all $f \in L^{2}(\nu)$ \[\mathbf{R}_{1} (f \circ j) = (R_{1} f) \circ j, \] where the equality holds in $L^{2}(\mu)$. In particular, this shows that $(R_{1} f) \circ j \in D(\Lambda)$ for any $f$ as above. We then claim that, for all $F \in D(\Lambda)$, $\Pi F = \mathbb{E} [F(\beta) \, | \, |\beta| \,]$ $\mu$-a.e.
Indeed, by the previous observations, for all $f \in L^{2}(\nu)$, it holds \begin{equation}\label{pi_is_cond_exp} \begin{split} \int_{H} (f \circ j )(z) F(z) \, \mathrm{d} \mu(z) &= \Lambda_{1}( \mathbf{R}_{1}(f \circ j), F) = \Lambda_{1}( (R_{1}f) \circ j, F) \\&= \mathcal{E}_{1} (R_{1}f , \Pi F) = \int_{K} f(x) (\Pi F) (x) \, \mathrm{d} \nu(x), \end{split}\end{equation} i.e. $\Pi F = \mathbb{E} [F(\beta) \, | \, |\beta| \,]$ $\mu$-a.e., as claimed. By \eqref{pi_is_cond_exp} and the first equality in Lemma \ref{projection}, we deduce that, for all $f \in D(\mathcal{E})$ and $F \in D(\Lambda)$ \[\Lambda(F,f \circ j) = \mathcal{E}(\Pi F,f). \] Consider now the process $(v_{t})_{t \geq 0}$ associated with $\Lambda$ and started from $v_{0}=\beta$, where $\beta$ is a Brownian bridge on $[0,1]$. Consider also the process $(u_{t})_{t \geq 0}$ associated with $\mathcal{E}$ under the law $\mathbb{P}_{\nu}$ (so that, in particular, $u_{0}\overset{(d)}{=}|\beta|$). Thus the processes $v$ and $u$ are stationary, and $|v| \overset{(d)}{=} u$ by our assumption. Let us set \[ A_t:=\langle |v_t| , h \rangle - \langle |v_0| , h \rangle - \frac{1}{2} \int_0^t\langle |v_{s}| , h'' \rangle \, \mathrm{d} s, \] \[ C_t:=\langle u_t , h \rangle - \langle u_0 , h \rangle - \frac{1}{2} \int_0^t \langle u_s , h'' \rangle \, \mathrm{d} s. \] Let further $k \in C^{2}([0,1])$ with $k(0)=k(1)=0$, and consider the functionals $\Psi_k:H\to\mathbb R$ and $\tilde\Psi_k:K\to\mathbb R$ given by \[ \Psi_k(z):=\exp(\langle k,z\rangle), \qquad \tilde\Psi_k(y):=\mathbb E\left[\Psi_k(\beta)\,|\, |\beta|=y\, \right], \qquad y\in K. \] Note that $\Psi_{k} \in D(\Lambda)$, and recall that, by the above remarks, $\tilde{\Psi}_{k} = \Pi \Psi_{k}$ $\mu$-a.e., so in particular $\tilde{\Psi}_{k} \in D(\mathcal{E})$. We then have \[ \begin{split} &J(t):=- \frac{d}{dt}\mathbb E\left[ A_t\, \tilde\Psi_k(|v_0|)\right] = \\ & = - \frac{d}{dt}\mathbb E\left[ (\langle u_t , h \rangle - \langle u_0 , h \rangle )\, \tilde\Psi_k(u_0)\right] + \frac{1}{2} \frac{d}{dt} \mathbb{E} \left[\int_{0}^{t} \langle h'',|v_{s}|\rangle \, \mathrm{d} s \, \Psi_k(\beta)\right] \\ &= \mathcal{E}(\langle \cdot , h \rangle \, , \, \tilde \Psi_k) + \frac{1}{2} \mathbb E[\langle h'',|\beta|\rangle\Psi_k(\beta)] = \Lambda(\langle |\cdot| , h \rangle \, , \, \Psi_k) + \frac{1}{2} \mathbb E[\langle h'',|\beta|\rangle\Psi_k(\beta)] \\ & = \frac{1}{2} \mathbb E[\langle \nabla \Psi_k(\beta),{\rm sign}(\beta)\,h\rangle + \langle h'',|\beta|\rangle\Psi_k(\beta)] = \mathbb E\left[\Psi_k(\beta)\int_0^1 h \, :\dot{\beta}^2: \, \mathrm{d} L^0\right] \end{split} \] by (3.10) in \cite{zambotti2005integration}, or rather its analogue for the Brownian bridge as stated in Remark 1.3 of \cite{grothaus2016integration}. But, by \cite[Corollary 3.4]{zambotti2005integration} and \cite[Theorem 3.2]{grothaus2016integration}, the last quantity equals \[ \begin{split} \sqrt{\frac{1}{2 \pi}} \, e^{\frac{1}{2} \langle Qk, k \rangle} \int_{0}^{1} \frac{h_r}{\sqrt{r(1-r)}} \exp\left( - \frac{K_{r}^{2}}{2r(1-r)} \right) \lambda(K'_r, -K_{r}, r) \, \mathrm{d} r, \end{split} \] where $K= Q k$, with $Q$ the covariance operator of $\beta$, \[ (Q k)_{r} = \int_{0}^{1} (r \wedge \sigma - r \sigma) \, k_{\sigma} \, \mathrm{d} \sigma, \qquad r \in [0,1], \] and $\lambda : \mathbb{R}^{2} \times [0,1] \to \mathbb{R}$ is defined by \[ \lambda(x,y,r) := x^{2} + xy \frac{1-2r}{r(1-r)} + y^{2}\frac{(1-2r)^{2}}{4r^{2}(1-r)^{2}} - \frac{1}{4r(1-r)}, \quad x,y \in \mathbb{R}, \ r \in [0,1]. 
\] Hence, \begin{equation} \label{quantity_one} \begin{split} J(t) = \sqrt{\frac{1}{2 \pi}} \, e^{\frac{1}{2} \langle Qk, k \rangle} \int_{0}^{1} \frac{h_r}{\sqrt{r(1-r)}} \exp\left( - \frac{K_{r}^{2}}{2r(1-r)} \right) \lambda(K'_{r}, -K_{r}, r) \, \mathrm{d} r. \end{split} \end{equation} On the other hand \[ \begin{split} L(t):=- \left. \frac{d}{dt}\mathbb E\left[ C_t\, \tilde\Psi_k(u_0)\right] \,\right|_{t=0} & = \mathcal{E} (\Pi \Psi_k , \langle \cdot, h \rangle) + \frac{1}{2} \mathbb E[\langle h'',|\beta|\rangle \, \Pi \Psi_k(|\beta|)] \\ = \frac{1}{2}\mathcal{E}_{1}(\Pi U,\Pi \Psi_{k}) &= \frac{1}{4} \, \underset{\epsilon \to 0}{\lim} \, \mathbb{E} \left[\int_{0}^{1} h_r \, \rho''_{\epsilon}(|\beta_r|) \, \mathrm{d} r \, \Pi \Psi_k(|\beta|) \right], \end{split} \] where we used \eqref{IbPF_Dirichlet} to obtain the second equality, and the fact that $\Pi U = \underset{\epsilon \to 0}{\lim} \, \Pi U_{\epsilon}$ in $D(\mathcal{E})$, combined with \eqref{u_epsilon}, to obtain the third one. Therefore, recalling that $\Pi \Psi_{k} = \mathbb E(\Psi_{k} \, | \, |\beta|)$ $\mu$-a.e., we have \[\begin{split} L(t) = \frac14 \lim_{\epsilon\to 0} \mathbb{E} \left[\int_{0}^{1} h_r \, \rho''_{\epsilon}(|\beta_r|) \, \mathrm{d} r \, \Psi_{k}( \beta ) \right] = \frac14 \lim_{\epsilon\to 0} \mathbb{E} \left[ \int_{0}^{1} h_r \, \rho''_{\epsilon}(\beta_r) \, \mathrm{d} r \, e^{\langle k, \beta \rangle} \right]. \end{split} \] By the Cameron-Martin formula, for all $\epsilon >0$ \[ \begin{split} & \frac{1}{4} \, \mathbb{E} \left[\int_{0}^{1} h_r \, \rho''_{\epsilon}(\beta_r) \, \mathrm{d} r \, e^{\langle k, \beta \rangle} \right] = \\ &= \frac{1}{4} e^{\frac{1}{2} \langle Qk, k \rangle} \int_{0}^{1} \frac{h_r}{\sqrt{2 \pi r(1-r)}} \int_{\mathbb{R}} \rho_{\epsilon}''(a) \exp\left( - \frac{(a-K_{r})^{2}}{2r(1-r)} \right) {\rm d} a\, \mathrm{d} r \\ &\underset{\epsilon \to 0}{\to} \frac{1}{4} e^{\frac{1}{2} \langle Qk, k \rangle} \int_{0}^{1} \frac{h_r}{\sqrt{2 \pi r(1-r)}} \left[\frac{K_{r}^{2} - r(1-r)}{r^{2}(1-r)^{2}}\right] \exp\left( - \frac{K_{r}^{2}}{2r(1-r)} \right){\rm d}r. \end{split}\] Hence we obtain \begin{equation} \label{quantity_two} \begin{split} &L(t) = \frac{1}{4} e^{\frac{1}{2} \langle Qk, k \rangle} \int_{0}^{1} \frac{h_r}{\sqrt{2 \pi r(1-r)}} \left[\frac{K_{r}^{2} - r(1-r)}{r^{2}(1-r)^{2}}\right] \exp\left( - \frac{K_{r}^{2}}{2r(1-r)} \right){\rm d}r. \end{split} \end{equation} Since $|v|$ and $u$ have the same law, $J(t)=L(t)$ and therefore the right-hand sides of \eqref{quantity_one} and \eqref{quantity_two} above are equal. This being true for any $h \in C^{2}_{c}(0,1)$, we deduce that \[ \frac{K_{r}^{2} - r(1-r)}{4 r^{2}(1-r)^{2}} = \lambda(K'_{r}, -K_{r}, r), \] for a.e. $r \in (0,1)$, hence for all $r$ by continuity. We thus deduce that \begin{equation*} (K'_{r})^{2} - \frac{1-2 r}{r(1-r)} K_{r}K'_{r} - \frac{1}{r(1-r)} K_{r}^{2} = 0, \qquad \forall \, r \in (0,1). \end{equation*} Since we can choose $k \in C^{2}_{c}(0,1)$ such that $K = Q k$ does not satisfy the above equation, we obtain a contradiction. \end{proof} \section{Conjectures and open problems} \label{sect_conj_dynamics} Theorem \ref{statement_ibpf} above enables us to conjecture the structure of the Bessel SPDEs for $\delta < 3$. The idea is that the right-hand side of the IbPFs \eqref{exp_fst_part_ibpf_a_b} (respectively \eqref{exp_fst_part_ibpf_a_b_1}) corresponds to the logarithmic derivative of the measure $P^{\delta}$ for $\delta \in (0,3) \setminus \{1\}$ (resp.
$\delta=1$), which should yield the drift in the SPDEs we are looking for. More precisely, considering for instance the case $\delta \in (1,3)$, for all $\Phi \in \mathcal{S}$, we may rewrite the last term in the IbPF \eqref{exp_fst_part_ibpf_a_b} as follows \[ \begin{split} &- \kappa(\delta) \int_{0}^{1} h_{r} \int_{0}^{\infty} a^{\delta-4}\left( \Sigma^\delta_r\left(\Phi(X) \,|\, a\right) - \Sigma^\delta_r\left(\Phi(X) \,|\, 0\right) \right) \, \mathrm{d} a \, \, \mathrm{d} r = \\ &= - \kappa(\delta) \, \lim_{\epsilon \to 0} \lim_{\eta \to 0} \mathbb{E} \left[ \Phi(X) \int_{0}^{1} h_{r} \, \left( \frac{\mathbf{1}_{\{X_r \geq \epsilon\}}}{X_r^{3}} - 2 \frac{\epsilon^{\delta-3}}{3-\delta} \frac{\rho_{\eta}(X_r)}{X_r^{\delta-1}} \right) \, \mathrm{d} r \right], \end{split} \] where the mollifying functions $\rho_\eta, \eta >0$ are as in \eqref{def_mollifier}. As a consequence of this equality, we may write formally the gradient dynamics corresponding to $P^{\delta}$, $\delta \in (1,3)$, as follows \[ \partial_{t} u = \frac{1}{2} \partial^{2}_{x} u + \xi + \frac{\kappa(\delta)}{2} \lim_{\epsilon \to 0} \lim_{\eta \to 0} \, \left( \frac{\mathbf{1}_{\{u \geq \epsilon\}}}{u^{3}} - 2 \frac{\epsilon^{\delta-3}}{3-\delta} \frac{\rho_{\eta}(u)}{u^{\delta-1}} \right), \] where $\xi$ denotes space-time white noise on $\mathbb{R}_{+} \times (0,1)$. Assuming now the existence of a local time process $(\ell^a_{t,x})_{x \in (0,1), t, a \geq 0}$ satisfying the occupation times formula \eqref{otf} and possessing sufficient regularity at $a=0$, we could in turn write \[\lim_{\epsilon \to 0} \lim_{\eta \to 0} \, \left( \frac{\mathbf{1}_{u \geq \epsilon}}{u^{3}} - 2\frac{\epsilon^{\delta-3}}{3-\delta} \frac{\rho_{\eta}(u)}{u^{\delta-1}} \right) = \int_{0}^{+\infty} \ a^{\delta-4} (\ell^a_{t,x} - \ell^0_{t,x}) \, \mathrm{d} a, \] so the SPDE could be written: \[ \partial_{t} u = \frac{1}{2} \partial^{2}_{x} u + \xi + \frac{\kappa(\delta)}{2} \frac{\partial}{\partial t} \int_{0}^{+\infty} \ a^{\delta-4} (\ell^a_{t,x} - \ell^0_{t,x}) \, \mathrm{d} a. \] The same reasoning can be done for $\delta \in (0,1)$, yielding for that case \[ \partial_{t} u = \frac{1}{2} \partial^{2}_{x} u + \xi + \frac{\kappa(\delta)}{2} \frac{\partial}{\partial t} \int_{0}^{+\infty} \ a^{\delta-4} \,\mathcal{T}^{\,2}_{a} \ell^{(\cdot)}_{t,x} \, \mathrm{d} a. \] As for the critical case $\delta = 1$, as shown in Section \ref{sect_Dirichlet}, the dynamics is formally given by \eqref{formal1}, which we can rewrite using the local times as follows: \[ \partial_{t} u = \frac{1}{2} \partial^{2}_{x} u + \xi - \frac{1}{8} \frac{\partial}{\partial t} \frac{\partial^2}{\partial a^{2}} \ell^{a}_{t,x} \biggr\rvert_{a=0}. \] In all the SPDEs above, the unknown would be the couple $(u,\ell)$, where $u$ is a continuous nonnegative function on $\mathbb{R}_{+} \times (0,1)$, and, for all $x \in (0,1)$, $(\ell^{a}_{t}(x))_{a, t \geq 0}$ is a family of occupation times satisfying \eqref{otfo}. These conjectures raise several problems. Indeed, assuming that the process $u$ can be constructed - as done above for the case $\delta=1$ - it is at present unknown whether a family of occupation times $\ell$ satisfying \eqref{otfo} should exist and, if it does, whether it has the requested differentiability property. Moreover, pathwise uniqueness for such equations is at present an open problem. 
For instance, due to the lack of monotonicity, the techniques used in \cite{nualart1992white} to define a solution to the stochastic heat equation with reflection would not be of any help. We stress that the analogous SDE case of Bessel processes of dimension $\delta \in (0,1)$ is also a problem of interest in itself; these processes are not semi-martingales, but nonetheless satisfy the stochastic equation \[ X_{t} = x + \frac{\delta-1}{2} \int_{0}^{+\infty} a^{\delta-2} (\ell^{a}_{t} - \ell^{0}_{t}) \, \mathrm{d} a + B_{t}, \] where $\left(\ell^{a}_{t}\right)_{a,t \geq 0}$ is the diffusion local times process of the Bessel process $(X_{t})_{t \geq 0}$ (see \cite{revuz2013continuous}, Chapter XI, ex. 1.26). Even in this one-dimensional context, the only known method for solving this equation is to consider $Y_t:=X_t^2$ and show pathwise uniqueness for $Y$; this method breaks down for SPDEs since the It\^o formula produces very complicated terms, see the discussion in the Introduction. The Dirichlet form techniques used in Section \ref{sect_Dirichlet} above to construct $u$ in the case $\delta=1$ can also be applied successfully to treat the case $\delta=2$, see the forthcoming paper \cite{henri2018bessel}. However, for $\delta\in\,]0,3[\,\setminus\{1,2\}$, it is not even known whether the form which naturally generalizes $(\mathcal{E},\mathcal{F} \mathcal{C}^{\infty}_{b}(K))$ in Proposition \ref{closability} is closable and whether its closure is a quasi-regular Dirichlet form . We recall the main result of \cite{dalang2006hitting}: for all $\delta\geq 3$, we set \[ \zeta(\delta):=\sup\{k\geq\mathbb N: \exists t>0, \, 0<x_1<\ldots<x_k<1, \, u(t,x_i)=0 \quad i=1,\ldots,k\}, \] where $u$ is the solution to the $\delta$-Bessel SPDE \eqref{spde>3}-\eqref{spde=3}. Then we have \begin{equation}\label{conj} {\mathbb P}\left( \zeta(\delta)>\frac 4{\delta-2} \right)=0. \end{equation} In other words, a.s. $u$ hits the obstacle $0$ in at most $\lfloor\frac 4{\delta-2}\rfloor$ space points simultaneously in time. It is very tempting to conjecture that \eqref{conj} holds for all $\delta>2$ in other words, the $\delta$-Bessel SPDE would hit 0 at finitely many space points simultaneously in time for any $\delta>2$, but the number of such hitting points would tend to $+\infty$ as $\delta\downarrow 2$. The fact that $\delta=2$ is the critical value for this behaviour is clearly related to the fact that $\delta=2$ is also the critical dimension for the probability that the $\delta$-Bessel process or bridge hit 0. The transition between $\delta\geq 3$ and $\delta<3$ is visible at the level of the invariant measure, namely the $\delta$-Bessel bridge, since in the former case the measure is log-concave, while this property is lost in the latter case. Therefore the techniques of \cite{ASZ} based on optimal transport and gradient flows in metric spaces fail for $\delta<3$. In the same vein, the Strong Feller property holds easily for $\delta\geq 3$, while it is an open problem for $\delta<3$, again because the drift of the SPDE becomes highly non-dissipative. Still, the recent paper \cite{Henri18} of the first author shows that Bessel processes of dimension $\delta<1$ are Strong Feller even if their drift contains a renormalised local time. 
Moreover Tsatsoulis and Weber \cite{TW} have proved that the 2-dimensional stochastic quantization equation satisfies a Strong Feller property, although it is an equation which needs renormalisation; also Hairer and Mattingly \cite{HM18} have proved the Strong Feller property for a large class of equations with renormalised drifts. All this suggests that there may be hope that this technically very useful property holds also for $\delta$-Bessel SPDEs with $\delta<3$. \appendix \section{Proofs of two technical results}\label{Proofs} \begin{proof}[Proof of Proposition \ref{closability}] Since $D(\Lambda)$ contains all globally Lipschitz functions on $H$, for all $f \in \mathcal{F} \mathcal{C}^{\infty}_{b}(K)$ we have $f \circ j \in D(\Lambda)$. A simple calculation shows that for any $f\in \mathcal{F} \mathcal{C}^{\infty}_{b}(K)$ of the form \eqref{Fexp} we have \begin{equation} \label{derivative_functional_abs_val} \nabla (f\circ j)(z) = \nabla f (j(z)) \, \text{sgn}(z). \end{equation} Hence, for all $f,g \in \mathcal{F} \mathcal{C}^{\infty}_{b}(K)$, we have \[ \begin{split} \mathcal{E}(f,g) &= \frac{1}{2} \int \langle \nabla f(x) , \nabla g(x) \rangle \, \mathrm{d} \nu(x) = \frac{1}{2} \int \langle \nabla f(j(z)) , \nabla g(j(z)) \rangle \, \mathrm{d} \mu (z) \\ &= \frac{1}{2} \int \langle \nabla (f \circ j)(z) , \nabla (g \circ j)(z) \rangle \, \mathrm{d} \mu (z) = \Lambda(f \circ j, g \circ j), \end{split} \] where the third equality follows from \eqref{derivative_functional_abs_val}. This shows that the bilinear symmetric form $(\mathcal{E},\mathcal{F} \mathcal{C}^{\infty}_{b}(K))$ admits as an extension the image of the Dirichlet form $(\Lambda, D(\Lambda))$ under the map $j$. Since $\mathcal{F} \mathcal{C}^{\infty}_{b}(K)$ is dense in $L^{2}(\nu)$, this extension is a Dirichlet form. In particular, $(\mathcal{E},\mathcal{F} \mathcal{C}^{\infty}_{b}(K))$ is closable, its closure $(\mathcal{E},D (\mathcal{E}))$ is a Dirichlet form, and we have the isometry property \eqref{isometry}. There remains to prove that the Dirichlet form $(\mathcal{E},D (\mathcal{E}))$ is quasi-regular. Since it is the closure of $(\mathcal{E},\mathcal{F} \mathcal{C}^{\infty}_{b}(K))$, it suffices to show that the associated capacity is tight. Since $K$ is separable, we can find a countable dense subset $\{ y_{k}, \, k \in \mathbb{N} \} \subset K$ such that $y_k \neq 0$ for all $k \in \mathbb{N}$. Let now $\varphi \in C^{\infty}_{b}(\mathbb{R})$ be an increasing function such that $\varphi(t)=t$ for all $t \in [-1,1]$ and $\|\varphi'\|_{\infty} \leq 1$. For all $m \in \mathbb{N}$, we define the function $v_{m} : K \to \mathbb{R}$ by \[ v_{m}(z) := \varphi(\|z-y_{m}\|), \quad z \in K.\] Moreover, we set, for all $n \in \mathbb{N}$ \[w_{n}(z) := \underset{m \leq n}{\inf} v_{m}(z), \quad z \in K.\] We claim that $w_{n} \in D(\mathcal{E})$, $n \in \mathbb{N}$, and that $w_{n} \underset{n \to \infty}{\longrightarrow} 0$, $\mathcal{E}$ quasi-uniformly in $K$. Assuming this claim for the moment, for all $k \geq 1$ we can find a closed subset $F_{k}$ of $K$ such that $\text{Cap} (K \setminus F_{k}) < 1/k$, and $w_{n} \underset{n \to \infty}{\longrightarrow} 0$ uniformly on $F_{k}$. Hence, for all $\epsilon >0$, we can find $n \in \mathbb{N}$ such that $w_{n} < \epsilon$ on $F_{k}$. Therefore \[ F_{k} \subset \underset{m \leq n}{\bigcup} B(y_{m}, \epsilon) \] where $B(y, r)$ is the open ball in $K$ centered at $y \in K$ with radius $r >0$. This shows that $F_{k}$ is totally bounded. 
Since it is, moreover, complete as a closed subspace of a complete metric space, it is compact, and the tightness of $\text{Cap}$ follows. We now justify our claim. For all $i \in \mathbb{N}$, we set $l_i := \|y_i\| ^{-1} \, y_i$. Then for all $i \geq 1$, $l_{i} \in K$, $\|l_{i}\| = 1$ and, for all $z \in K$ \[ \|z\| = \underset{i \geq 0}{\sup} \, \langle l_{i}, z \rangle. \] Let $m \in \mathbb{N}$ be fixed. For all $i \geq 0$, let $u_{i}(z) := \underset{j \leq i}{\sup} \, \, \varphi( \, \langle l_{j}, z- y_{m} \rangle \, )$, $z \in K$. We have $u_{i} \in D(\mathcal{E})$, and, for $\nu$ - a.e. $z \in K$ \[\sum_{k=1}^{\infty} \frac{\partial u_{i}}{\partial e_{k}} (z) ^{2} \leq \underset{j \leq i}{\sup} \left( \sum_{k=1}^{\infty} \varphi'(\langle l_{j}, z - y_{m} \rangle )^{2} \, \langle l_{j}, e_{k} \rangle ^{2} \right) \leq 1, \] whence $\mathcal{E}(u_{i}, u_{i})\leq 1$. By the definition of $v_{m}$, as $i \to \infty$, $u_{i} \uparrow v_{m}$ on $K$, hence in $L^{2}(K, \nu)$. By \cite[I.2.12]{ma2012introduction}, we deduce that $v_{m} \in D(\mathcal{E})$, and that $ \mathcal{E}(v_{m}, v_{m}) \leq 1. $ Therefore, for all $n \in \mathbb{N}$, $w_{n} \in D(\mathcal{E})$, and $ \mathcal{E}(w_{n}, w_{n}) \leq 1. $ But, since $\{ y_{k}, \, k \in \mathbb{N} \}$ is dense in $K$, as $n \to \infty$, $w_{n} \downarrow 0$ on $K$. Hence $w_{n} \underset{n \to \infty}{\longrightarrow} 0$ in $L^{2}(K, \nu)$. This and the previous bound imply, by \cite[I.2.12]{ma2012introduction}, that the Ces\`{a}ro means of some subsequence of $(w_{n})_{n \geq 0}$ converge to $0$ in $D(\mathcal{E})$. By \cite[III.3.5]{ma2012introduction}, some subsequence thereof converges $\mathcal{E}$ quasi-uniformly to $0$. But, since $(w_{n})_{n \geq 0}$ is non-increasing, we deduce that it converges $\mathcal{E}$-quasi-uniformly to $0$. The claimed quasi-regularity follows. There finally remains to check that $(\mathcal{E}, D(\mathcal{E}))$ is local in the sense of Definition \cite[V.1.1]{ma2012introduction}. Let $u,v \in D(\mathcal{E})$ satisfying $\text{supp}(u) \cap \text{supp}(v) = \emptyset$. Then, $u \circ j$ and $v \circ j$ are two elements of $D(\Lambda)=W^{1,2}(\mu)$ with disjoint supports, and, recalling \eqref{isometry}, we have \[ \mathcal{E}(u,v) = \Lambda(u \circ j,v \circ j) = \frac{1}{2} \int_{H} \nabla (u \circ j) \cdot \nabla (v \circ j) \, \mathrm{d} \mu=0. \] The claim follows. \end{proof} \begin{proof}[Proof of Lemma \ref{density}] Recall that $D(\mathcal{E})$ is the closure under the bilinear form $\mathcal{E}_{1}$ of the space $\mathcal{F} \mathcal{C}^{\infty}_{b}(K)$ of functionals of the form $F = \Phi \bigr \rvert_{K}$, where $\Phi \in \mathcal{F} \mathcal{C}^{\infty}_{b}(H)$. Therefore, to prove the claim, it suffices to show that for any functional $\Phi \in \mathcal{F} \mathcal{C}^{\infty}_{b}(H)$ and all $\epsilon>0$, there exists $\Psi \in \mathscr{S}$ such that $\mathcal{E}_1(\Phi-\Psi,\Phi-\Psi) < \epsilon $. Let $\Phi \in \mathcal{F} \mathcal{C}^{\infty}_{b}(H)$. We set for all $\epsilon > 0$ \[ \Phi_{\epsilon}(\zeta) := \Phi(\sqrt{\zeta^{2} + \epsilon}), \quad \zeta \in H. \] A simple calculation shows that $\Phi_{\epsilon} \underset{\epsilon \to 0}{\longrightarrow} \Phi$ and $\nabla \Phi_{\epsilon} \underset{\epsilon \to 0}{\longrightarrow} \nabla \Phi$ pointwise, with uniform bounds $\|\Phi_{\epsilon}\|_{\infty} \leq \| \Phi \|_{\infty}$ and $ \| \nabla \Phi_{\epsilon} \|_{\infty} \leq \| \nabla \Phi \|_{\infty}$. 
Hence, by dominated convergence, $\mathcal{E}_1 (\Phi_{\epsilon} - \Phi, \Phi_{\epsilon} - \Phi) \underset{\epsilon \to 0}{\longrightarrow} 0$. Then, introducing for all $d \geq 1$ $(\zeta^{d}_{i})_{1 \leq i \leq d}$ the orthonormal family in $L^{2}(0,1)$ given by \[ \zeta^{d}_{i} := \sqrt{d} \ \mathbf{1}_{[\frac{i-1}{d}, \frac{i}{d}[}, \quad i = 1, \ldots, d, \] and setting \[ \Phi^{d}_{\epsilon}(\zeta) := \Phi_{\epsilon} \left( \left( \sum_{i=1}^{d} \langle \zeta_{d,i}, \zeta^{2} \rangle \right)^{\frac12} \right) = \Phi \left( \left( \sum_{i=1}^{d} \langle \zeta_{d,i}, \zeta^{2} \rangle + \epsilon \right)^{\frac12} \right) , \quad \zeta \in H, \] again we obtain the convergence $\mathcal{E}_1(\Phi^{d}_{\epsilon} - \Phi_{\epsilon}, \Phi^{d}_{\epsilon} - \Phi_{\epsilon}) \underset{d \to \infty}{\longrightarrow} 0$. There remains to show that any fixed functional of the form \[ \Phi (\zeta) = f\left( \langle \zeta_{1}, \zeta^{2} \rangle, \ldots, \langle \zeta_{d}, \zeta^{2} \rangle \right), \quad \zeta \in H \] with $d \geq 1$, $f \in C^{1}_{b}(\mathbb{R}_{+}^{d})$, and $(\zeta_{i})_{i=1, \ldots, d}$ a family of elements of $K$, can be approximated by elements of $\mathscr{S}$. Again by dominated convergence, we can suppose that $f$ has compact support in $\mathbb{R}_{+}^{d}$. We define $g\in C^{1}_{b}([0,1]^{d})$, \[ g(y) := f(-\ln(y_{1}), \cdots, -\ln(y_{d})), \quad y \in \,]0,1]^{d}, \] and $g(y):=0$ if $y_i=0$ for any $i=1,\ldots,d$. By a differentiable version of the Weierstrass Approximation Theorem (see Theorem 1.1.2 in \cite{llavona1986approximation}), there exists a sequence $(p_{k})_{k \geq 1}$ of polynomial functions converging to $g$ for the $C^{1}$ topology on $[0,1]^{d}$. Defining for all $k \geq 1$ the function $f_{k}: \mathbb{R}_{+}^{d} \to \mathbb{R}$ by \[ f_{k}(x) = p_{k}(e^{-x_{1}}, \cdots, e^{-x_{d}}), \quad x \in \mathbb{R}_{+}^{d}, \] we define $\Phi_{k} \in \mathscr{S}$ by \[ \Phi_{k} (\zeta) = f_{k} \left( \langle \zeta_{1}, \zeta^{2} \rangle, \ldots, \langle \zeta_{d}, \zeta^{2} \rangle \right), \quad \zeta \in H. \] Since $p_{k} \underset{k \to \infty}{\longrightarrow} g$ for the $C^{1}$ topology on $[0,1]^{d}$, $f_{k} \underset{k \to \infty}{\longrightarrow} f$ uniformly on $\mathbb{R}_{+}^{d}$ together with its first order derivatives. Hence, it follows that $\Phi_{k} \underset{k \to \infty}{\longrightarrow} \Phi$ pointwise on $K$ together with its gradient. It also follows that there is some $C>0$ such that for all $k \geq 1$ \[ \forall \zeta \in K, \quad |\Phi_{k}(\zeta)|^{2} + \|\nabla \Phi_{k}(\zeta)\|^{2} \leq C(1+ \|\zeta \|^{2}). \] Since the quantity in the right-hand side is $\nu$ integrable in $\zeta$, it follows by dominated convergence that $\mathcal{E}_1(\Phi_{k}-\Phi, \Phi_{k}-\Phi) \underset{k \to \infty}{\longrightarrow} 0$. This yields the claim. \end{proof} \end{document}
arXiv
\begin{document} \title{On the Approximation of Quantum Gates using Lattices} \begin{abstract} A central question in Quantum Computing is how matrices in $SU(2)$ can be approximated by products over a small set of "generators". A topology will be defined on $SU(2)$ so as to introduce the notion of a covering exponent \cite{letter}, which compares the length of products required to cover $SU(2)$ with $\varepsilon$ balls against the Haar measure of $\varepsilon$ balls. An efficient universal set over $PSU(2)$ will be constructed using the Pauli matrices, using the metric of the covering exponent. Then, the relationship between $SU(2)$ and $S^3$ will be manipulated to correlate angles between points on $S^3$ and give a conjecture on the maximum of angles between points on a lattice. It will be shown how this conjecture can be used to compute the covering exponent, and how it can be generalized to universal sets in $SU(2)$. \end{abstract} \section{Introduction}\label{Intro} A classical bit is the basic unit of information used in classical computing, which has the states 1 or 0. Quantum computing extends this concept using the notion of quantum bits. Dirac notation is used to denote the basic states $|0\rangle$ and $|1\rangle$. A quantum bit, or qubit, is then a pair of complex numbers $\alpha,\beta$ whose squared moduli give the probabilities of observing the qubit in the states $|0\rangle$ and $|1\rangle$, respectively. Thus, the quantum bit is represented as $\alpha |0\rangle + \beta |1\rangle$. Since these probabilities must sum to one, it must hold that $\abs{\alpha}^2+\abs{\beta}^2=1$. Therefore, qubits can be represented by unit vectors in $\mathbb{C}^2$. \\ This construction can be compounded to construct $n$-qubits, which are ordered collections of $n$ qubits. An $n$-qubit describes the probability that $n$ different qubits are in a particular configuration. Any $n$-qubit is thus taken to be the tensor product of these qubits, $$ (\alpha_1|0\rangle +\beta_1|1\rangle )\otimes \cdots \otimes (\alpha_n|0\rangle +\beta_n|1\rangle ) $$ As mentioned above, 1-qubits form the unit sphere in $\mathbb{C}^2$, and it follows that $n$-qubits form vectors in $(\mathbb{C}^2)^{\otimes n}$. In classical computers, gates are functions of classical bits, such as the AND, OR, and NOT gates. A quantum gate follows naturally as a linear function over the vector space $\mathbb{C}^{2^n}$. However, a defining property of quantum gates is that they are reversible: they are invertible, and more specifically they are unitary.\\ Since each $1$-qubit is a unit vector, $1$-qubit quantum gates must take unit vectors to unit vectors; by convention they are moreover normalized, up to a global phase, to have determinant $1$. Let $SU(2)$ denote the collection of $2\times 2$ unitary matrices with determinant $1$; a short numerical illustration of these definitions is given in the sketch below. Then, $n$-qubit gates can be formed by tensoring $1$-qubit matrices with the Controlled NOT gate: $$ CNOT = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 &0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ \end{bmatrix} $$ Thus, questions that are harder to answer for $n$-qubit gates can be extrapolated from the answer over $1$-qubit gates \cite{Selinger}. The main practical goal of quantum computing is to build a quantum computer. In order for a quantum computer to be constructed, a finite base set of quantum gates must be chosen so that they generate $SU(2)$. However, it turns out that requiring exact generation is not practical.
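As a purely illustrative aside (not part of the formal development; the helper names below are ours and only NumPy is assumed), the following short sketch builds a $1$-qubit state and checks that a gate in $SU(2)$ preserves its norm:
\begin{verbatim}
import numpy as np

# A 1-qubit state alpha|0> + beta|1> is a unit vector in C^2.
alpha, beta = 0.6, 0.8j
psi = np.array([alpha, beta])
assert np.isclose(np.linalg.norm(psi), 1.0)

# A 1-qubit gate: a unitary matrix with determinant 1 (an element of SU(2)).
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(U.conj().T @ U, np.eye(2))
assert np.isclose(np.linalg.det(U), 1.0)

# Gates preserve norms, so they send qubit states to qubit states.
assert np.isclose(np.linalg.norm(U @ psi), 1.0)
\end{verbatim}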
See the papers \cite{239}\cite{drag}\cite{262}\cite{236}\cite{200} and the references cited therein for examples of coverings of compact sets in Euclidean space, as well as some of their applications. Thus, gate sets are constructed so that the elements they generate can approximate any quantum gate. Defining how a gate approximates another gate is half the battle; this usually entails constructing a metric-induced topology on $SU(2)$, and in general gate sets which generate dense subsets of $SU(2)$ are chosen. In a given context, a set which generates a dense subset of $SU(2)$ is referred to as a universal set in $SU(2)$; by definition, such a set can approximate any gate in $SU(2)$ with arbitrary precision according to the chosen measure of approximation. However, as in all computing, the question becomes how to choose efficient universal sets to approximate elements in $SU(2)$. Efficient here means that the fewest matrices (or the smallest value of some generalized notion of cost) are required to approximate all elements of $SU(2)$.\\ In this paper, the goal is to construct a universal gate set $T$ in $SU(2)$ that efficiently approximates all of $SU(2)$ using a natural and simple notion of distance. A quantity called the covering exponent given by \cite{letter} will be used to measure the efficiency of a gate set in approximating every element of $SU(2)$. In general, $T$ will be constructed to minimize the maximal cost of approximating any gate. Since quantum gates are very similar up to scalars, it is also useful to consider how $T$ approximates the quotient $PSU(2)$ of $SU(2)$ (the equivalence classes of $SU(2)$ under multiplication by $-1$). It will be shown that $T$ can efficiently approximate $PSU(2)$, but does not quite efficiently approximate $SU(2)$. \section{Background}\label{sec:Back} As shown in \cite{Selinger}, gates such as the controlled NOT gate can be tensored with 1-qubit gates without much cost to approximate 2-qubit gates very well. Thus, a universal gate set on $SU(2)$ can easily be extended to a universal gate set on $SU(2^n)$. Furthermore, gates do not vary greatly up to constants. Thus, it is often convenient to study approximation over $PSU(2)$. The projective special unitary group, $PSU(n)$, is defined as $SU(n)/Z(SU(n))$. For $n=2$, $Z(SU(2))\cong \mathbb{Z}/2\mathbb{Z}$. Another reason to use the case of 1-qubit quantum gates is this property, which gives that $PSU(2) \cong SU(2)/\lbrace I,-I\rbrace$. Additionally, it is useful that $PSU(2) \cong SO(3)$. Thus, the choice may not be largely important in a given application, but the choice must be consistent in order for the math to work. For constructions not dependent on the choice, $G$ will be used to represent either $SU(2)$ or $PSU(2)$. It will be apparent from the context, and reiterated when necessary, which choice is being used. \subsection{Structure of $SU(2)$}\label{ssc:SO} It is an elementary fact that any element $M \in SU(2)$ can be written in terms of $\alpha,\beta \in \mathbb{C}$ as $$ \begin{bmatrix} \alpha & \beta \\ -\bar{\beta} & \bar{\alpha} \\ \end{bmatrix} $$ Thus, $M$ can be associated with the vector $(x_1,x_2,x_3,x_4)$ in $\mathbb{R}^4$ formed by the real and imaginary parts of $\alpha$ and $\beta$. It turns out that the map $M \mapsto (x_1,x_2,x_3,x_4)$ is a diffeomorphism. Note that $$\det M = \alpha\bar{\alpha}+\beta\bar{\beta} = \abs{\alpha}^2+\abs{\beta}^2 = 1$$ This relation allows sets in $SU(2)$ to be related to sets on $S^3$. It is a powerful tool in computing the efficiency of universal sets of $SU(2)$.
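This correspondence is easy to verify numerically; the following NumPy sketch (an illustrative aside, with helper names of our own choosing) maps a matrix of the above form to the vector of real and imaginary parts of $\alpha$ and $\beta$ and checks that it lies on $S^3$:
\begin{verbatim}
import numpy as np

def random_su2():
    # Draw alpha, beta in C with |alpha|^2 + |beta|^2 = 1.
    v = np.random.randn(4)
    v /= np.linalg.norm(v)
    alpha, beta = v[0] + 1j * v[1], v[2] + 1j * v[3]
    return np.array([[alpha, beta], [-np.conj(beta), np.conj(alpha)]])

M = random_su2()
alpha, beta = M[0, 0], M[0, 1]
x = np.array([alpha.real, alpha.imag, beta.real, beta.imag])

# det M = |alpha|^2 + |beta|^2 = 1, so the associated vector lies on S^3.
assert np.isclose(np.linalg.det(M).real, 1.0)
assert np.isclose(np.linalg.norm(x), 1.0)
\end{verbatim}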
However, unlike $S^3$, $SU(2)$ does not have a standard topology by convention. Before notions of universality and closeness can be used, $SU(2)$ must be set up as a metric space with an induced topology. Define the distance between two matrices $M,N$ as \begin{equation}\label{eq:metric} d_G(M,N) = \sqrt{1-\frac{\abs{Tr(M^{\dag}N)}}{2}} \end{equation} where $M^{\dag}$ represents the conjugate transpose of $M$. Let $M,N,P\in SU(2)$. Most of the conditions for a metric are straightforwardly derivative of basic properties from the trace function and $SU(2)$. More interestingly, it is invariant under left and right multiplication as shown below \begin{align*} d_G(PM,PN) &= \sqrt{1-\frac{\abs{Tr((PM)^{\dag}(PN))}}{2}} \\ &= \sqrt{1-\frac{\abs{Tr(M^{\dag}P^{\dag}PN)}}{2}} \\ &= \sqrt{1-\frac{\abs{Tr(M^{\dag}N)}}{2}} \\ &= d_G(M,N)\\ d_G(MP,NP) &= \sqrt{1-\frac{\abs{Tr((MP)^{\dag}(NP))}}{2}} \\ &= \sqrt{1-\frac{\abs{Tr((NP)(MP)^{\dag})}}{2}} \\ &= \sqrt{1-\frac{\abs{Tr(NPP^{\dag}M^{\dag})}}{2}} \\ &= \sqrt{1-\frac{\abs{Tr(NM^{\dag})}}{2}} \\ &= \sqrt{1-\frac{\abs{Tr(M^{\dag}N)}}{2}}\\ &= d_G(M,N) \end{align*} Thus, $d_G(MN,N)=d_G(M,I)$. That implies that a matrix $M$ acting on $N$ can only move it as far as $d_G(M,I)$. This is convenient since $$ d_G(M,M)=d_G(I,I)=\sqrt{1-\frac{\abs{Tr(I^{\dag}I)}}{2}}=\sqrt{1-\frac{2}{2}}=0$$ \com{Haar might only work on $SU(2)$, Haar biinvariant?}With this metric, then there is an induced topology from the metric space $(G,d_G)$ using balls as open sets. A Haar measure on $G$ is a measure $\mu : G\rightarrow \mathbb{R}_{>0}$ so that $\mu(G)=1$ and $\mu(MS)=\mu(SM)=\mu(S)$ where $M \in G$ and $S\subset G$ is a Borel subset of $G$. Then for $M \in G$ and $\varepsilon > 0$, the size of a ball $B_G(M,\varepsilon)$ will be evaluated as $\mu(B_G(M,\varepsilon))$. Thus, every time $G$ is referenced, the measure space $(G,d_G,\mu)$ will be the object being used. \subsection{Universal Sets}\label{ssc:US} Let $\Gamma$ be a finite subset of $G$. The set $\Gamma$ is said to be universal in $G$, with respect to a chosen topology, if the subgroup of $G$ generated by $\Gamma$ is dense. If $\Gamma$ is not universal, then there will be open balls that contain no elements generated by $\Gamma$. A well known theorem cited in \cite{nielsen} expands on the importance of universal sets. \begin{theorem}{(Solovay-Kitaev)}\label{thm:SKT} Let $\Gamma$ be a finite universal set in $SU(n)$ and $\varepsilon > 0$. Then there exists a constant $c$ such that for any $M \in SU(n)$, there is a finite product $S$ of gates in $\Gamma$ of length $O\big(\log^c\big(\frac{1}{\varepsilon}\big)\big)$ such that $d_G(S,M) < \varepsilon$. \end{theorem} Universality of $\Gamma$ gives that any one matrix can be approximated with arbitrary precision. Theorem \ref{thm:SKT} gives that $\Gamma$ can approximate $SU(n)$ with arbitrary efficiency and provides a estimation for the maximum length required to achieve this approximation. This theorem provides justification for studying the efficiency of universal gate sets in approximating all of $SU(2)$, instead of specific matrices. As computers are not typically constructed to perform single calculations, this is much more useful. \\ To consider the efficiency of a universal set, first the idea of cost must be developed. In this paper, the notion of height from \cite{letter} will be used. Let $w$ be a weight function on $\Gamma$. 
Then $\forall \gamma \in \langle\Gamma\rangle$ define the height of $\gamma$ in $\Gamma$ as \begin{equation}\label{eq:height} h(\gamma) = \min \setof{\sum\limits_{i}w(c_i):c_i \in \Gamma,\gamma=\prod c_i} \end{equation} Note that this notion of height is heavily dependent on the choice of $w$. Thus all results should be interpreted in the context of the chosen weight, and the weights used should be well motivated. Given a choice of weight, define the following sets for $t > 0$ \begin{align*} U_{\Gamma}(t) &= \setof{\gamma \in \langle \Gamma \rangle : h(\gamma)=t}\\ V_{\Gamma}(t) &= \setof{\gamma \in \langle \Gamma \rangle : h(\gamma) \leqslant t} \end{align*} Thus, if one builds gates by successively taking products in $\Gamma$, then $U_{\Gamma}(t)$ consists of the gates first reached at cost $t$ and $V_{\Gamma}(t)$ of all gates reachable at cost at most $t$. The sets $U_{\Gamma}(k)$ are pairwise disjoint, which gives a useful identity: \begin{equation}\label{eq:VUU} V_{\Gamma}(t) = \bigsqcup\limits_{0 \leqslant k \leqslant t} U_{\Gamma}(k) \end{equation} Let $\varepsilon > 0$. Define the covering length of $\Gamma$ within $\varepsilon$, denoted $t_\varepsilon$ as in \cite{letter}, as follows \begin{equation}\label{eq:te} t_\varepsilon = \min \big\lbrace t \in \mathbb{N} : G \subset \bigcup\limits_{\gamma \in V_{\Gamma}(t)} B_G(\gamma,\varepsilon)\big\rbrace \end{equation} The calculation of $t_\varepsilon$ is the ultimate prize: if it can be computed, or even bounded, as a function of $\varepsilon$, then $t_\varepsilon$ provides an explicit measure of how much cost it takes to approximate $SU(2)$. However, it doesn't quite give the whole picture. For one, comparing the covering lengths of universal sets is complicated. It is within the realm of reason that perhaps $t_\varepsilon$ does not grow uniformly or otherwise behaves pathologically (although it is at least non-decreasing), which could complicate comparisons. \subsection{Covering Exponent}\label{ssc:CE} Let $\Gamma$ be a universal set in $G$, and $\varepsilon > 0$. Per the definition of a Haar measure, for any $t>0$ such that $$ G \subset \bigcup\limits_{\gamma \in V_\Gamma(t)} B_G(\gamma,\varepsilon) $$ it follows $$ \mu\left( \bigcup\limits_{\gamma \in V_{\Gamma}(t)}B_{G}(\gamma,\varepsilon) \right)\geqslant \mu(G) =1 $$ By construction, $t_\varepsilon$ is the smallest such $t$. Let $B_G(\varepsilon)$ denote $B_G(I,\varepsilon)$. Now, bounding the left-hand side using subadditivity and the invariance of $\mu$, \begin{align*} \mu\left( \bigcup\limits_{\gamma \in V_\Gamma(t_\varepsilon)} B_G(\gamma,\varepsilon) \right) &\leqslant \sum\limits_{\gamma \in V_\Gamma(t_\varepsilon)} \mu(B_G(\gamma,\varepsilon)) \\ &= \sum\limits_{\gamma \in V_\Gamma(t_\varepsilon)}\mu(B_G(I,\varepsilon))\\ &= \abs{V_\Gamma(t_\varepsilon)}\mu(B_G(\varepsilon)) \end{align*} Thus, combining this bound with the inequality above, \begin{equation}\label{IQ:VB} \abs{V_\Gamma(t_\varepsilon)}\mu(B_G(\varepsilon)) \geqslant 1 \end{equation} If $\Gamma$ approximates $G$ optimally, then inequality (\ref{IQ:VB}) becomes an equality. In general, as $\abs{V_\Gamma(t_\varepsilon)}$ becomes close to $\frac{1}{\mu(B_G(\varepsilon))}$, the overlap between the balls centered at points in $V_\Gamma(t_\varepsilon)$ is minimized. Thus, $\Gamma$ becomes more efficient at approximating $G$.
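As an illustrative aside, for very small generating sets the quantities $V_{\Gamma}(t)$ and $t_\varepsilon$ introduced above can be estimated by brute force. The NumPy sketch below (helper names are ours) assumes the simplest weight $w \equiv 1$ on the generators and estimates $t_\varepsilon$ by Monte-Carlo sampling of targets rather than by an exact covering computation:
\begin{verbatim}
import itertools
import numpy as np

def d(M, N):
    # The bi-invariant distance d_G introduced above.
    return np.sqrt(max(0.0, 1.0 - abs(np.trace(M.conj().T @ N)) / 2.0))

def haar_su2():
    # A Haar-random element of SU(2), via a uniform point on S^3.
    v = np.random.randn(4); v /= np.linalg.norm(v)
    a, b = v[0] + 1j * v[1], v[2] + 1j * v[3]
    return np.array([[a, b], [-np.conj(b), np.conj(a)]])

def V(gens, t):
    # All products of at most t generators: V_Gamma(t) for the weight w = 1.
    out = [np.eye(2, dtype=complex)]
    for length in range(1, t + 1):
        for word in itertools.product(gens, repeat=length):
            P = np.eye(2, dtype=complex)
            for g in word:
                P = P @ g
            out.append(P)
    return out

def t_eps(gens, eps, samples=500, t_max=6):
    # Monte-Carlo proxy for the covering length: the smallest t such that
    # every sampled target lies within eps of some element of V_Gamma(t).
    targets = [haar_su2() for _ in range(samples)]
    for t in range(t_max + 1):
        net = V(gens, t)
        if all(min(d(M, N) for N in net) < eps for M in targets):
            return t
    return None
\end{verbatim}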
For a universal set $\Gamma$ in $G$ and a Haar measure $\mu$ on $G$, the covering exponent as given as in \cite{letter} is defined as \begin{equation}\label{eq:CE} K(\Gamma) = \limsup\limits_{\varepsilon\rightarrow 0}\dfrac{\log \abs{V_\Gamma(t_\varepsilon)}}{\log\big( \frac{1}{\mu(B_G(\varepsilon))} \big)} \end{equation} Note that $K$ is heavily dependent on $t_\varepsilon$, and does vary with a choice of $G$. The second part is to be expected, $PSU(2)$ is almost half the elements of $SU(2)$ and should typically be easier to generate. The dependence of $K$ on $t_\varepsilon$ is more convenient than impeding, as it sticks close to the original idea of directly comparing lengths of products to measure efficiency. The covering exponent will be used in this paper to measure and construct efficient universal sets in $PSU(2)$ and $SU(2)$. \section{An Efficient Universal Set in PSU(2)}\label{sec:OptPSU} What makes a universal set optimal, or even efficient in approximating $SU(2)$? There are many theories and methods behind this question, however the angle taken here will be that of well distributed points on the sphere. There are many suitable choices for these points, as explored in \cite{book}\cite{262}. However, the one that will be explored is a solution set to the quadratic form $x_1^2+x_2^2+x_3^2+x_4^2 = 5^k$ for different integers $k > 0$. These points are fairly evenly distributed, but conveniently have a very simple structure. This allows the calculations for $K(T)$ to be simplified using a handful of results. The preliminary result will mirror the analysis of a similar set in \cite{letter}, which takes the form as the following theorem \begin{theorem} $K(T)\leqslant 2$ \end{theorem} Then a conjecture is proposed, which would improve this upper bound. However, the resulting theorem is a little less set in stone, taking the form as the following theorem. \begin{theorem} For any $\delta > 0$ such that Conjecture \ref{conj:SD} holds, $K(T) \leqslant 2 - \delta$ \end{theorem} \subsection{Construction of $T$}\label{ssc:ConT} To construct an efficient universal set, lattices in $\mathbb{R}^4$ will be projected onto $S^3$ and then related to quantum gates. To do this, some additional framework specific to this construction is needed. First, for any set $S \subset \mathbb{R}$ let $$H(S)=\setof{a+bi+cj+dk : a,b,c,d \in S}$$ be the set of quaternions with coefficients in $S$. Define the map \begin{align*} \Phi : \: &SU(2) \rightarrow H(\mathbb{R}) \\ &\begin{bmatrix} \alpha & \beta \\ -\overline{\beta} & \overline{\alpha} \\ \end{bmatrix} \mapsto \alpha + \beta j \end{align*} Note that $\Phi$ forms an injective homomorphism, as \begin{align*} \Phi(MN) &= \Phi\left( \begin{bmatrix} \alpha_M\alpha_N-\beta_M\overline{\beta_N} & \alpha_M \beta_N + \beta_M \overline{\alpha_N} \\ -\overline{\beta_M}\alpha_N-\overline{\alpha_M}\overline{\beta_N} & -\overline{\beta_M}\beta_N+\overline{\alpha_M}\overline{\alpha_N} \\ \end{bmatrix} \right) \\ &= \alpha_M\alpha_N-\beta_M\overline{\beta_N}+(\alpha_M\beta_N+\beta_M\overline{\alpha_N})j \\ &= (\alpha_M+\beta_M j)(\alpha_N+\beta_N j)\\ &= \Phi(M)\Phi(N) \end{align*} To construct the universal set, consider integer quaternion factors of the integer $5$. Listed out, they are $$ 1\pm 2i,1\pm 2j,1\pm 2k,2\pm i,2\pm j,2\pm k,5$$ Note that, $$2+i = (1-2i)i $$ Thus, the factors of $5$ can be generated by $$ 1+2i,1+2j,1+2k,1-2i,1-2j,1-2k,i,j,k $$ Let $T=\Phi^{-1}(\setof{1+2i,1+2j,1+2k,1-2i,1-2j,1-2k,i,j,k})$. 
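As a quick sanity check on this construction (an illustrative aside, with helper functions of our own), the generators can be manipulated directly as integer quaternions:
\begin{verbatim}
def qmul(p, q):
    # Hamilton product of quaternions written as (a, b, c, d) = a + bi + cj + dk.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qnorm(q):
    return sum(x * x for x in q)

# The nine generators, written as quaternions.
gens = [(1, 2, 0, 0), (1, -2, 0, 0), (1, 0, 2, 0), (1, 0, -2, 0),
        (1, 0, 0, 2), (1, 0, 0, -2), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]

# The six elements 1 +/- 2i, 1 +/- 2j, 1 +/- 2k have norm 5,
# while i, j, k are units of norm 1.
assert sorted(qnorm(g) for g in gens) == [1, 1, 1, 5, 5, 5, 5, 5, 5]

# The remaining factors of 5 are recovered up to units, e.g. 2 + i = (1 - 2i) i.
assert qmul((1, -2, 0, 0), (0, 1, 0, 0)) == (2, 1, 0, 0)
\end{verbatim}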
Then the set generated by $T$ corresponds to quaternion factorizations of $5^k$ for all integers $k \geqslant 0$. A product of $k$ generators of the form $1\pm 2i, 1\pm 2j, 1\pm 2k$ has quaternion norm (the sum of the squares of its coefficients) equal to $5^k$. Thus, the factorizations of $5^k$ correspond exactly to representations of $5^k$ as a sum of $4$ squares. Moreover, any factorization of $5^k$ is represented as a factorization of $5^{k+1}$ by adding a factor of $5$ to the beginning. Thus, up to factors of $5$, the collection of factorizations for $5^k$ contains all factorizations of $5^j$ for any $j < k$. Define the weight $w$ on $T$ so that $$ w(A)=\begin{cases} 0 & A=i,j,k \\ 1 & A=1\pm 2i, 1\pm 2j, 1\pm 2k \\ \end{cases} $$ Using this weight, the previous argument shows that $U_T(k)$ corresponds to all factorizations of $5^k$ over the quaternions, and $V_T(k)$ is in bijection with $U_T(k)$. As a result of Jacobi's four-square theorem, the function $$ r(n) = \sum\limits_{m \vert n} m $$ counts, up to multiplication by unit quaternions, the number of ways to write an odd integer $n$ as a sum of four integer squares. Thus, $r(n)$ also counts, up to units, the quaternions of norm $n$. So, \begin{align} \abs{V_T(k)} &= r(5^k) \nonumber \\ &= \sum\limits_{m \vert 5^k} m \nonumber \\ &= \sum\limits_{j = 0}^{k} 5^j \nonumber \\ &= \frac{5^{k+1}-1}{5-1} \nonumber \\ &= \frac{1}{4}(5^{k+1}-1) \label{eq:absV} \end{align} \subsection{Upper Bound of $K(T)$}\label{ssc:UBKT} Recall from (\ref{eq:CE}), $$ K(T) = \limsup\limits_{\varepsilon\rightarrow 0} \dfrac{\log \abs{V_T(t_\varepsilon)}}{\log\big( \frac{1}{\mu(B_G(\varepsilon))} \big)} $$ Some well known calculations give that, when $G=PSU(2)$, $\mu(B_G(\varepsilon))$ is approximately $\varepsilon^2$ for small $\varepsilon$\footnote{For an example, see \cite{mt}}. It will be shown that $\abs{V_T(t_\varepsilon)}$ can be bounded in a manner that allows the terms in $K(T)$ to be sufficiently simplified. The following proposition accomplishes this feat. \begin{proposition}\label{thm:Sar} There exists a constant $c>0$ such that for all sufficiently small $\varepsilon > 0$, $$\abs{V_T(t_\varepsilon)} \leqslant \frac{c t_\varepsilon^2}{\varepsilon^4}$$ \end{proposition} \begin{proof} In \cite{letter}, the same bound was shown for $T \setminus \setof{X,Y,Z}$\footnote{Note that the exclusion of these elements makes it such that $\Phi$ is no longer bijective over $T$}. It will be shown that the bound above persists when these elements are included. Consider $\mathbb{R}^3$ as the subspace of $H$ generated by $i,j,k$. Then for any $v \in \mathbb{R}^3$, $a \in H$ can act on $v$ by conjugation in $H$. Note that $a$ and $-a$ correspond to the same transformation. Thus, the choice of $G=PSU(2)$ allows for $G$ to be put in a 1-to-1 correspondence with elements of $SO(3)$. Thus, the action of $\gamma \in V_T(t)$ on a vector $v \in \mathbb{R}^3$ in this manner will be represented by juxtaposition. Let $k_\varepsilon$ be a point pair invariant kernel on $S^2$ so that the following hold: \begin{itemize} \item $k_\varepsilon (x,y) \geqslant 0$ for any $x,y \in S^2$ \item $\int\limits_{S^2} k_\varepsilon (x,y)dy = 1$ \item $k_\varepsilon (x,y) = 0$ when $d_{S^2}(x,y)\geqslant \varepsilon$ \item There is a constant $c' > 0$ so that $k_\varepsilon (x,x) \leqslant \dfrac{c'}{\varepsilon^2}$ for any $x\in S^2$ \end{itemize} Additionally, let $h_{k_\varepsilon}$ be the spherical transform of $k_\varepsilon$ such that $h_{k_\varepsilon}(j) \geqslant 0$ for any $j \geqslant 0$.
Then Hecke operators are constructed as follows, $$ (T_{t}f)(x) = \sum\limits_{\gamma \in V_T(t)}f(\gamma x)$$ Then from \cite{letter} and the spectral theorem, there is a sequence of real eigenvalues for the $T_t$ $$ \lambda_0(t),\lambda_1(t),\ldots $$ and an orthonormal basis of $L^2(S^2)$ of corresponding eigenfunctions $$ \phi_0, \phi_1, \ldots $$ Then, \cite{letter} gives that $$ \int_{S^2} k_\varepsilon (x,y)\phi_j(y)dy = h_{k_\varepsilon}(j)\phi_j(x) $$ In particular, $\phi_0$ is the basis vector of the 1-dimensional space of spherical harmonics of degree 0. Then $$ \phi_0(x) = \frac{1}{\sqrt{4\pi}} $$ Thus, $$ h_{k_\varepsilon}(0) = \int_{S^2} k_\varepsilon(x,y)dy = 1 $$ Then, as the $\phi_j$ form an orthonormal basis, $k_\varepsilon$ can be written as $$ k_\varepsilon(x,y) = \sum\limits_{j=0}^{\infty} h_{k_\varepsilon}(j)\phi_j(x)\phi_j(y) $$ Fix a point $x_0$ in $S^2$. Then for any $\gamma \in V_T(t)$, $$ k_\varepsilon (\gamma x_0,y) = \sum\limits_{j=0}^{\infty} h_{k_\varepsilon}(j)\phi_j(\gamma x_0)\phi_j(y) $$ Hence, \begin{align*} \sum\limits_{\gamma \in V_T(t)} k_\varepsilon(\gamma x_0, y) &= \sum\limits_{\gamma \in V_T(t)}\sum\limits_{j = 0}^{\infty} h_{k_\varepsilon}(j)\phi_j(\gamma x_0)\phi_j(y) \\ &= \sum\limits_{\gamma \in V_T(t)} h_{k_\varepsilon}(0)\phi_0(\gamma x_0)\phi_0(y)+\sum\limits_{j=1}^{\infty}\sum\limits_{\gamma \in V_T(t)} h_{k_\varepsilon}(j)\phi_j(\gamma x_0)\phi_j(y) \\ &= \sum\limits_{\gamma \in V_T(t)} 1\cdot \frac{1}{\sqrt{4\pi}} \cdot \frac{1}{\sqrt{4\pi}} + \sum\limits_{j=1}^{\infty}h_{k_\varepsilon}(j)\phi_{j}(y)\sum\limits_{\gamma \in V_T(t)} \phi_j(\gamma x_0) \\ &= \frac{\abs{V_T(t)}}{4\pi}+ \sum\limits_{j=1}^{\infty} h_{k_\varepsilon}(j)\phi_j(y) (T_t\phi_j)(x_0) \end{align*} By construction, $\phi_j$ is the eigenfunction of $T_t$ with eigenvalue $\lambda_j(t)$. Thus, $$ \sum\limits_{\gamma \in V_T(t)} k_\varepsilon(\gamma x_0 , y) = \frac{\abs{V_T(t)}}{4\pi} + \sum\limits_{j=1}^{\infty} h_{k_\varepsilon}(j)\phi_j(y)\lambda_j(t)\phi_j(x_0) $$ If $d_{S^2}(\gamma x_0 , y)> \varepsilon$ for all $\gamma \in V_T(t)$, then by construction $k_\varepsilon(\gamma x_0, y) = 0$ for all $\gamma \in V_T(t)$. If that were true, $$ \sum\limits_{\gamma \in V_T(t)} k_\varepsilon(\gamma x_0 , y) = \frac{\abs{V_T(t)}}{4\pi} + \sum\limits_{j=1}^{\infty} \lambda_j(t)h_{k_\varepsilon}(j)\phi_j(x_0)\phi_j(y) = 0 $$ which gives $$ \frac{\abs{V_T(t)}}{4\pi} = - \sum\limits_{j=1}^{\infty}\lambda_j(t)h_{k_\varepsilon}(j)\phi_j(x_0)\phi_j(y) \leqslant \sum\limits_{j=1}^{\infty} h_{k_\varepsilon}(j)\abs{\lambda_j(t)}\abs{\phi_j(x_0)}\abs{\phi_j(y)}$$ It is an elementary identity that $$\abs{\phi_j(x_0)}\abs{\phi_j(y)} \leqslant \frac{1}{2}(\abs{\phi_j(x_0)}^2+\abs{\phi_j(y)}^2)$$ Since $T$ only has 6 elements of non-zero weight, a theorem from \cite{letter} can be applied with the value of q=5.
$$ \abs{\lambda_j(t)} \leqslant 2tq^{\frac{t}{2}} $$ Combining these two, $$ \frac{\abs{V_T(t)}}{4\pi} \leqslant \sum\limits_{j=1}^{\infty} t5^{\frac{t}{2}}h_{k_\varepsilon}(j)(\abs{\phi_j(x_0)}^2+\abs{\phi_j(y)}^2) $$ Then, from the expression of $k_\varepsilon$ in terms of the basis $\phi_j$, it follows \begin{align*} \sum\limits_{j=1}^{\infty}t5^{\frac{t}{2}}h_{k_\varepsilon}(j)(\abs{\phi_j(x_0)}^2+\abs{\phi_j(y)}^2) &= t5^{\frac{t}{2}}\sum\limits_{j=1}^{\infty}h_{k_\varepsilon}(j)\abs{\phi_j(x_0)}^2+h_{k_\varepsilon}(j)\abs{\phi_j(y)}^2 \\ &\leqslant t5^{\frac{t}{2}}(k_\varepsilon(x_0,x_0)+k_\varepsilon(y,y)) \end{align*} Choose $z \in \setof{x_0,y}$ so that $k_\varepsilon(z,z) = \max \setof{k_\varepsilon(x_0,x_0),k_\varepsilon(y,y)}$. From (\ref{eq:absV}), $\abs{V_T(t)}=\frac{1}{4}(5^{t+1}-1)$. Therefore, $$ 5^{t+1} = 4\abs{V_T(t)}+1 \qquad \text{and so} \qquad 5^{\frac{t}{2}} \leqslant \sqrt{4\abs{V_T(t)}+1} $$ Which gives $$ t5^{\frac{t}{2}}(k_\varepsilon(x_0,x_0)+k_\varepsilon(y,y)) \leqslant 2t k_\varepsilon(z,z) \sqrt{4\abs{V_T(t)}+1} $$ However, from the construction of $k_\varepsilon$, $$ 2t k_\varepsilon(z,z) \sqrt{4\abs{V_T(t)}+1} \leqslant 2t\frac{c'}{\varepsilon^2}\sqrt{4\abs{V_T(t)}+1} $$ Note that the precise value of the constant is not vital, as long as the inequality in the definition of $k_\varepsilon$ holds. Since $4\abs{V_T(t)}+1 \leqslant 5\abs{V_T(t)}$ for $\abs{V_T(t)} \geqslant 1$, we may therefore, after enlarging $c'$ by a harmless constant factor, write $$ t5^{\frac{t}{2}}(k_\varepsilon(x_0,x_0)+k_\varepsilon(y,y)) \leqslant \frac{2tc'}{\varepsilon^2}\sqrt{4\abs{V_T(t)}} $$ Stringing it all together, $$ \frac{\abs{V_T(t)}}{4\pi} \leqslant t5^{\frac{t}{2}}(k_\varepsilon(x_0,x_0)+k_\varepsilon(y,y)) \leqslant \frac{2tc'}{\varepsilon^2}\sqrt{4\abs{V_T(t)}} $$ Isolating the $\abs{V_T(t)}$ term from the inequality above then gives $$ \sqrt{\abs{V_T(t)}} \leqslant \frac{16\pi c' t}{\varepsilon^2}$$ Which implies $$ \abs{V_T(t)} \leqslant \frac{256\pi^2 (c')^2 t^2}{\varepsilon^4}$$ Take $c=256 \pi^2 (c')^2$. Then if $d_{S^2}(\gamma x_0, y) > \varepsilon$ for all $\gamma \in V_T(t)$, $$ \abs{V_T(t)} \leqslant \frac{c t^2}{\varepsilon^4} $$ By construction, $t_\varepsilon$ is the smallest height at which no such uncovered point remains: if $t=t_\varepsilon$ then for any $y \in S^2$ there is a $\gamma \in V_T(t)$ so that $d_{S^2}(\gamma x_0, y) < \varepsilon$, while for $t = t_\varepsilon - 1$ (assuming $\varepsilon$ is small enough that $t_\varepsilon \geqslant 1$) there is some $y \in S^2$ with $d_{S^2}(\gamma x_0, y) > \varepsilon$ for all $\gamma \in V_T(t_\varepsilon - 1)$.
The bound above therefore applies at $t = t_\varepsilon - 1$. Since, by (\ref{eq:absV}), $\abs{V_T(t_\varepsilon)} \leqslant 6\abs{V_T(t_\varepsilon - 1)}$ and $(t_\varepsilon - 1)^2 \leqslant t_\varepsilon^2$, we conclude, after enlarging $c$ by a factor of $6$, that for all sufficiently small $\varepsilon > 0$, $$ \abs{V_T(t_\varepsilon)} \leqslant \frac{c t_\varepsilon^2}{\varepsilon^4} $$ \end{proof} Rearranging the inequality in Proposition \ref{thm:Sar} yields $$ \frac{1}{\varepsilon^2} \geqslant \sqrt{\frac{\abs{V_T(t_\varepsilon)}}{c t_\varepsilon^2}} $$ Then $K(T)$ can be calculated as \begin{align*} K(T) &= \limsup\limits_{\varepsilon \rightarrow 0} \frac{\log \abs{V_T(t_\varepsilon)}}{\log\big( \frac{1}{\mu(B_G(\varepsilon))} \big)} \\ &= \limsup\limits_{\varepsilon\rightarrow 0} \frac{\log\abs{V_T(t_\varepsilon)}}{\log\big(\frac{1}{\varepsilon^2}\big)} \\ &\leqslant \limsup\limits_{\varepsilon\rightarrow 0} \frac{\log \abs{V_T(t_\varepsilon)}}{\log\Big(\sqrt{\frac{\abs{V_T(t_\varepsilon)}}{ct_\varepsilon^2}}\Big)} \\ &= \limsup\limits_{\varepsilon\rightarrow 0} \frac{\log \abs{V_T(t_\varepsilon)}}{\frac{1}{2}\log\abs{V_T(t_\varepsilon)}-\frac{1}{2}\log\big( c\big)-\log\big(t_\varepsilon\big)}\\ &= \limsup\limits_{\varepsilon\rightarrow 0} \frac{\log_5\big( \frac{1}{4}\big(5^{t_\varepsilon+1}-1\big) \big)}{\frac{1}{2}\log_5\big( \frac{1}{4}\big(5^{t_\varepsilon+1}-1\big) \big)-\frac{1}{2}\log_5\big( c\big)-\log_5\big(t_\varepsilon\big)} \\ &= \limsup\limits_{\varepsilon\rightarrow 0} \frac{t_\varepsilon+1+\log_5\big( 1-5^{-t_\varepsilon-1} \big)+\log_5\big( \frac{1}{4}\big)}{\frac{1}{2}\big(t_\varepsilon+1+\log_5\big( 1-5^{-t_\varepsilon-1} \big)+\log_5\big( \frac{1}{4}\big)\big)-\frac{1}{2}\log_5\big( c\big)-\log_5\big(t_\varepsilon\big)} \\ &= \limsup\limits_{\varepsilon\rightarrow 0} \frac{1+\frac{1}{t_\varepsilon}\big(1+\log_5\big( 1-5^{-t_\varepsilon-1} \big)+\log_5\big( \frac{1}{4}\big)\big)}{\frac{1}{2}+\frac{1}{2t_\varepsilon}\big(1+\log_5\big( 1-5^{-t_\varepsilon-1} \big)+\log_5\big( \frac{1}{4}\big)\big)-\frac{1}{2t_\varepsilon}\log_5\big( c\big)-\frac{1}{t_\varepsilon}\log_5\big(t_\varepsilon\big)} \\ &= \limsup\limits_{\varepsilon\rightarrow 0} \frac{1+0}{\frac{1}{2}+0-0-0} \\ &= 2 \end{align*} Note that the base of the logarithm may be changed freely since the ratio is unaffected, and that we used $t_\varepsilon \rightarrow \infty$ as $\varepsilon \rightarrow 0$. Note also that $\mu(B_G(\varepsilon)) = O(\varepsilon^2)$ when $G=PSU(2)$, and that the constant in the $O$ term may be taken equal to $1$ here: any variation by a constant factor $k$ would contribute a term $\log\big( \frac{1}{k} \big)$ which, when divided by $t_\varepsilon$ as in the calculations above, tends to zero. \subsection{Refining the Upper Bound}\label{ssc:RUB} The general framework of the upper bound can hopefully be refined to yield an improved bound for $T$. Since $V_T(t_\varepsilon)$ provides a connection to $S^3$, an obvious place to look is the distribution of points on $S^3$. As with many point distributions on $S^3$, it is valuable to consider a way of calculating a mesh norm as done in \cite{257}\cite{walk}\cite{Saff}\cite{SD}.\\ Recall that in (\ref{eq:metric}), the metric was defined as $$d_G(X,Y) = \sqrt{1-\frac{\abs{Tr(X^{\dag}Y)}}{2}} $$ Thus, consider $Tr(X^{\dag}Y)$.
Some simple algebra yields \begin{align*} Tr(X^{\dag}Y) &= Tr\left( \frac{1}{\sqrt{\abs{x_1}^2+\abs{x_2}^2}} \frac{1}{\sqrt{\abs{y_1}^2+\abs{y_2}^2}} \begin{bmatrix} \overline{x_1} & -x_2 \\ \overline{x_2} & x_1 \end{bmatrix}\begin{bmatrix} y_1 & y_2 \\ -\overline{y_2} & \overline{y_1} \end{bmatrix} \right)\\ &= Tr\left(\frac{1}{\abs{\Phi(X)}\abs{\Phi(Y)}} \begin{bmatrix} \overline{x_1}y_1+x_2\overline{y_2} & \overline{x_1}y_2-x_2\overline{y_1} \\ \overline{x_2}y_1-x_1\overline{y_2} & \overline{x_2}y_2+x_1\overline{y_1} \\ \end{bmatrix} \right) \\ &= \frac{\overline{x_1}y_1+x_2\overline{y_2}+\overline{x_2}y_2+x_1\overline{y_1}}{\abs{\Phi(X)}\abs{\Phi(Y)}} \\ &= \frac{x_1\overline{y_1}+\overline{x_1}y_1+x_2\overline{y_2}+\overline{x_2}y_2}{\abs{\Phi(X)}\abs{\Phi(Y)}} \\ &= \frac{x_1\overline{y_1}+\overline{(x_1\overline{y_1})}+x_2\overline{y_2}+\overline{(x_2\overline{y_2})}}{\abs{\Phi(X)}\abs{\Phi(Y)}} \\ &= \frac{2Re(x_1\overline{y_1})+2Re(x_2\overline{y_2})}{\abs{\Phi(X)}\abs{\Phi(Y)}} \end{align*} Thus if $\langle \Phi(X),\Phi(Y) \rangle$ is defined on $H$ just as it is on $\mathbb{R}^4$, then the formula for the dot product on $\mathbb{C}$ as $\mathbb{R}^2$ can be extended as \begin{align*} \langle \Phi(X),\Phi(Y) \rangle &= Re((x_1+x_2 j)\overline{(y_1+y_2j)}) \\ &= Re((x_1+x_2j)(\overline{y_1}-\overline{y_2}j))\\ &= Re(x_1\overline{y_1}+x_2\overline{y_2}-x_1\overline{y_2}j+x_2j\overline{y_1}) \\ &= Re(x_1\overline{y_1}+x_2\overline{y_2}) \end{align*} where the conjugate in the first line is the quaternion conjugate (they match on $\mathbb{C}$). Thus, combined $$ Tr(X^{\dag}Y) = \frac{2\langle \Phi(X),\Phi(Y) \rangle}{\abs{\Phi(X)}\abs{\Phi(Y)}} $$ which gives \begin{align*} d_G(X,Y) &= \sqrt{1-\frac{\abs{Tr(X^{\dag}Y)}}{2}}\\ &=\sqrt{1-\frac{\langle \Phi(X),\Phi(Y) \rangle}{\abs{\Phi(X)}\abs{\Phi(Y)}}} \\ d_G(X,Y)^2 &= 1-\frac{\langle \Phi(X)\Phi(Y) \rangle}{\abs{\Phi(X)}\abs{\Phi(Y)}} \\ \end{align*} With this relation $d_G(X,Y)$ is equal to $1-\cos \theta$ where $\theta$ is the angle between $\Phi(X),\Phi(Y)$. Thus, the angular distribution of $V_T(t_\varepsilon)$ on $S^3$ can give a bound on $d_G(X,Y)$. As in section \ref{ssc:UBKT}, a bound on $\varepsilon$ gives a bound on $K(T)$. Hence, the goal is to provide an upper bound on $d_G(\Phi(X),\Phi(Y))$ for $\varepsilon > 0$, and then use that to provide an upper bound for $\varepsilon$. Since $d_G(\Phi(X),\Phi(Y))^2 = 1 - \cos \theta$, then any lower bound of $\cos \theta $ will bound $1-\varepsilon^2$ from above. \\ Now refer back to Section \ref{ssc:ConT}, where it was noted that $V_T(t)$ is in a bijection with solutions to the family of quadratic forms $x_1^2+x_2^2+x_3^3+x_4^2=5^l$ where $l\leqslant t$ is an integer. Then note that for any distinct $X,Y \in SU(2)$, $d_G(X,Y)^2$ can be calculated in terms of $\abs{\Phi(X)},\abs{\Phi(Y)},$ and $\langle \Phi(X),\Phi(Y)\rangle$. Suppose $X \in \langle T \rangle$ and $Y \in PSU(2)$ so that $X$ approximates $Y$ within $\varepsilon$. Then $$d_G(X,Y) \leqslant \varepsilon $$ which implies $$ d_G(X,Y)^2 = 1-\frac{\langle \Phi(X)\Phi(Y) \rangle}{\abs{\Phi(X)}\abs{\Phi(Y)}} \leqslant \varepsilon^2 $$ Thus, $$ \dfrac{\langle \Phi(X),\Phi(Y) \rangle}{\abs{\Phi(X)}\abs{\Phi(Y)}} \geqslant 1-\varepsilon^2$$ Note that this is the formula for the cosine of the angle between $\Phi(X)$ and $\Phi(Y)$. Thus, we formulate a conjecture on the necessary angles to calculate the covering exponent. 
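Before stating the conjecture, we record a short numerical check (a purely illustrative aside, with helper names of our own) of the relation just derived between $d_G$ and the angle $\theta$ between $\Phi(X)$ and $\Phi(Y)$; note the absolute value coming from the definition of $d_G$:
\begin{verbatim}
import numpy as np

def su2(a, b):
    return np.array([[a, b], [-np.conj(b), np.conj(a)]])

def phi(M):
    # Phi sends [[alpha, beta], [-conj(beta), conj(alpha)]] to the point
    # (Re alpha, Im alpha, Re beta, Im beta) of R^4.
    a, b = M[0, 0], M[0, 1]
    return np.array([a.real, a.imag, b.real, b.imag])

X = su2(0.6 + 0.0j, 0.8j)
Y = su2(np.sqrt(0.5) + 0.0j, np.sqrt(0.5) * 1j)

d_squared = 1.0 - abs(np.trace(X.conj().T @ Y)) / 2.0
cos_theta = np.dot(phi(X), phi(Y))   # |Phi(X)| = |Phi(Y)| = 1 here
assert np.isclose(d_squared, 1.0 - abs(cos_theta))
\end{verbatim}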
\begin{conjecture}\label{conj:SD} There is some $0 < \delta < 1$ so that for any $\varepsilon > 0$ and $a \in S^3$ there is a point $b \in \mathbb{Z}^4$ with some $k \in \mathbb{Z}$ so that $\abs{b} = 5^k$ and $\langle a,\frac{b}{5^k}\rangle \geqslant 1-5^{\frac{-k}{2-\delta}}$ \end{conjecture} Suppose the conjecture holds. For any matrix $M \in SU(2)$, $\Phi(M) \in S^3$. Thus, by the conjecture, there is a $b \in H(Z)$ and $k \in \mathbb{Z}$ so that $\abs{b}=5^k$ and $\langle\Phi(M),\frac{b}{5^k}\rangle > 1-5^{\frac{-k}{2-\delta}}$. Let $N=\frac{1}{\sqrt{5^k}}\Phi^{-1}(b)$. Then \begin{align*} d_G(M,N) &= \sqrt{1-\frac{\abs{Tr(M^{\dag}N)}}{2}} \\ &= \sqrt{1-\frac{\langle \Phi(M), \Phi(N) \rangle}{\abs{\Phi(M)}\abs{\Phi(N)}}} \\ &= \sqrt{1-\frac{\langle \Phi(M), \frac{b}{5^k} \rangle}{5^k \cdot \frac{1}{5^k}}} \\ &= \sqrt{1-\big\langle \Phi(M), \frac{b}{5^k} \big\rangle} \\ &\leqslant \sqrt{1-\big(1-5^{\frac{-k}{2-\delta}}\big)} \\ &= 5^{\frac{-k}{4-2\delta}} \end{align*} However, as noted earlier, $k$ can always be chosen to be $t_\varepsilon$ since $\Phi(V_T(t_\varepsilon))$ contains copies of $\Phi(U_T(k))$ for all $k$. Thus, for all $M \in SU(2)$, $b$ can be chosen such that $k=t_\varepsilon$ giving $$ d_G(M,N) \leqslant 5^{\frac{-t_\varepsilon}{4-2\delta}} $$ By construction, $t_\varepsilon$ is the smallest height such that $V_T(t_\varepsilon)$ can cover $SU(2)$ within a tolerance of $\varepsilon$. Then if $d_G(M,N) < \varepsilon$ it necessarily follows that $d_G(M,N) \leqslant 5^{\frac{-t_\varepsilon}{2-\delta}}$. Since this holds for all $\varepsilon > 0$, then $$ \varepsilon \leqslant 5^{\frac{-t_\varepsilon}{4-2\delta}} $$ Given this result, the upper bound from Section \ref{ssc:UBKT} can be rewritten as \begin{align*} K(T) &= \limsup\limits_{\varepsilon\rightarrow 0} \frac{\log \abs{V_T(t_\varepsilon)}}{\log\big(\frac{1}{\mu(B_G(\varepsilon))} \big)} \\ &= \limsup\limits_{\varepsilon\rightarrow} \frac{\log \abs{V_T(t_\varepsilon)}}{\log\big( \frac{1}{\varepsilon^2} \big)} \\ &\leqslant \limsup\limits_{\varepsilon\rightarrow 0} \frac{\log\abs{V_T(t_\varepsilon)}}{\log( 5^{\frac{t_\varepsilon}{2-\delta}})} \\ &= \limsup\limits_{\varepsilon \rightarrow 0} \frac{\log_5\big( \frac{1}{4} \big( 5^{t_\varepsilon+1} - 1 \big)\big)}{\log_5 \big(5^{\frac{t_\varepsilon}{2-\delta}}\big)} \\ &= \limsup\limits_{\varepsilon\rightarrow 0} \frac{\log_5\big( 5^{t_\varepsilon+1} \big)+\log_5\big( 1-5^{-t_\varepsilon-1} \big)+\log_5\big( \frac{1}{4}\big)}{\frac{t_\varepsilon}{2-\delta}} \\ &= \limsup\limits_{\varepsilon\rightarrow 0} \frac{t_\varepsilon+1+\log_5\big( 1-5^{-t_\varepsilon-1} \big)+\log_5 \big( \frac{1}{4} \big)}{\frac{t_\varepsilon}{2-\delta}} \\ &= \limsup\limits_{\varepsilon\rightarrow 0} \frac{1+\frac{1}{t_\varepsilon}\big(1+\log_5\big( 1-5^{-t_\varepsilon-1} \big)+\log_5\big(\frac{1}{4}\big)\big)}{\frac{1}{2-\delta}} \\ &= \limsup\limits_{\varepsilon\rightarrow 0} \frac{1+0}{\frac{1}{2-\delta}} \\ &= 2-\delta \end{align*} The conjecture suggests that such a $\delta$ exists per this construction of $T$, and that it directly gives the covering exponent. Thus, when the case of $G=PSU(2)$ is acceptable, this set $T$ provides a tangible and effective universal gate set for $G$. However, this setup described in this paper can be extrapolated to $SU(2)$ and other more general universal sets. It is important to note the construction of $T$ can be replicated by using different primes $p \equiv 1 \: (mod \: 4)$. Furthermore, $\abs{V_T(t_\varepsilon)}=\frac{1}{p-1}(p^{t_\varepsilon+1}-1)$. 
\\ \pagebreak Thus, assuming Conjecture \ref{conj:SD} holds \begin{align*} K(T) &= \limsup\limits_{\varepsilon\rightarrow 0} \frac{\log \abs{V_T(t_\varepsilon)}}{\log\big(\frac{1}{\mu(B_G(\varepsilon))} \big)} \\ &= \limsup\limits_{\varepsilon\rightarrow 0} \frac{\log \abs{V_T(t_\varepsilon)}}{\log\big( \frac{1}{\varepsilon^2} \big)} \\ &\leqslant \limsup\limits_{\varepsilon\rightarrow 0} \frac{\log\abs{V_T(t_\varepsilon)}}{\log( p^{\frac{t_\varepsilon}{2-\delta}})} \\ &= \limsup\limits_{\varepsilon \rightarrow 0} \frac{\log_p\big( \frac{1}{4} \big( p^{t_\varepsilon+1} - 1 \big)\big)}{\log_p \big(p^{\frac{t_\varepsilon}{2-\delta}}\big)} \\ &= \limsup\limits_{\varepsilon\rightarrow 0} \frac{\log_p\big( p^{t_\varepsilon+1} \big)+\log_p\big( 1-p^{-t_\varepsilon-1} \big)+\log_p\big( \frac{1}{4}\big)}{\frac{t_\varepsilon}{2-\delta}} \\ &= \limsup\limits_{\varepsilon\rightarrow 0} \frac{t_\varepsilon+1+\log_p\big( 1-p^{-t_\varepsilon-1} \big)+\log_p \big( \frac{1}{4} \big)}{\frac{t_\varepsilon}{2-\delta}} \\ &= \limsup\limits_{\varepsilon\rightarrow 0} \frac{1+\frac{1}{t_\varepsilon}\big(1+\log_p\big( 1-p^{-t_\varepsilon-1} \big)+\log_p\big(\frac{1}{4}\big)\big)}{\frac{1}{2-\delta}} \\ &= \limsup\limits_{\varepsilon\rightarrow 0} \frac{1+0}{\frac{1}{2-\delta}} \\ &= 2-\delta \end{align*} Thus, the choice of $p$ has no impact on the covering exponent for $T$ except for which $\delta$ a choice of $p$ allows the conjecture to hold. How $\delta$ changes with a change in $p$ is still an open question. Nevertheless, this framework allows for those computations on the 3-sphere to directly correlate to the efficiency of this construction. \\ \subsection{The Next Steps} The calculation of $\delta$ is not as straightforward as it seems, especially in the case of $T$. For very small values of $t_\varepsilon$, the largest holes form around the axes. However, the holes then shift towards the center of each sedecant (1/16th) of $S^3$. This shifting nature of the holes implies that any bounds on their size must be checked across the entirety of one sedecant. Since the sign of coordinates is irrelevant in whether they are a solution to a quadratic form, each sedecant should be identical. However, this calculation is still not simple by any means. Thus, other forms of well distributed points with more inherent bounds on their holes may yield better results. \\ For these reasons, Conjecture \ref{conj:SD} is quite open ended but opens the door for some concrete improvements on the upper bound of 2. This framework can be generalized to many of the other point sets mentioned in papers such as \cite{239}\cite{drag}\cite{262}\cite{Saff}\cite{book}. Each point set is still bounded by the same notion of "holes" in the distributions, which when bounded provide similar estimations on the covering exponent for the universal sets they generate. However, many of these point distributions use things like energy minimalization which, by construction minimize these holes. Work on this conjecture will provide a deeper understanding between competing quantum algorithms. \section*{Acknowledgments} Support from the University of Michigan Undergraduate Research Opportunity Program and Research Experiences for Undergraduates programs, the National Science Foundation, and the American Mathematical Society is greatly appreciated. We thank the contributions of Q. Liang and B. Mode to this paper. \end{document}
arXiv
EURASIP Journal on Image and Video Processing Reversible designs for extreme memory cost reduction of CNN training Tristan Hascoet ORCID: orcid.org/0000-0002-8160-60761 na1, Quentin Febvre3 na1, Weihao Zhuang1, Yasuo Ariki1,2 & Tetsuya Takiguchi1,2 EURASIP Journal on Image and Video Processing volume 2023, Article number: 1 (2023) Cite this article Training Convolutional Neural Networks (CNN) is a resource-intensive task that requires specialized hardware for efficient computation. One of the most limiting bottlenecks of CNN training is the memory cost associated with storing the activation values of hidden layers. These values are needed for the computation of the weights' gradient during the backward pass of the backpropagation algorithm. Recently, reversible architectures have been proposed to reduce the memory cost of training large CNN by reconstructing the input activation values of hidden layers from their output during the backward pass, circumventing the need to accumulate these activations in memory during the forward pass. In this paper, we push this idea to the extreme and analyze reversible network designs yielding minimal training memory footprint. We investigate the propagation of numerical errors in long chains of invertible operations and analyze their effect on training. We introduce the notion of pixel-wise memory cost to characterize the memory footprint of model training, and propose a new model architecture able to efficiently train arbitrarily deep neural networks with a minimum memory cost of 352 bytes per input pixel. This new kind of architecture enables training large neural networks on very limited memory, opening the door for neural network training on embedded devices or non-specialized hardware. For instance, we demonstrate training of our model to 93.3% accuracy on the CIFAR10 dataset within 67 minutes on a low-end Nvidia GTX750 GPU with only 1GB of memory. Over the last few years, Convolutional Neural Networks (CNN) have enabled unprecedented progress on a wide array of computer vision tasks. One disadvantage of these approaches is their resource consumption: training deep models within a reasonable amount of time requires special Graphical Processing Units (GPU) with numerous cores and large memory capacity. Given the practical importance of these models, a lot of research effort has been directed towards algorithmic and hardware innovations to improve their resource efficiency such as low-precision arithmetic [1], network pruning for inference [2], or efficient stochastic optimization algorithms [3]. In this paper, we focus on a particular aspect of resource efficiency: optimizing the memory cost of training CNNs. We envision several potential benefits from the ability to train large neural networks within limited memory: Democratization of deep learning research: Training large CNNs requires special GPUs with large memory capacity. Typical desktop GPUs memory capacity is too small for training large CNNs. As a result, getting into deep learning research comes with the barrier cost of either buying specialized hardware or renting live instances from cloud service providers. Reducing the memory cost of deep model training would allow training deep networks on standard graphic cards without the need for specialized hardware, effectively removing this barrier cost. In this paper, we demonstrate efficient training of a CNN on the CIFAR10 dataset (93.3% accuracy within 67 min) on an Nvidia GTX750 with only 1 GB of memory. 
On-device training: With mobile applications, a lot of attention has been given to optimizing inference on edge devices with limited computation resources. Training state-of-the-art CNNs on embedded devices, however, has still received little attention. Efficient on-device training is a challenging task due to the power efficiency, computation and memory optimization challenges it involves. As such, CNN training has thus far been relegated to large cloud servers, and trained CNNs are typically deployed to embedded device fleets over the network. On-device training would allow bypassing these server–client interactions over the network. We can think of several potential applications of on-device training, including:
Life-long learning: Autonomous systems deployed in evolving environments like drones, robots or sensor networks might benefit from continuous life-long learning to adapt to their changing environment. On-device training would enable such applications without the expensive communication burden of having edge devices continuously sending their data to remote servers over the network. It would also provide resilience to network failures in critical application scenarios.
Privacy: In privacy-critical applications such as biometric mobile phone authentication, users might not want to have their data sent over the network. On-device training would allow fine-tuning recognition models on local data without sending sensitive data over the network.
In this work, we propose an architecture with minimal training memory cost requirements, which enables training within the tight memory constraints of embedded devices.
Research in optimization: Recent works on stochastic optimization algorithms have highlighted the benefits of large batch training [4, 5]. For example, on ImageNet, linear speed-ups in training have been observed with increasing batch sizes up to tens of thousands of samples [5]. Optimizing the memory cost of CNN training may allow further research on the optimization trade-offs of large batch training. For small datasets like MNIST or CIFAR10, we are able to process the full dataset in 14 and 18 GB of memory, respectively. Although large batch training on such small datasets is very computationally inefficient with current stochastic optimization algorithms [5], the ability to process the full dataset in one pass makes it easy to train CNNs on the true gradient of the error. Memory optimization techniques thus have the potential to open up research on optimization techniques outside the realm of Stochastic Gradient Descent.
In this paper, we build on recent works on reversible networks [6, 7] and ask the question: how far can we reduce CNN training memory cost using reversible designs with minimal impact on the accuracy and computational cost? To do so, we take as a starting point the ResNet-18 architecture and analyze its training memory requirements. We then analyze the memory cost reduction of invertible designs successively introduced in the RevNet and iRevNet architectures. We identify the memory bottleneck of such architectures, which leads us to introduce a layer-wise invertible architecture. However, we observe that layer-wise invertible networks accumulate numerical errors across their layers, which leads to numerical instabilities impacting model accuracy. We characterize the accumulation of numerical errors within long chains of reversible operations and investigate their effect on model accuracy.
To mitigate the impact of these numerical errors on the model accuracy, we propose both a reparameterization of invertible layers and a hybrid architecture combining the benefits of layer-wise and residual-block-wise reversibility to stabilize training. Our main result is to present a new architecture that allows to efficiently train a CNN with the minimal memory cost of 352 bytes per pixel. We demonstrate the efficiency of our method by efficiently training a model to 93.3% accuracy on the CIFAR10 dataset within 67 minutes on a low-end Nvidia GTX750 with only 1 GB of VRAM. Reversible network designs have been proposed for various purposes including generative modeling, visualization, solving inverse problems, or theoretical analysis of hidden representations. Flow-based generative models use analytically invertible transformations to compute the change of variable formula. Invertibility is either achieved through channel partitioning schemes (NICE [8] Real-NVP [9]), weight matrix factorization (GLOW [10]) or constraining layer architectures to easily invertible unitary operations (Normalization flows [11]) Neural ODEs [12] take a drastically different take on invertibility: They leverage the analogy between residual networks and the Euler method to define continuous hidden state systems. The conceptual shift from a finite set of discrete transformations to a continuous regime gives them invertibility for free. The computational efficiency of this approach, however, remains to be demonstrated. The RevNet model [6] was inspired by the Real-NVP generative model. They adapt the idea of channel partitioning and propose an efficient architecture for discriminative learning. The iRevNet [7] model builds on the RevNet architecture: they propose to replace the irreversible max-pooling operation with an invertible operation that reshapes the hidden activation states so as to compensate the loss of spatial resolution by an increase in the channel dimension. By preserving the volume of activations, their pooling operation allows for exact reconstruction of the inverse. In their original work, the authors focus on the analysis of the representations learned by invertible models rather than resource efficiency. From a resource optimization point of view, one downside of their method is that the proposed invertible pooling scheme drastically increases the number of channels in upper layers. As the size of the convolution kernel weights grows quadratically in the number of channels, the memory cost associated with storing the model weights becomes a major memory bottleneck. We address this issue in our proposed architecture. In [13], the authors use these reversible architectures to study undesirable invariances in feature space. In [14], the authors propose a unified architecture performing well on both generative and discriminative tasks. They enforce invertibility by regularizing the weights of residual blocks so as to guarantee the existence of an inverse operation. However, the computation of the inverse operation is performed with power iteration methods which are not optimal from a computational perspective. Finally, [15] propose to reconstruct the input activations of normalization and activation layers using their inverse function during the backward pass. We propose a similar method for layer-wise invertible networks. 
However, as their model does not invert convolution layers, it does not feature long chains of invertible operations so that they do not need to account for numerical instabilities. Instead, our proposed model features long chains of invertible operations so that we need to characterize numerical errors in order to stabilize training. Research into resource optimization of CNNs covers a wide array of techniques, most of which are orthogonal to our work. We briefly present some of these works. On the architectural side, Squeezenet [16] was first proposed as an efficient neural architecture reducing the number of model parameters while maintaining high classification accuracy. MobileNet [17] uses depth-wise separable convolutions to further reduce the computational cost of inference for embedded device applications. Network pruning [2] is a set of techniques developed to decrease the model weight size and computational complexity. Network pruning works by removing the network weights that contribute the least to the model output. Pruning deep models has been shown to drastically reduce the memory cost and computational cost of inference without significantly hurting model accuracy. Although pruning has been concerned with optimization of the resource inference, the recently proposed lottery ticket hypothesis [18] has shown that specifically pruned networks could be trained from scratch to high accuracy. This may be an interesting and complementary line of work to investigate in the future to reduce training memory costs. Low precision arithmetic has been proposed as a mean to reduce both memory consumption and computation time of deep learning models. Mixed precision training [19] combines FP16 with FP32 operations to avoid numerical instabilities due to either overflow or underflow. For inference, integer quantization [1, 20] has been shown to drastically improve the computation and memory efficiency and has been successfully deployed on both edge devices and data centers. Integrating mixed-precision training to our proposed architecture would allow us to further reduce training memory costs. Accumulating the weights' gradients over multiple batches is used to increase the effective batch size during the training with constant memory requirements. Although this method allows for training on arbitrary large batch sizes, it does not reduce the memory requirements for training on a single batch. Most related to our work, gradient checkpointing was introduced as a mean to reduce the memory cost of deep neural network training. Gradient checkpointing, first introduced in [21], trades off memory for computational complexity by storing only a subset of the activations during the forward pass. During the backward pass, missing activations are recomputed from the stored activations as needed by the backpropagation algorithm. Follow-up work [22] has since built on the original gradient checkpointing algorithm to improve this memory/computation trade-off. However, reversible models like RevNet have been shown to offer better computational complexity than gradient checkpointing, at the cost of constraining the model architecture to invertible residual blocks. In this section, we analyze the memory footprint of training architectures with different reversibility patterns. We start by introducing some notations and briefly review the backpropagation algorithm in order to characterize the training memory consumption of deep neural networks. 
In our analysis, we use a Resnet-18 as a reference baseline and analyze its training memory footprint. We then gradually augment the baseline architecture with reversible designs and analyze their impact on computation and memory consumption. Backpropagation and notations Let us consider a model F made of N sequential layers trained to minimize the error e defined by a loss function \(\mathcal {L}\) for an input x and ground-truth label \(\bar{y}\): $$\begin{aligned} F&: x \rightarrow y, \end{aligned}$$ $$\begin{aligned} y&= f_N \circ \ldots \circ f_2 \circ f_1(x), \end{aligned}$$ (1b) $$\begin{aligned} e&= \mathcal {L}(y, \bar{y}). \end{aligned}$$ (1c) During the forward pass, each layer \(f_i\) takes as input the activations \(z_{i-1}\) from the previous layer and outputs activation features \(z_i=f_i(z_{i-1})\), with \(z_0=x\) and \(z_N=y\) being the input and output of the network, respectively. During the backward pass, the gradient of the loss with respect to the hidden activations are propagated backward through the layers of the networks using the chain rule as: $$\begin{aligned} \frac{\delta \mathcal {L}}{\delta z_{i-1}} = \frac{\delta \mathcal {L}}{\delta z_{i}} \times \frac{\delta z_{i}}{\delta z_{i-1}}. \end{aligned}$$ Before propagating the loss gradient with respect to its input to the previous layer, each parameterized layer computes the gradient of the loss with respect to its parameters. In vanilla SGD, for a given learning rate \(\eta\), the weight gradients are subsequently used to update the weight values as: $$\begin{aligned} \frac{\delta \mathcal {L}}{\delta \theta _i}&=\frac{\delta \mathcal {L}}{\delta z_{i}} \times \frac{\delta z_{i}}{\delta \theta _i}, \end{aligned}$$ $$\begin{aligned} \theta _i&\leftarrow \theta _i - \eta \times \frac{\delta \mathcal {L}}{\delta \theta _i}. \end{aligned}$$ However, the analytical form of the weight gradients are functions of the layer's input activations \(z_{i-1}\). In convolution layers, for instance, the weight gradients can be computed as the convolution of the input activation by the output's gradient: $$\begin{aligned} \frac{\delta \mathcal {L}}{\delta \theta _i} = z_{i-1} \star \frac{\delta \mathcal {L}}{\delta z_i}. \end{aligned}$$ Hence, computing the derivative of the loss with respect to each layer's parameters \(\theta _i\) requires knowledge of the input activation values \(z_{i-1}\). In the standard backpropagation algorithm, hidden layers activations are stored in memory upon computation during the forward pass. Activations accumulate in live memory buffers until used for the weight gradients computation in the backward pass. Once the weight gradients computed in the backward pass, the hidden activation buffers can be freed from live memory. However, the accumulation of activation values stored within each parameterized layer along the forward pass creates a major bottleneck in GPU memory. The idea behind reversible designs is to constrain the network architecture to feature invertible transformations. Doing so, activations \(z_i\) in lower layers can be recomputed through inverse operations from the activations \(z_{j>i}\) of higher layers. In such architectures, activation do not need to be kept in memory during the forward pass as they can be recomputed from higher layer activations during the backward pass, effectively freeing up the GPU live memory. We denote the memory footprint of training a neural network as a value \(\mathcal {M}\) in bytes. 
Given an input x and ground-truth label \(\bar{y}\), the memory footprint represents the peak memory consumption during an iteration of training including the forward and backward pass. We divide the total training memory footprint \(\mathcal {M}\) into several memory cost factors: the cost \(M_{\theta }\) of storing the model weights, the hidden activations \(M_{z}\), and the hidden activations' gradients \(M_{g}\): $$\begin{aligned} \mathcal {M} = M_{\theta } + M_{z} + M_{g}. \end{aligned}$$ We choose not to include the cost of storing the gradients of the weights in our analysis since their accumulation has more to do with the implementation details of current differentiable frameworks than with algorithmic necessity. In the following subsections, we detail the memory footprint of existing architectures with different reversibility patterns. To help us formalize these memory costs, we further introduce the following notations: let n(x) denote the number of elements in a tensor x, i.e., if x is an \(h \times w\) matrix, then \(n(x)=h \times w\). Let bpe be the memory cost in bytes per elements of a given precision so that the actual memory cost for storing an \(h \times w\) matrix is \(n(x) \times bpe\). For instance, FP32 tensors have a memory cost per element \(bpe=4\). We use bs to denote the batch size, and \(c_i\) to denote the number of channels at layer i. It should be noted that the memory cost of the activations and the gradients are proportional to the size of the input image batch: training a CNN on twice larger input image batch sizes or twice higher resolution requires twice more memory. Thus, these costs, for a given architecture, are better characterized in bytes per input pixels, which we denote \(M_{z}'\) and \(M_{g}'\), respectively, and are defined by: $$\begin{aligned} M_{z}'&= \frac{M_{z}}{bs \times h \times w}, \end{aligned}$$ $$\begin{aligned} M_{g}'&= \frac{M_{g}}{bs \times h \times w}. \end{aligned}$$ The memory cost of the weights, on the other hand, is independent of the input size and thus reported in bytes. Vanilla ResNet The architecture of a vanilla ResNet-18 is shown in Fig. 1. Vanilla ResNets do not use reversible computations so that the input activations of all parameterized layers need to be accumulated in memory during the forward pass for the computation of the weight gradients to be done in the backward pass. Illustration of the ResNet-18 architecture and its memory requirements. Modules contributing to the peak memory consumption are shown in red. These modules contribute to the memory cost by storing their input in memory. The green annotation represents the extra memory cost of storing the gradient in memory. The peak memory consumption happens in the backward pass through the last convolution so that this layer is annotated with an additional gradient memory cost. At this step of the computation, all lower parameterized layers have stored their input in memory, which constitutes the memory bottleneck Hence the peak memory footprint of training a vanilla ResNet happens at the beginning of the backward pass when the top layer's activation gradients need to be stored in memory in addition to the full stack of hidden activation values. Let us denote by \(P \subset N\) the subset of parameterized layers of a network F (i.e., convolutions and batch normalization layers, excluding activation functions and pooling layers). 
The memory cost associated with storing the hidden activation values is given by: $$\begin{aligned} M_{z}&= \sum _{i \in P} n(z_i) \times bpe \end{aligned}$$ $$\begin{aligned}&= \sum _{i \in P} bs \times c_i \times h_i \times w_i \times bpe, \end{aligned}$$ where \(h_i\) and \(w_i\) represent the spatial dimensions of the activation values at layer i. \(h_i\) and \(w_i\) are determined by the input image size \(h \times w\) and the pooling factor \(p_i\) of layer i, so we can factor out both the spatial dimensions and the batch size from this equation, yielding the memory cost per input pixel: $$\begin{aligned} M_{z}&= \sum _{i \in P} bs \times h \times w \times p_i \times c_i \times bpe \end{aligned}$$ $$\begin{aligned}&= bs \times h \times w \times \sum _{i \in P} p_i \times c_i \times bpe, \end{aligned}$$ $$\begin{aligned} M_{z}^{\prime}&= \sum _{i \in P} p_i \times c_i \times bpe. \end{aligned}$$ The memory footprint of the weights is given by: $$\begin{aligned} M_{\theta } = \sum _{i \in P} n(\theta _i)\times bpe. \end{aligned}$$ The memory footprint of the gradients correspond to the size of the gradient buffers at the time of peak memory usage. In a vanilla ResNet18 model, this peak memory usage happens during the backward pass through the last convolution of the network. Hence, the memory footprint of the gradients correspond to the memory cost of storing the gradients with respect to either the input or the output of this layer. $$\begin{aligned} M_{g}&= max(n(g_{N-1}), n(g_N)) \times bpe \end{aligned}$$ (10a) $$\begin{aligned}&= h \times w \times bs \times p_i \times max(c_{N-1}, c_N) \times bpe, \end{aligned}$$ (10b) $$\begin{aligned} M_{g}^{\prime}&= p_i \times max(c_{N-1}, c_N) \times bpe. \end{aligned}$$ (10c) Figure 1 illustrates the peak memory consumption of a ResNet-like architecture. For a ResNet parameterized following Table 1, the peak memory consumption can then be computed as: $$\begin{aligned} \mathcal {M}&= M_{\theta } + M_{z} + M_{g} \end{aligned}$$ $$\begin{aligned}&= M_{\theta } + (M_{z}^{\prime} + M_{g}^{\prime}) \times (h \times w \times bs) \end{aligned}$$ $$\begin{aligned}&= 12.5*10^6 + 1928 \times (h \times w \times bs). \end{aligned}$$ (11d) For example, a training iteration over a typical batch of 32 images of resolution \(240 \times 240\) requires 12.5 MB of memory to store the model weights and 3.8 GB of memory to store the hidden layers activations and gradients for a total of \(\mathcal {M}=3.81\) GB of VRAM. The memory cost of the hidden activations is thus the main memory bottleneck of training a ResNet as the cost associated with the model weights is negligible in comparison. RevNet The RevNet architecture introduces reversible blocks as drop-in replacements of the residual blocks of the ResNet architecture. Reversible blocks have analytical inverses that allow for the computation of both their input and hidden activation values from the value of their output activations. Two factors create memory bottlenecks in training RevNet architectures, which we refer to as the local and global bottlenecks. First, the RevNet architecture features non-volume preserving max-pooling layers, for which the inverse cannot be computed. As these layers do not have analytical inverses, their input must be stored in memory during the forward pass for the reconstruction of lower layer's activations to be computed during the backward pass. 
We refer to the memory cost associated with storing these activations as the global bottleneck, since these activations need to be accumulated during the forward pass through the full architecture. The local memory bottleneck has to do with the synchronization of the reversible block computations: while activation values are computed by a forward pass through the reversible block modules, gradient computations flow backward through these modules, so that the activation and gradient computations cannot be performed simultaneously. Figure 2 illustrates the process of backpropagating through a reversible block: first, the input activation values of the parameterized hidden layers within the reversible blocks are recomputed from the output. Once the full set of activations has been computed and stored in GPU memory, the backpropagation of the gradients through the reversible block can begin. We refer to the accumulation of the hidden activation values within the reversible block as the local memory bottleneck.
Illustration of the backpropagation process through a reversible block. In the forward pass (left), activations are propagated forward from top to bottom. The activations are not kept in live memory as they are to be recomputed in the backward pass, so no memory bottleneck occurs. The backward pass is made of two phases: first, the hidden and input activations are recomputed from the output through an additional forward pass through both modules (middle). Once the activations are recomputed, the activation gradients are propagated backward through both modules of the reversible block (right). Because the activation and gradient computations flow in opposite directions through both modules, both computations cannot be efficiently overlapped, which results in the local memory bottleneck of storing all hidden activations within the reversible block before the gradient backpropagation step.
For a typical parameterization of a RevNet, as summarized in Table 1, the local bottleneck of lower layers actually outweighs the global memory bottleneck introduced by non-reversible pooling layers. Indeed, as the spatial resolution decreases with pooling operations, the cost associated with storing the input activations of higher layers becomes negligible compared to the cost of storing activation values in lower layers. Hence, surprisingly, the peak memory consumption of the RevNet architecture, as illustrated in Fig. 3, happens in the backward pass through the first reversible block, in which the local memory bottleneck is maximum. For the architecture described in Table 1, the peak memory consumption can be computed as: $$\begin{aligned} \mathcal {M}&= M_{\theta } + (M_z^{\prime} + M_{g}^{\prime}) \times (h \times w \times bs) \end{aligned}$$ $$\begin{aligned}&= 12.7 \times 10^6 + 640 \times (h \times w \times bs). \end{aligned}$$ Following our previous example, a RevNet architecture closely mimicking the ResNet-18 architecture requires \(\mathcal {M}=1.19\) GB of VRAM for a training iteration over a batch of 32 images of resolution \(240 \times 240\).
Illustration of the RevNet architecture and its memory consumption. Modules contributing to the peak memory consumption are shown in red. The peak memory consumption happens during the backward pass through the first reversible block.
At this step of the computations, all hidden activations within the reversible block are stored in memory simultaneously.
Finally, the memory savings allowed by the reversible block come with the additional computational cost of computing the hidden activations during the backward pass. As noted in the original paper, this computational cost is equivalent to performing one additional forward pass.
iRevNet
The iRevNet model builds on the RevNet architecture: they replace the irreversible max-pooling operation with an invertible operation that reshapes the hidden activation states so as to compensate for the loss of spatial resolution by an increase in the channel dimension. As such, the iRevNet architecture is fully invertible, which alleviates the global memory bottleneck of the RevNet architecture. This pooling operation works by stacking the neighboring elements of the pooling regions along the channel dimension, i.e., for a 2D pooling operation with a \(2 \times 2\) pooling window, the number of output channels is four times the number of input channels. Unfortunately, the size of a volume-preserving convolution kernel grows quadratically in the number of input channels: $$\begin{aligned} n(\theta )&= c_{in} \times c_{out} \times k_h \times k_w \end{aligned}$$ $$\begin{aligned}&= c^2 \times k_h \times k_w. \end{aligned}$$ Consider an iRevNet network with initial channel size 32. After three levels of \(2 \times 2\) pooling, the effective channel size becomes \(32 \times 4^3=2048\). A typical \(3 \times 3\) convolution layer kernel for the higher layers of such a network would have \(n(\theta )=2048^2 \times 3 \times 3=37M\) parameters. At this point, the memory cost of the network weights \(M_{\theta }\) becomes an additional memory bottleneck. Furthermore, the iRevNet architecture does not address the local memory bottleneck of the reversible blocks. Figure 4 illustrates such an architecture. For an initial channel size of 32, as summarized in Table 1, the peak memory consumption is given by: $$\begin{aligned} \mathcal {M}&= M_{\theta } + (M_z^{\prime} + M_{g}^{\prime}) \times (h \times w \times bs) \end{aligned}$$ $$\begin{aligned}&= 171 \times 10^6 + 640 \times (h \times w \times bs). \end{aligned}$$ Training such an architecture for an iteration over batches of 32 images of resolution \(240 \times 240\) would require \(\mathcal {M}=1.35\) GB of VRAM. In the next section, we introduce both layer-wise reversibility and a variant of this pooling operation to address the local memory bottleneck of reversible blocks and the weight memory bottleneck, respectively.
Illustration of the i-RevNet architecture and its memory consumption. The peak memory consumption happens during the backward pass through the top reversible block. In addition to this local memory bottleneck, the cost of storing the top layers' weights (in orange) becomes a new memory bottleneck as the weight kernel size grows quadratically in the number of channels.
RevNet and iRevNet architectures implement reversible transformations at the level of residual blocks. As we have seen in the previous section, the design of these reversible blocks creates a local memory bottleneck as all hidden activations within a reversible block need to be computed before the gradients are backpropagated through the block. In order to circumvent this local bottleneck, we introduce layer-wise invertible operations in section 4.2. However, these invertible operations introduce numerical errors, which we characterize in the following subsections.
In section 5, we will show that these numerical errors lead to instabilities that degrade the model accuracy. Hence, in "Hybrid architecture", we propose a hybrid model combining layer-wise and residual block-wise reversible operations to stabilize training while resolving the local memory bottleneck at the cost of a small additional computational cost. Section 4.1 starts by motivating the need for, and the methodology of, our numerical error analysis. Numerical error analysis Invertible networks are defined as the composition of invertible operations. During the backward pass, each operation is supposed to reconstruct its input x given the value of its output y using its inverse function: $$\begin{aligned} y&= f(x), \end{aligned}$$ $$\begin{aligned} x&= f^{-1}(y). \end{aligned}$$ In reality, however, the output of the network is an approximation of its true analytical value due to floating point numbers' precision \(\hat{y}=y+\epsilon _y\). Hence, the noisy input \(\hat{x}\) reconstructed by the inverse operation contains a noise \(\epsilon _x\) due to the noise \(\epsilon _y\) in the output, and the error propagates through the successive inverse computations. $$\begin{aligned} \hat{x}&= f^{-1}(y+\epsilon _y), \end{aligned}$$ $$\begin{aligned} \hat{x}&= (x+\epsilon _x), \end{aligned}$$ $$\begin{aligned} \epsilon _x&= x - f^{-1}(y+\epsilon _y). \end{aligned}$$ The operation f may either refer to an individual layer, as is the case for the layer-wise invertible architecture we propose in this paper, or at the level of residual blocks as for the reversible blocks proposed in RevNet or iRevNet. For each operation, we can compute the signal-to-noise ratio (SNR) of its output and input, respectively: $$\begin{aligned} snr_o&= \frac{|y|^2}{|\epsilon ^y|^2}, \end{aligned}$$ $$\begin{aligned} snr_i&= \frac{|x|^2}{|\epsilon ^x|^2}. \end{aligned}$$ We are interested in characterizing the factor \(\alpha\) of reduction of the SNR through the inverse reconstruction: $$\begin{aligned} \alpha = \frac{snr_i}{snr_o}. \end{aligned}$$ Indeed, given a layer i in a network, its input \(z_i\) will be reconstructed from the noisy network output \(\hat{y}\) by the composition of its upstream layers. Hence, the noise \(\epsilon _i\) in the reconstructed and noisy input \(\hat{z_i}\) can be computed as: $$\begin{aligned} \hat{z_i}&= z_i + \epsilon _i, \end{aligned}$$ $$\begin{aligned} \hat{z_i}&= f_i^{-1} \circ f_{i+1}^{-1} \circ \ldots \circ f_N^{-1}(\hat{y}), \end{aligned}$$ $$\begin{aligned} | \epsilon _i |^2&= \frac{| \epsilon _y |^2 \times | z_i |^2}{| y |^2} \times \prod _i^{N} \alpha _j. \end{aligned}$$ As \(z_i\) is used in the computation of layer i's weights' gradients according to Eq. 4, accumulated errors yield noisy gradients which prevent the network from converging as the SNR reaches certain levels. Hence, it is important to characterize the factor \(\alpha\) for the different invertible layers proposed below. Layer-wise invertibility In this section, we present invertible layers that act as drop-in replacement for convolution, batch normalization, pooling and non-linearity layers. We then characterize the numerical instabilities arising from the invertible batch normalization and non-linearities. Invertible batch normalization As batch normalization is not a bijective operation, it does not admit an analytical inverse. However, the inverse reconstruction of a batch normalization layer can be realized with minimal memory cost. 
Given first- and second-order moment parameters \(\beta\) and \(\gamma\), the forward f and inverse \(f^{-1}\) operation of an invertible batch normalization layer can be computed as follows: $$\begin{aligned} y = f(x)&= \gamma \times \frac{x - \hat{x}}{\sqrt{\dot{x}} + \epsilon } + \beta , \end{aligned}$$ $$\begin{aligned} x = f^{-1}(y, \hat{x}, \dot{x})&= (\sqrt{\dot{x}} + \epsilon ) \times \frac{y - \beta }{\gamma } + \hat{x}, \end{aligned}$$ where \(\hat{x}\) and \(\dot{x}\) represent the mean and variance of x, respectively. Hence, the input activation x can be recovered from y through \(f^{-1}\) at the minimal memory cost of storing the input activation statistics \(\hat{x}\) and \(\dot{x}\). The formula for the SNR reduction factor of the batch normalization is given below: $$\begin{aligned} \alpha = \frac{\sum _i (\hat{x}_i^2 + \dot{x_i})}{\sum _i (\gamma _i^2 + \beta _i^2)} \times \frac{c}{\sum _i \frac{\sqrt{\dot{x_i}}+\epsilon }{\gamma _i}}, \end{aligned}$$ in which c represents the number of channels. The full proof of this formula is given in the Appendix. The only assumption made by this proof is that both the input x and the output noise \(\epsilon ^y\) are identically distributed across all channels, which we have found to hold true in practice. In essence, numerical instabilities in the inverse computation of the batch normalization layer arise from the fact that the signals across different channels i and j are amplified by different factors \(\gamma _i\) and \(\gamma _j\). While the signal amplifications in the forward and inverse paths cancel each other out (\(x=f^{-1}(f(x))\)), the noise only gets amplified in the backward pass, which degrades the reconstructed signal. We verify the validity of this formula by empirically evaluating the \(\alpha\) ratios yielded by a toy parameterization of the batch normalization using only two channels with parameters \(\beta = [0, 0]\) and \(\gamma = [1, \rho ]\). This toy parameterization is also used by the proof in the Appendix. The factor \(\rho\) there represents the imbalance in the multiplicative factor between both channels. Figure 5 shows the expected evolution of \(\alpha\) through our toy layer for different values of the factor \(\rho\), and we find it to closely match the theoretical results we derived.
Illustration of the numerical errors arising from batch normalization layers. Comparison of the theoretical and empirical evolution of the \(\alpha\) ratio for different \(\rho\) values in our toy example. Empirical values were computed for a Gaussian input signal with zero mean and standard deviation 1 and a white Gaussian noise of standard deviation \(10^{-5}\).
Finally, we propose the following modification, introducing the hyperparameter \(\epsilon _i\), to the invertible batch normalization layer: $$\begin{aligned} y = f(x)&= |\gamma + \epsilon _i| \times \frac{x - \hat{x}}{\sqrt{\dot{x}} + \epsilon } + \beta , \end{aligned}$$ $$\begin{aligned} x = f^{-1}(y)&= (\sqrt{\dot{x}} + \epsilon ) \times \frac{y - \beta }{|\gamma + \epsilon _i|} + \hat{x}. \end{aligned}$$ The introduction of the \(\epsilon _i\) hyperparameter serves two purposes: first, it stabilizes the numerical errors described above by lower bounding the smallest \(\gamma\) parameters. Second, it prevents numerical instabilities that would otherwise arise from the inverse computation as \(\gamma\) parameters tend towards zero.
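To make the reconstruction mechanism concrete, the following is a minimal PyTorch-style sketch of the reparameterized invertible batch normalization described above; it is our own illustration rather than the authors' implementation, and the class and attribute names (InvertibleBatchNorm, stats) are ours. Only the per-channel batch statistics are kept between the forward and inverse passes, instead of the full input tensor.

import torch
import torch.nn as nn

class InvertibleBatchNorm(nn.Module):
    # Sketch of the reparameterized invertible batch normalization above:
    # only the per-channel batch statistics are stored, not the input tensor.
    def __init__(self, num_channels, eps=1e-5, eps_i=0.1):
        super().__init__()
        self.eps = eps      # epsilon of the standard batch normalization
        self.eps_i = eps_i  # the epsilon_i reparameterization hyperparameter
        self.gamma = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, num_channels, 1, 1))

    def _scale(self):
        return torch.abs(self.gamma + self.eps_i)  # |gamma + eps_i|

    def forward(self, x):
        mean = x.mean(dim=(0, 2, 3), keepdim=True)
        var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
        # O(c) memory kept for the inverse, instead of O(bs*c*h*w)
        self.stats = (mean.detach(), var.detach())
        return self._scale() * (x - mean) / (var.sqrt() + self.eps) + self.beta

    def inverse(self, y):
        mean, var = self.stats  # requires a preceding forward call
        return (var.sqrt() + self.eps) * (y - self.beta) / self._scale() + mean

In this form, bn.inverse(bn(x)) recovers x up to the floating point noise characterized above, which is the quantity reported in Fig. 5.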
Invertible activation function A good invertible activation function must be bijective (to guarantee the existence of an inverse function) and non-saturating (for numerical stability). For these properties, we focus our attention on Leaky ReLUs whose forward f and inverse \(f^{-1}\) computations are defined, for a negative slope parameter n, as follows: $$\begin{aligned} y&= f(x) = {\left\{ \begin{array}{ll} x, &{} \text {if}\ x>0 \\ x / n, &{} \text {otherwise} \end{array}\right. } \end{aligned}$$ $$\begin{aligned} x&= f^{-1}(y) = {\left\{ \begin{array}{ll} y, &{} \text {if}\ y>0 \\ y \times n, &{} \text {otherwise} \end{array}\right. }. \end{aligned}$$ As derived in the Appendix, and following a similar proof to the batch normalization, we find the below formula for the SNR reduction factor: $$\begin{aligned} \alpha = \frac{4}{(1+\frac{1}{n^2}) \times (1 + n^2)}. \end{aligned}$$ Hence numerical errors can be controlled by setting the value of the negative slope n. As n tends towards 1, \(\alpha\) converges to 1, yielding minimum signal degradation. However, as n tends towards 1, the network tends toward a linear behavior, which hurts the model expressivity. Figure 6 shows the evolution of the SNR degradation \(\alpha\) for different negative slopes n; and, in section 5, we investigate the impact of the negative slope parameter on the model accuracy. Illustration of the numerical errors arising from invertible activation layers. Comparison of the theoretical and empirical evolution of the \(\alpha\) ratio for different negative slopes n. Empirical values were computed for a Gaussian input signal with zero mean and standard deviation 1 and a white Gaussian noise of standard deviation \(10^{-5}\) It should be noted that this equation only holds for the regime \(|y|^2 \gg |\epsilon _y|^2\). When the noise reaches an amplitude similar to or greater than the activation signal, this equation no longer holds. However, in this regime, the signal-to-noise ratio becomes too low for training to converge, as numerical errors prevent any useful weight update. We have thus left the problem of characterizing this regime open. Invertible convolutions Invertible convolution layers can be defined in several ways. The inverse operation of a convolution is often referred to as deconvolution, and is defined for a subspace of the kernel weight space. However, deconvolutions are computationally expensive and prone to numerical errors. Instead, we choose to implement invertible convolutions using the channel partitioning scheme of the reversible block for its simplicity, numerical stability and computational efficiency. Hence, invertible convolutions, in our architecture, can be seen as minimal reversible blocks in which both modules consist of a single convolution. Gomez et al. [6] found the numerical errors introduced by reversible blocks to have no impact on the model accuracy. Similarly, we found reversible blocks extremely stable yielding negligible numerical errors compared to the invertible Batch Normalization \(\alpha _{Rev} \ll \alpha _{BN}\) and Leaky ReLU layers \(\alpha _{Rev} \ll \alpha _{LReLU}\). Pooling In [7], the authors propose an invertible pooling operation that operates by stacking the neighboring elements of the pooling regions along the channel dimension. As noted in section 3.5, the increase in channel size at each pooling level induces a quadratic increase in the number of parameters of upstream convolution, which creates a new memory bottleneck. 
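As a concrete illustration, here is a minimal sketch of the channel-stacking pooling reshape described above (our own code and function names, not the authors' implementation); the batch-axis variant proposed next follows the same pattern with the factor of 4 moved onto the batch dimension.

import torch

def channel_pool(x):
    # P_c: stack each 2x2 spatial neighborhood along the channel axis,
    # (bs, c, h, w) -> (bs, 4c, h/2, w/2); volume preserving, hence exactly invertible.
    b, c, h, w = x.shape
    x = x.reshape(b, c, h // 2, 2, w // 2, 2)
    x = x.permute(0, 3, 5, 1, 2, 4)
    return x.reshape(b, 4 * c, h // 2, w // 2)

def channel_unpool(y):
    # Exact inverse of channel_pool.
    b, c4, h2, w2 = y.shape
    y = y.reshape(b, 2, 2, c4 // 4, h2, w2)
    y = y.permute(0, 3, 4, 1, 5, 2)
    return y.reshape(b, c4 // 4, 2 * h2, 2 * w2)

Since the operation is a pure reshape, channel_unpool(channel_pool(x)) returns x exactly, which is why no numerical error is associated with this kind of pooling.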
To circumvent this quadratic increase in the memory cost of the weights, we propose a new pooling layer that stacks the elements of neighboring pooling regions along the batch dimension instead of the channel dimension. We refer to both kinds of pooling as channel pooling \(\mathcal {P}_c\) and batch pooling \(\mathcal {P}_b\), respectively, depending on the dimension along which activation features are stacked. Given a \(2 \times 2\) pooling region and an input activation tensor x of dimensions \(bs \times c \times h \times w\), where bs refers to the batch size, c to the number of channels and \(h \times w\) to the spatial resolution, the reshaping operation performed by both pooling layers can be formalized as follows: $$\begin{aligned} \mathcal {P}_c :&x \rightarrow y \end{aligned}$$ $$\begin{aligned} :&\mathbb {R}^{bs \times c \times h \times w} \rightarrow \mathbb {R}^{bs \times 4c \times \frac{h}{2} \times \frac{w}{2}} \end{aligned}$$ $$\begin{aligned} \mathcal {P}_b :&x \rightarrow y \end{aligned}$$ $$\begin{aligned} :&\mathbb {R}^{bs \times c \times h \times w} \rightarrow \mathbb {R}^{4bs \times c \times \frac{h}{2} \times \frac{w}{2}}. \end{aligned}$$ Channel pooling gives us a way to perform volume-preserving pooling operations while increasing the number of channels at a given layer of the architecture, while batch pooling gives us a way to perform volume-preserving pooling operations while keeping the number of channels constant. By alternating between channel and batch pooling, we can control the number of channels at each pooling level of the model's architecture. As this pooling operation only performs a reshaping between input and output, it does not induce any numerical error: \(\alpha _{Pool}=1.\)
Layer-wise invertible architecture
Putting together the above building blocks, Fig. 7 illustrates a layer-wise invertible architecture. The peak memory usage for a training iteration of this architecture, as parameterized in Table 1, can be computed as for the previous architectures: training an iteration over a typical batch of 32 images with resolution \(240 \times 240\) would require \(\mathcal {M}=590\) MB of VRAM. Similar to the RevNet architecture, the reconstruction of the hidden activations by inverse transformations during the backward pass comes with an additional computational cost similar to a forward pass.
Illustration of a layer-wise invertible architecture and its memory consumption.
As analyzed in the previous section, the numerical errors in this architecture are dominated by Batch Normalization and Leaky ReLU layers. Following equation 19, the numerical error associated with the activations at a given layer i in this architecture can thus be approximated by: $$\begin{aligned} | \epsilon _i |^2 = \frac{| \epsilon _y |^2 \times | z_i |^2}{| y |^2} \times \prod _i^{N} (\alpha _{LReLU} \times \alpha _{BN}), \end{aligned}$$ in which N represents the number of Batch Normalization and Leaky ReLU layers between the layer i and the output.
Hybrid architecture
In section 5.1, we saw that layer-wise activation and normalization layers degrade the signal-to-noise ratio of the reconstructed activations. In "Impact of numerical stability", we will quantify the accumulation of numerical errors through long chains of layer-wise invertible operations and show that numerical errors negatively impact model accuracy. To prevent these numerical instabilities, we introduce a hybrid architecture, illustrated in Fig. 8, combining reversible residual blocks with layer-wise invertible functions.
Conceptually, the role of the residual-level reversible block is to reconstruct the input activations of residual blocks with minimal errors, while the role of the layer-wise invertible layers is to efficiently recompute the hidden activations within the reversible residual blocks at the same time as the gradient propagates, in order to circumvent the local memory bottleneck of the reversible module. The backward pass through these hybrid reversible blocks is illustrated in Fig. 9 and proceeds as follows: first, the input x is computed from the output y through the analytical inverse of the reversible block. These computations are made without storing the hidden activation values of the sub-modules. Second, the activation gradients are propagated backward through the modules of the reversible block. As each layer within these modules is invertible, the hidden activation values are computed using the layer-wise inverses alongside the gradients.
Illustration of a hybrid architecture and its peak memory consumption.
Illustration of the backpropagation process through a reversible block of our proposed hybrid architecture. In the forward pass (left), activations are propagated forward from top to bottom. The activations are not kept in live memory as they are to be recomputed in the backward pass, so that no memory bottleneck occurs. The backward pass is made of two phases: first, the input activations are recomputed from the output using the reversible block's analytical inverse (middle). This step allows the input activations to be reconstructed with minimal reconstruction error. During this step, hidden activations are not kept in live memory so as to avoid the local memory bottleneck of the reversible block. Once the input activations are recomputed, the gradients are propagated backward through both modules of the reversible block (right). During this second phase, hidden activations are recomputed backward through each module using the layer-wise inverse operations, yielding a minimal memory footprint.
The analytical inverse of the residual-level reversible blocks is used to propagate hidden activations with minimal reconstruction error to the lower modules, while layer-wise inversion allows us to alleviate the local bottleneck of the reversible block by computing the hidden activation values together with the backward flow of the gradients. As layer-wise inverses are only used for hidden feature computations within the scope of the reversible block, and reversible blocks are made of relatively short chains of operations, numerical errors do not accumulate up to a damaging degree. The peak memory consumption of our proposed architecture, as illustrated in Fig. 8 and parameterized in Table 1, can be computed in the same way as before: training an iteration over a batch of 32 images of resolution \(240 \times 240\) would require \(\mathcal {M}=648\) MB of VRAM. It should be noted, however, that this architecture adds an extra computational cost as both the reversible block inverse and the layer-wise inverses need to be computed. Hence, instead of one additional forward pass, as in the RevNet and layer-wise architectures, our hybrid architecture comes with a computational cost equivalent to performing two additional forward passes during the backward pass.
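As a rough sketch of the block structure this backward pass relies on (our own simplified code, with module and class names of our choosing, rather than the released implementation), the additive coupling below exposes both the forward computation and the analytical inverse used in the first phase described above.

import torch
import torch.nn as nn

class CouplingBlock(nn.Module):
    # Additive coupling on a channel split: y1 = x1 + f(x2), y2 = x2 + g(y1).
    def __init__(self, f, g):
        super().__init__()
        self.f = f  # arbitrary (layer-wise invertible) sub-module
        self.g = g

    def forward(self, x):
        x1, x2 = torch.chunk(x, 2, dim=1)
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return torch.cat([y1, y2], dim=1)

    def inverse(self, y):
        # Phase 1 of the hybrid backward pass: recover the block input from its
        # output without storing any hidden activation of f or g.
        y1, y2 = torch.chunk(y, 2, dim=1)
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return torch.cat([x1, x2], dim=1)

The calls to f and g inside inverse() account for one of the two additional forward passes mentioned above; the other comes from the layer-wise reconstruction performed while the gradients flow back through f and g.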
Following equation 19, the numerical error associated with the activations at a given layer i in this architecture is given by: $$\begin{aligned} | \epsilon _i |^2 = \frac{| \epsilon _y |^2 \times | z_i |^2}{| y |^2} \times \prod _i^{K} \alpha _{Rev}, \end{aligned}$$ in which K represents the number of reversible blocks between the layer i and the output. Comparing this equation to equation 27, the stability of this architecture is due to the following two factors: first, the number of reversible blocks K is typically two to three times smaller than the number of layers N, as a reversible block is typically made of several convolutions: \(K<N\). Second, and most importantly, the SNR reduction factor is much smaller in reversible blocks than in Batch Normalization \(\alpha _{Rev} \ll \alpha _{BN}\) and Leaky ReLU layers \(\alpha _{Rev} \ll \alpha _{LReLU}\).
Experiments
This section is organized in two parts: first, we start by analyzing numerical errors arising in both the layer-wise invertible and hybrid architectures. Second, we compare our hybrid architecture to existing models. All our experiments use the CIFAR10 dataset as a benchmark. The CIFAR10 dataset is complex enough to require efficient architectures to reach high accuracy, yet small enough to enable us to rapidly iterate over different architectural designs. Unless stated otherwise, all models were trained for 50 epochs of stochastic gradient descent with cyclical learning rate and momentum [23] with minimal image augmentation. The code used to produce the results is available at the link given in the "Availability of data and materials" section.
Impact of numerical stability
The idea of layer-wise invertibility is attractive because it maximally reduces the memory footprint of CNN training by bypassing the local bottleneck of architectures based on reversible blocks (i.e., RevNet or iRevNet). Unfortunately, we will show in this section that deep architectures based only on layer-wise invertibility cannot be successfully trained due to numerical errors preventing the model from converging to high-quality solutions. Instead, layer-wise invertibility can be combined with reversible block-level invertibility to get the best of both worlds: reversible blocks allow for long chains of reconstructions without numerical errors reaching critical values, while layer-wise invertibility is used within reversible blocks to bypass the local memory bottleneck. Figure 10 shows the inverse reconstruction error for each layer of both architectures, in order to visualize this phenomenon. This figure suggests that the layer-wise invertible architecture cannot scale with depth, as numerical errors accumulate with depth. On the other hand, in the case of the hybrid architecture, one can see that numerical errors accumulate within reversible blocks, but that the long-term trend of the SNR is stable due to the stable inverse operation of the reversible blocks. Figure 11 quantifies the degradation of the inverse reconstruction for the two models. We found the two most impacting parameters to be the depth N of the network and the negative slope n of the activation function, so we show the evolution of the reconstruction errors when varying both parameters.
Evolution of the SNR through the layers of a (left) layer-wise invertible model and (right) hybrid architecture model. The lower the SNR is, the larger the numerical errors of the inverse reconstructions are.
The x axis corresponds to layer indices of the model: right-most values represent the top layer of the model, in which the least noise is observed. Left-most values represent input layers in which maximum levels of noise accumulate. (Left): color boxes illustrate the span of two consecutive convolutional blocks (convolution–normalization–activation layers). The SNR gets continuously degraded throughout each block of the network, resulting in numerical instabilities. (Right): color boxes illustrate consecutive reversible blocks. Within reversible blocks, the SNR quickly degrades due to the numerical errors introduced by invertible layers. However, the signal propagated to the input of each reversible block is recomputed using the reversible block inverse, which is much more stable. Hence, we can see a sharp decline of the SNR within the reversible blocks, but the SNR almost raises back to its original level at the input of each reversible block Illustration of the impact of depth (in number of layers N) and negative slope n on the numerical errors of (left) the layer-wise invertible architecture and (right) the hybrid architecture. Both figures show the evolution of the SNR at the input layer of the network for increasing depth N on the x axis, and with different negative slopes n in different colors. (Left): the SNR decreases with depth until it reaches an SNR value of 1. At this point, the noise is of the same scale as the signal, and no learning can happen. It is impressive that with only four layers of depth, a negative slope of \(n=0.005\) reaches a SNR of 1. With such parameterization, even the most shallow models are not capable of learning. (Right) The hybrid architecture successfully stabilizes the numerical error propagation Finally, we investigate the impact of numerical errors on the accuracy. In order to isolate the impact of the numerical errors, we compare the accuracy reached by the same architectures with and without inverse reconstruction of the hidden layers activations. Without reconstruction, the hidden activation values are stored in memory along the forward pass, and the gradient updates are computed from the true, noiseless activation values. With inverse reconstructions, activation values are recovered by inverse operators during the backward pass. Hence, the only difference between both settings is the noise introduced by the inverse reconstructions. In Fig. 12, we show evolution of the accuracy with increasing depth. Impact of the numerical errors on the accuracy of (left) layer-wise invertible models and (right) hybrid architecture model. (Left): evolution of the accuracy with depth for a negative slope \(n=0.2\) with and without inverse reconstructions. Without reconstruction, the model accuracy benefits from depth. With inverse reconstructions, the model similarly benefits from depth as the number of layers grow from 3 to 7. For \(N>7\), however, the accuracy sharply decreases toward lower values due to numerical errors. (Right): our proposed hybrid architecture greatly stabilizes the numerical errors, which results in smaller effects of the depth and negative slope on accuracy In the case of the layer-wise invertible architecture: For small depths (or high negative slopes), in which the numerical errors are minimum, both models yield similar accuracy. However, as the numerical errors grow, the accuracy of the model goes down, while the accuracy of the ideal baseline keeps increasing, which can be seen with both depth and negative slopes. 
This loss in accuracy is the direct result of numerical errors, which prevent the model from converging to higher accuracies. In the case of the hybrid architecture, the negative impacts of numerical errors observed in the layer-wise architecture are gone, confirming that the numerical stability brought by the hybrid architecture effectively stabilizes training.
Table 1 compares architectures with different patterns of reversibility. We called our model RevNeXt as a reference to both prior works and the eXtreme memory reduction of the RevNet architecture aimed at by our work. The exact parameterization of our proposed RevNeXt is given together with other architectures in Table 1. To allow for a fair comparison, we have tweaked each architecture to keep the number of parameters as close as possible, with the notable exception of the i-RevNet architecture. The i-RevNet pooling scheme enforces a quadratic growth of its parameters with each level of pooling. In order to keep the number of parameters of the i-RevNet close to the other baselines, we would have to drastically reduce the number of channels of lower layers, which we found to yield poor performance. Furthermore, it should be noted that the i-RevNet architecture we present slightly differs from the original i-RevNet model, as our implementation uses RevNet-like reversible modules with one module per channel split for similarity with the other architectures we evaluate, instead of the single module used in the original architecture.
Table 1 Summary of architectures with different levels of reversibility
Our model drastically cuts the memory cost of training, which comes at the cost of both a small degradation in accuracy and additional computations. The additional computation requirements remain manageable, though: our hybrid architecture requires the computational equivalent of two additional forward passes within each backward pass. As an illustration of the applications enabled by our model, in Table 2 we compare the time needed to train our proposed architecture to 93.3% accuracy on a high-end Nvidia GTX 1080Ti and a low-end Nvidia GTX750. The GTX750 only has 1 GB of VRAM, which results in roughly 400 MB of available memory after the initialization of various frameworks. Training a vanilla ResNet with large batch sizes on such limited memory resources is impractical, while our architecture allows for efficient training.
Table 2 Training statistics on different hardware
Conclusion
Convolutional Neural Networks form the backbone of modern computer vision systems. However, the accuracy of these models comes at the cost of resource-intensive training and inference procedures. While tremendous efforts have been put into the optimization of the inference step on resource-limited devices, relatively little work has focused on algorithmic solutions for limited-resource training. In this paper, we have presented an architecture able to yield high-accuracy classification within very tight memory constraints. We highlighted several potential applications of memory-efficient training procedures, such as on-device training, and illustrated the efficiency of our approach by training a CNN to 93.3% accuracy on a low-end GPU with only 1 GB of memory.
Availability of data and materials
The data used in this work are publicly available online. The code used for the experiments is available on GitHub at the following address: https://github.com/TristHas/reversibility_paper
Abbreviations
CNN: Convolutional Neural Network; SNR: Signal-to-noise ratio; GPU: Graphical Processing Unit; ResNet: Residual network; RevNet: Reversible network
References
B. Jacob, S. Kligys, B. Chen, M.
Zhu, M. Tang, A. Howard, H. Adam, D. Kalenichenko, Quantization and training of neural networks for efficient integer-arithmetic-only inference. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2704–2713 (2018) P. Molchanov, S. Tyree, T. Karras, T. Aila, J. Kautz, Pruning convolutional neural networks for resource efficient inference. arXiv preprint arXiv:1611.06440 (2016) D.P. Kingma, J. Ba, Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014) C.J. Shallue, J. Lee, J. Antognini, J. Sohl-Dickstein, R. Frostig, G.E. Dahl, Measuring the effects of data parallelism on neural network training. arXiv preprint arXiv:1811.03600 (2018) S. McCandlish, J. Kaplan, D. Amodei, O. Dota Team, An empirical model of large-batch training. arXiv preprint arXiv:1812.06162 (2018) A.N. Gomez, M. Ren, R. Urtasun, R.B. Grosse, The reversible residual network: backpropagation without storing activations. In: Advances in Neural Information Processing Systems, pp. 2214–2224 (2017) J.-H. Jacobsen, A. Smeulders, E. Oyallon, i-revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088 (2018) L. Dinh, D. Krueger, Y. Bengio, Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516 (2014) L. Dinh, J. Sohl-Dickstein, S. Bengio, Density estimation using real nvp. arXiv preprint arXiv:1605.08803 (2016) D.P. Kingma, P. Dhariwal, Glow: Generative flow with invertible 1x1 convolutions. In: Advances in Neural Information Processing Systems, pp. 10215–10224 (2018) D.J. Rezende, S. Mohamed, Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770 (2015) T.Q. Chen, Y. Rubanova, J. Bettencourt, D.K. Duvenaud, Neural ordinary differential equations. In: Advances in Neural Information Processing Systems, pp. 6571–6583 (2018) J.-H. Jacobsen, J. Behrmann, R. Zemel, M. Bethge, Excessive invariance causes adversarial vulnerability. arXiv preprint arXiv:1811.00401 (2018) J. Behrmann, D., Duvenaud, J.-H. Jacobsen, Invertible residual networks. arXiv preprint arXiv:1811.00995 (2018) S. Rota Bulò, L. Porzi, P. Kontschieder, In-place activated batchnorm for memory-optimized training of dnns. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5639–5647 (2018) F.N. Iandola, S. Han, M.W. Moskewicz, K. Ashraf, W.J. Dally, K. Keutzer, Squeezenet: Alexnet-level accuracy with 50x fewer parameters and \(<\) 0.5 mb model size. arXiv preprint arXiv:1602.07360 (2016) A.G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, H. Adam, Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017) J. Frankle, M. Carbin, The lottery ticket hypothesis: Finding sparse, trainable neural networks. arXiv preprint arXiv:1803.03635 (2018) P. Micikevicius, S. Narang, J. Alben, G. Diamos, E. Elsen, D. Garcia, B. Ginsburg, M. Houston, O. Kuchaiev, G. Venkatesh, et al.: Mixed precision training. arXiv preprint arXiv:1710.03740 (2017) S. Wu, G. Li, F. Chen, L. Shi, Training and inference with integers in deep neural networks. arXiv preprint arXiv:1802.04680 (2018) J. Martens, I. Sutskever, Training deep and recurrent networks with hessian-free optimization. In: Neural Networks: Tricks of the Trade, pp. 479–535. Springer, (2012) T. Chen, B. Xu, C. Zhang, C. Guestrin, Training deep nets with sublinear memory cost. arXiv preprint arXiv:1604.06174 (2016) L.N. Smith, N. 
Topin, Super-convergence: Very fast training of neural networks using large learning rates. arXiv preprint arXiv:1708.07120 (2017) At the time of this writing, the only contributors to the content of this work are the authors. We would be happy to thank reviewers for useful feedback. This work was supported by a scholarship MEXT from the Japanese Ministry of Education, Culture, Sports, Science, and Technology. A part of this study is subsidized by JSPS Grant-in-Aid for Scientific Research and Research granted JP 17K00236. A part of this study is subsidized by JSPS Grant-in-Aid for Scientific Research and Research granted JP 19K24344 and JP 20K19823. This work was supported in part by PRESTO, JST (Grant No. JPMJPR15D2). Tristan Hascoet and Quentin Febvre contributed equally Kobe University, 1-1 Rokkodaicho, Nada Ward, Kobe, 657-0013, Japan Tristan Hascoet, Weihao Zhuang, Yasuo Ariki & Tetsuya Takiguchi Association for Advanced Science and Technology, 1-1 Rokkodaicho, Nada Ward, Kobe, 657-0013, Japan Yasuo Ariki & Tetsuya Takiguchi IMT Atlantique, 655 Avenue du Technopôle, Plouzané, 29280, France Quentin Febvre Tristan Hascoet Weihao Zhuang Yasuo Ariki Tetsuya Takiguchi T.H and Q.F equally contributed to the investigation and implementation presented in this work. T.T and Y.A provided advice on the research, and W.Z provided figures and modifications as suggested by the reviewers. All authors have read and approved the final version of the manuscript. Correspondence to Tristan Hascoet. Proof of batch normalization results To illustrate the mechanism through which the batch normalization inverse operation reduces the SNR, let us consider a toy layer with only two channels and parameters \(\beta =[0,0]\) and \(\gamma = [1, \rho ]\). For simplicity, let us consider an input signal x independently and identically distributed across both channels with zero mean and standard deviation 1 so that, in the forward pass, we have: $$\begin{aligned} y =&[y_0, y_1] \end{aligned}$$ $$\begin{aligned} =&[x_0, x_1 \times \rho ], \end{aligned}$$ $$\begin{aligned} |y|^2 =&|x_0|^2 + |x_1|^2 \times \rho ^2 \end{aligned}$$ $$\begin{aligned} =&\frac{1}{2} \times |x|^2 + \frac{1}{2} \times |x|^2 \times \rho ^2 \end{aligned}$$ $$\begin{aligned} =&\frac{|x|^2}{2} \times (1+\rho ^2), \end{aligned}$$ (30e) in which we used the assumption that x is independently and identically distributed across both channels to factorize \(|x_0|^2 = |x_1|^2 = \frac{1}{2} \times |x|^2\) in Eq. (17ad). During the backward pass, the noisy estimate \(\tilde{y}=y+\epsilon ^y\) is fed back as input to the inverse operation. 
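As a minimal numerical illustration of this toy example (a NumPy sketch added here for exposition; the noise scale and the value of \(\rho\) are arbitrary choices, not taken from the experiments), one can simulate the two-channel forward scaling and its inverse under additive noise and check that the measured SNR reduction approaches the closed-form factor derived in the remainder of this appendix:

import numpy as np

rng = np.random.default_rng(0)
rho = 8.0                                   # toy parameters gamma = [1, rho], beta = [0, 0]
n = 1_000_000                               # samples per channel
x = rng.standard_normal((2, n))             # zero-mean, unit-variance input on both channels
y = np.vstack([x[0], rho * x[1]])           # forward pass: y = [x0, rho * x1]
eps_y = 1e-3 * rng.standard_normal((2, n))  # noise carried by the stored output
x_rec = np.vstack([y[0] + eps_y[0], (y[1] + eps_y[1]) / rho])  # inverse pass
eps_x = x_rec - x                           # reconstruction noise

snr_in = (x ** 2).sum() / (eps_x ** 2).sum()
snr_out = (y ** 2).sum() / (eps_y ** 2).sum()
alpha_measured = snr_in / snr_out
alpha_predicted = 4.0 / ((1.0 + 1.0 / rho ** 2) * (1.0 + rho ** 2))
print(alpha_measured, alpha_predicted)      # the two values should agree closely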
Similarly, let us suppose a noise \(\epsilon ^y\) identically distributed across both channels so that we have: $$\begin{aligned} \tilde{y} =&[ \tilde{y}_0, \tilde{y}_1 ] \end{aligned}$$ $$\begin{aligned} =&[ x_0 + \epsilon _0^y, x_1 \times \rho + \epsilon _1^y ], \end{aligned}$$ $$\begin{aligned} \tilde{x} =&[ \tilde{y}_0, \frac{\tilde{y}_1}{\rho }] \end{aligned}$$ $$\begin{aligned} =&[ x_0 + \epsilon _{0}^y, x_1 + \frac{\epsilon _{1}^y}{\rho } ], \end{aligned}$$ $$\begin{aligned} \epsilon ^x =&\tilde{x} - x \end{aligned}$$ $$\begin{aligned} =&[ \epsilon _0^y, \frac{\epsilon _{1}^y}{\rho } ] \end{aligned}$$ (31f) $$\begin{aligned} |\epsilon ^x|^2 =&|\epsilon _0^y|^2 + \frac{|\epsilon _1^y|^2}{\rho ^2}, \end{aligned}$$ (31g) $$\begin{aligned} =&\frac{1}{2} \times |\epsilon ^y|^2 + \frac{1}{2} \times \frac{|\epsilon ^y|^2}{\rho ^2} \end{aligned}$$ $$\begin{aligned} =&\frac{|\epsilon ^y|^2}{2} \times (1 + \frac{1}{\rho ^2}). \end{aligned}$$ (31i) Using the above formulation, the SNR reduction factor \(\alpha\) can be expressed as: $$\begin{aligned} \alpha =&\frac{snr_i}{snr_o} \end{aligned}$$ $$\begin{aligned} =&\frac{|x|^2}{|\epsilon ^x|^2} \times \frac{|\epsilon ^y|^2}{|y|^2} \end{aligned}$$ $$\begin{aligned} =&\frac{4}{(1+\frac{1}{\rho ^2}) \times (1 + \rho ^2)}. \end{aligned}$$ In essence, numerical instabilities in the inverse computation of the batch normalization layer arise from the fact that the signal across different channels i and j are amplified by different factors \(\gamma _i\) and \(\gamma _j\). While the signal amplification in the forward and inverse path cancel out each other (\(x=f^{-1}(f(x))\)), the noise only gets amplified in the backward pass. In the above demonstration, we have used a toy parameterization of the invertible batch normalization layer to illustrate the mechanism behind the SNR degradation. For arbitrarily parameterized batch normalization layers, the SNR degradation factor becomes: $$\begin{aligned} =&\frac{|x|^2}{|y|^2} \times \frac{|\epsilon ^y|^2}{|\epsilon ^x|^2}. \end{aligned}$$ Assuming a noise \(\epsilon ^y\), equally distributed across all channels, the noise ratio can be computed as follows: $$\begin{aligned} \tilde{y}_i =&\gamma _i \times \frac{x_i - \hat{x_i}}{\sqrt{\dot{x_i}} + \epsilon } + \beta _i + \epsilon ^y_i, \end{aligned}$$ $$\begin{aligned} \tilde{x}_i =&(\sqrt{\dot{x_i}} + \epsilon ) \times \frac{\tilde{y}_i - \beta _i}{\gamma _i} + \hat{x_i} \end{aligned}$$ $$\begin{aligned} =&x_i + \frac{\sqrt{\dot{x_i}}+\epsilon }{\gamma _i} \times \epsilon ^y_i, \end{aligned}$$ $$\begin{aligned} \epsilon ^x_i =&\tilde{x}_i - x_i \end{aligned}$$ $$\begin{aligned} =&\frac{\sqrt{\dot{x_i}}+\epsilon }{\gamma _i} \times \epsilon ^y_i, \end{aligned}$$ $$\begin{aligned} \frac{|\epsilon ^y|^2}{|\epsilon ^x|^2} =&\frac{|\epsilon ^y|^2}{\frac{|\epsilon ^y|^2}{c} \times \sum _i \frac{\dot{x_i}^2}{\gamma _i^2}} \end{aligned}$$ $$\begin{aligned} =&\frac{c}{\sum _i \frac{\sqrt{\dot{x_i}}+\epsilon }{\gamma _i}}. 
\end{aligned}$$ Assuming input x following a Gaussian distribution with channel-wise mean \(\hat{x_i}\) and variance \(\dot{x_i}\), the SNR reduction factor \(\alpha\) becomes: $$\begin{aligned} \frac{|x|^2}{|y|^2} =&\frac{\sum _i |x_i|^2}{\sum _i|y_i|^2} \end{aligned}$$ $$\begin{aligned} =&\frac{\sum _i (\hat{x}_i^2 + \dot{x_i})}{\sum _i (\gamma _i^2 + \beta _i^2)}, \end{aligned}$$ $$\begin{aligned} \alpha =&\frac{|x|^2}{|y|^2} \times \frac{|\epsilon ^y|^2}{|\epsilon ^x|^2} \end{aligned}$$ $$\begin{aligned} =&\frac{\sum _i (\hat{x}_i^2 + \dot{x_i})}{\sum _i (\gamma _i^2 + \beta _i^2)} \times \frac{c}{\sum _i \frac{\sqrt{\dot{x_i}}+\epsilon }{\gamma _i}}. \end{aligned}$$ Proof of activation function results The analysis of the numerical errors yielded by the invertible Leaky ReLU follows a similar reasoning as the toy batch normalization example with an additional subtlety: Similar to the toy batch normalization example, we can think of the leaky ReLU as artificially splitting the input x across two different channels, one channel leaving the output unchanged and one channel that divides the input by a factor n during the forward pass and multiplies its output by a factor n during the backward pass. However, these artificial channels are defined by the sign of the input and output during the forward and backward pass, respectively. Hence, we need to consider the cases in which the noise flips the sign of the output activations, which leads to different behaviors of the invertible Leaky ReLU across four cases: $$\begin{aligned} y = {\left\{ \begin{array}{ll} y_{nn} \ \text {if}\ \hat{y}<0 &{}\text {and}\ y<0 \\ y_{np} \ \text {if}\ \hat{y}>=0 &{}\text {and}\ y<0 \\ y_{pp} \ \text {if}\ \hat{y}>=0 &{}\text {and}\ y>=0 \\ y_{pn} \ \text {if}\ \hat{y}<0 &{}\text {and}\ y>=1 \end{array}\right. }, \end{aligned}$$ where the index np, for instance, represents negative activations whose reconstructions have become positive due to the added noise. The signal-to-noise ratio of the input and outputs can be expressed, respectively, as: In the case where \(y \gg \epsilon _y\), the probability of sign flips (\(y_{np}\), \(y_{pn}\)) is negligible, so that the output signal y is evenly split along \(y_{pp}\) and \(y_{nn}\). In this regime, the degradation of the SNR obeys a formula similar to the toy batch normalization example: $$\begin{aligned} y =&[y_{pp}, y_{nn}] \end{aligned}$$ $$\begin{aligned} =&[x_{pp}, \frac{x_{nn}}{n}], \end{aligned}$$ $$\begin{aligned} |y|^2 =&\frac{1}{2} \times |x|^2 + \frac{1}{2} \times \frac{|x|^2}{n^2} \end{aligned}$$ $$\begin{aligned} =&\frac{|x|^2}{2} \times (1+\frac{1}{n^2}). \end{aligned}$$ $$\begin{aligned} \tilde{y} =&[ \tilde{y}_{pp}, \tilde{y}_{nn}] \end{aligned}$$ $$\begin{aligned} =&[ x_{pp} + \epsilon _{pp}^y, \frac{x_{nn}}{n} + \epsilon _{nn}^y ], \end{aligned}$$ $$\begin{aligned} \tilde{x} =&[ \tilde{y}_{pp}, \tilde{y}_{nn} \times n] \end{aligned}$$ $$\begin{aligned} =&[ x_{pp} + \epsilon _{pp}^y, x_{nn} + \epsilon _{nn}^y \times n ], \end{aligned}$$ $$\begin{aligned} =&[ \epsilon _{pp}^y, \epsilon _{nn}^y \times n ], \end{aligned}$$ $$\begin{aligned} |\epsilon ^x|^2 =&\frac{1}{2} \times |\epsilon ^y|^2 + \frac{1}{2} \times |\epsilon ^y|^2 \times n^2 \end{aligned}$$ $$\begin{aligned} =&\frac{|\epsilon ^y|^2}{2} \times (1 + n^2). \end{aligned}$$ Using the above formulation, the signal-to-noise ratio reduction factor \(\alpha\) can be expressed as: $$\begin{aligned} =&\frac{4}{(1+\frac{1}{n^2}) \times (1 + n^2)}. 
\end{aligned}$$ When the noise reaches an amplitude similar to or greater than the activation signal, the effects of sign flips complicate the equation. However, in this regime, the signal-to-noise ratio becomes too low for training to converge, as numerical errors prevent any useful weight update, so we leave the problem of characterizing this regime open. Hascoet, T., Febvre, Q., Zhuang, W. et al. Reversible designs for extreme memory cost reduction of CNN training. J Image Video Proc. 2023, 1 (2023). https://doi.org/10.1186/s13640-022-00601-w
CommonCrawl
Was Gauss aware of the non-euclidean implications of his work on modular forms? Until recently, I thought that the first to observe the connection between modular forms and the hyperbolic plane (a model of non-euclidean geometry) was Felix Klein. But in the last week I read the chapter on non-euclidean geometry in the book of John Stillwell - "Mathematics and its History", where it's mentioned that using his theory of modular forms Gauss discovered a tessellation of the Poincaré disk model by equilateral triangles of angles: $$\pi/4,\pi/4,\pi/4$$. So how could Gauss accomplish this without knowing the hyperbolic plane? I understand that this work was done in the context of elliptic functions and modular forms (not hyperbolic geometry), but the fact that he used triangles with angle sum less than $\pi$ suggests that he might have suspected that it's connected with hyperbolic geometry. Also, the fact that Klein studied the writings of Gauss very closely might suggest that this accomplishment had historical influence. So I'll be glad to have this "historical corner" clarified. biographical-details I seriously doubt that Gauss knew about the Poincaré model. – Moishe Kohan Jul 30 '17 at 15:51 I think Stillwell is over-interpreting what Gauss knew. He refers to Gauss' "Werke", Band 8, p. 104. Once you look there (assuming that you read German), you realize that most of what is written there is by Fricke, who explains how Gauss' work can be interpreted using the Poincaré unit disk model (and the language of discontinuous group actions). It is entirely possible (or even likely) that Gauss knew much more than he wrote, but we will never know... – Moishe Kohan Aug 1 '17 at 1:07 The best source on this is Klein's Lectures on mathematics in the XIX century, vol. I. It has a whole chapter on Gauss, with the main focus on elliptic integrals and modular forms. Klein can be trusted because he himself worked on this and really read a lot of Gauss's writings. It seems to me from this book that Gauss did not know about the connection with non-Euclidean geometry, and Poincaré's model is indeed Poincaré's invention. So one can only say that Gauss discovered a tessellation of a circle using certain modular figures. I accept your answer since there is really not much evidence that Gauss systematically developed hyperbolic geometry. Can you answer another question of mine: did Gauss know Jacobi's four squares theorem? The evidence there is much more solid, since certainly Gauss knew the connection between certain quadratic forms and the theory of theta functions. The crucial identity which Jacobi used to prove his theorem is stated in Gauss's Nachlass. – user2554 Aug 1 '17 at 18:53 @user2554: I cannot tell anything about 4 squares, I do not have Klein with me at this time. But from what I read I do not remember any claims that Gauss knew it. – Alexandre Eremenko Aug 1 '17 at 19:44 That's why I ask... I don't ask questions that have easy answers. I hope you read my post; the point of debate is in my opinion whether knowledge of the crucial part of the proof implies understanding of the connection between sum of squares functions and the coefficients of powers of the theta function. My question is therefore intended to clarify whether the derivation of the main identity of the proofs already identifies this connection quantitatively.
– user2554 Aug 1 '17 at 20:06 Just to shed new light on Gauss's possible thoughts regarding hyperbolic geometry, I had to throw into this post two pieces of information I found. From a modern viewpoint, these two pieces are connected to elliptic geometry, not to the geometry that Gauss termed "non-euclidean geometry" (which is actually hyperbolic). According to p. 17 in the article "Loxodromic Spirals in M. C. Escher's Sphere Surface", Gauss found in 1819 (in his unpublished fragment "Die Kugel") that rotations of the Riemann sphere correspond to unitary Möbius transformations - that is, by assigning complex numbers to the sphere using the stereographic projection, a rotation can be represented as a fractional linear transformation: $$z \mapsto \frac {{az + b}}{{-\bar{b}z + \bar{a}}}$$ This piece of information is important in two respects: it shows that Gauss was the first to appreciate the importance and usefulness of the extended complex plane $\mathbb{C}\cup \{\infty\}$ for geometric problems on the sphere, and secondly it shows that Gauss understood that certain Möbius transformations generate the isometries of the Riemann sphere. Since the Riemann sphere is intimately connected to models of elliptic geometry, and since Gauss also encountered similar transformations in his analytic (modular forms) and number-theoretic (reduction of binary quadratic forms) studies, one can argue there is a faint relation between Gauss's meditations on non-euclidean geometries and his analytic work. The second fragment of Gauss dates from 1840 and is entitled "Der Kreis". It's less important than the first one, and deals with the so-called "cross-ratio" of four points, which is of significance in models of non-euclidean geometry. I don't understand exactly what its significance is, but Stäckel comments on it, stating that Gauss improved in it the notions of "double ratio" and "harmonic conjugates". This fragment might be entirely entangled in Gauss's mind with synthetic geometry considerations and Möbius's barycentric calculus (not with hyperbolic geometry), but from a modern point of view, there is a relation.
CommonCrawl
Markov reward model In probability theory, a Markov reward model or Markov reward process is a stochastic process which extends either a Markov chain or continuous-time Markov chain by adding a reward rate to each state. An additional variable records the reward accumulated up to the current time.[1] Features of interest in the model include expected reward at a given time and expected time to accumulate a given reward.[2] The model appears in Ronald A. Howard's book.[3] The models are often studied in the context of Markov decision processes where a decision strategy can impact the rewards received. The Markov Reward Model Checker tool can be used to numerically compute transient and stationary properties of Markov reward models. Markov chain See Markov Chain See Markov chain Monte Carlo Continuous-time Markov chain The accumulated reward at a time t can be computed numerically over the time domain or by evaluating the linear hyperbolic system of equations which describe the accumulated reward using transform methods or finite difference methods.[4] References 1. Begain, K.; Bolch, G.; Herold, H. (2001). "Theoretical Background". Practical Performance Modeling. pp. 9. doi:10.1007/978-1-4615-1387-2_2. ISBN 978-1-4613-5528-1. 2. Li, Q. L. (2010). "Markov Reward Processes". Constructive Computation in Stochastic Models with Applications. pp. 526–573. doi:10.1007/978-3-642-11492-2_10. ISBN 978-3-642-11491-5. 3. Howard, R.A. (1971). Dynamic Probabilistic Systems, Vol II: Semi-Markov and Decision Processes. New York: Wiley. ISBN 0471416657. 4. Reibman, A.; Smith, R.; Trivedi, K. (1989). "Markov and Markov reward model transient analysis: An overview of numerical approaches" (PDF). European Journal of Operational Research. 40 (2): 257. doi:10.1016/0377-2217(89)90335-4.
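As a minimal numerical sketch of this computation (not part of the article; the generator matrix, reward rates and time horizon below are made-up illustrative values), the expected reward accumulated by time t in a small continuous-time Markov reward model can be obtained by integrating the transient expected reward rate:

import numpy as np
from scipy.linalg import expm

# Generator matrix Q (off-diagonal entries are transition rates, rows sum to zero)
Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  1.0, -1.0]])
r = np.array([3.0, 1.0, 0.0])        # reward rate attached to each state
p0 = np.array([1.0, 0.0, 0.0])       # initial distribution: start in state 0
t, steps = 5.0, 500

# E[accumulated reward by time t] = integral_0^t p0 * exp(Q s) * r ds,
# approximated with the trapezoidal rule on a uniform grid in s.
grid = np.linspace(0.0, t, steps + 1)
rate = np.array([p0 @ expm(Q * s) @ r for s in grid])   # expected reward rate at time s
print(np.trapz(rate, grid))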
Wikipedia
\begin{document} \title{Relative Riemann-Zariski spaces} \author{Michael Temkin} \address{\tiny{Einstein Institute of Mathematics, The Hebrew University of Jerusalem, Giv'at Ram, Jerusalem, 91904, Israel}} \email{\scriptsize{[email protected]}} \thanks{I want to express my deep gratitude to B. Conrad for pointing out various gaps and mistakes in an earlier version of the article and to thank R. Huber for a useful discussion. Also I thank D. Rydh and the referee for pointing out some mistakes in \S2.3. A first version of the article was written during my stay at the Max Planck Institute for Mathematics at Bonn. The final revision was made when the author was staying at IAS and supported by NSF grant DMS-0635607.} \begin{abstract} In this paper we study relative Riemann–Zariski spaces associated to a morphism of schemes and generalizing the classical Riemann–Zariski space of a field. We prove that similarly to the classical RZ spaces, the relative ones can be described either as projective limits of schemes in the category of locally ringed spaces or as certain spaces of valuations. We apply these spaces to prove the following two new results: a strong version of the stable modification theorem for relative curves; a decomposition theorem which asserts that any separated morphism between quasi-compact and quasi-separated schemes factors as a composition of an affine morphism and a proper morphism. In particular, we obtain a new proof of Nagata’s compactification theorem. \end{abstract} \maketitle \section{Introduction} Let $K/k$ be a finitely generated field extension. In the first half of the 20-th century, Zariski defined a Riemann variety ${\rm RZ}_K(k)$ as the projective limit of all projective $k$-models of $K$. Zariski showed that this topological space, which is now called a Riemann-Zariski (or Zariski-Riemann) space, possesses the following set-theoretic description: to give a point ${\bf x}\in{\rm RZ}_K$ is equivalent to give a valuation ring ${\mathcal O}_{\bf x}$ with fraction field $K$ and such that $k\subset{\mathcal O}_{\bf x}$. The Riemann-Zariski space possesses a sheaf of rings ${\mathcal O}$ whose stalks are valuation rings of $K$ as above. Zariski made extensive use of these spaces in his desingularization works. Let $S$ be a scheme and $U$ be a subset closed under generalizations, for example $U=S_{\rm reg}$ is the regular locus of $S$, or $U=\eta$ is a maximal point of $S$. In many birational problems one wants to consider only $U$-modifications $S'\to S$, i.e. modifications which do not modify $U$. Then it is natural to consider the projective limit ${\mathfrak S}={\rm RZ}_U(S)$ of all $U$-modifications of $S$. It was remarked in \cite[\S3.3]{Temst} that working with such relative Riemann-Zariski spaces one can extend the $P$-modification results of \cite{Temst} to the case of general $U$ and $S$, and this plan is realized in \S\ref{applchap}. In \S\ref{affsec} we give a preliminary description of the space ${\mathfrak S}$, which is used in \S\ref{applicsec} to prove the first main result of the paper, the stable modification theorem \ref{stabmodtheorU} generalizing its analog from \cite{Temst}. Our improvement to the stable modification theorem \cite[1.5]{Temst} is in the control on the base change one has to perform in order to construct a stable modification of a relative curve $C\to S$.
Namely, we prove that in order to find a stable modification of a relative curve with semi-stable $U$-fibers it suffices to replace the base $S$ with a $U$-\'etale covering. Although a very rough study of relative RZ spaces suffices for the proof of Theorem \ref{stabmodtheorU}, it seems natural to investigate these spaces deeper. Furthermore, the definition of relative Riemann-Zariski spaces can be naturally generalized to the case of an arbitrary morphism $f:Y\to X$, and the case when $Y$ is a dominant point was already applied in \cite{Tem1}. So, it is natural to investigate the relative RZ spaces associated to a morphism $f:Y\to X$. We will see that under the very mild assumption that $f$ is a separated morphism between quasi-compact quasi-separated schemes, one obtains a very specific description of the space ${\rm RZ}_Y(X)$ which is similar to the classical case of ${\rm RZ}_K(k)$. Let us say that $f$ is {\em decomposable} if it factors into a composition of an affine morphism $Y\to Z$ and a proper morphism $Z\to X$. Actually, in \S\ref{affsec} we study ${\rm RZ}_Y(X)$ in the case of a general decomposable morphism because this case is not essentially easier than the case of an open immersion $Y\hookrightarrow X$. We define a set ${\rm Val}_Y(X)$ whose points are certain $X$-valuations of $Y$, and construct a surjection $\psi:{\rm Val}_Y(X)\to{\rm RZ}_Y(X)$. It will require some additional work to prove in Corollary \ref{homeomcor} that $\psi$ is actually a bijection (and even a homeomorphism with respect to natural topologies defined in the paper). Now, a natural question to ask is if the decomposition assumption is essential. Slightly surprisingly, the answer is negative because the assumption is actually empty. A second main result of this paper is the decomposition theorem \ref{decompth} which states that a morphism of quasi-compact quasi-separated schemes is decomposable if and only if it is separated. Thus, the description of relative RZ spaces obtained in the decomposable case is actually the general one. We give two proofs of the decomposition theorem in this paper. The first proof is based on the Nagata compactification and Thomason approximation theorems. Actually, we prove in \S\ref{intrsec} that the decomposition theorem is essentially equivalent to the union of these two theorems. This accomplishes the first proof. On the other hand, it turns out that a deeper study of relative RZ spaces leads to an independent proof of the decomposition theorem as explained in \S\ref{mainsec}. In particular, we obtain new proofs of Nagata's and Thomason's theorems. Though there are a few known proofs of Nagata's theorem, see \cite{Con} and \cite{Lut}, the author expects that the new proof might be better suited for applying to algebraic spaces and (perhaps) certain classes of stacks (joint project with I. Tyomkin). Let us describe briefly the structure of the paper. In \S\ref{intrsec} we prove a slight generalization of Thomason's theorem and show that the decomposition theorem is essentially equivalent to the union of Nagata's and Thomason's theorems. In \S\ref{applchap} we start our study of relative RZ spaces and apply them to the strong stable modification theorem. Then, \S\ref{nagchap} is devoted to further study of the relative RZ spaces. In \S\ref{spasec} we establish an interesting connection between Riemann-Zariski spaces and the adic spaces of R. Huber; in particular, we obtain an intrinsic topology on ${\rm Val}_Y(X)$.
However, it turns out that the notion of an open subdomain in the spaces ${\rm Val}_Y(X)$ is much finer than its analog in the adic spaces. It requires some work to prove in Theorem \ref{affbaseth} that open subdomains of the form ${\rm Val}_{{\rm Spec}(B)}({\rm Spec}(A))$ form a basis for the topology of ${\rm Val}_Y(X)$. In \S\ref{blowupsec} we study $Y$-blow ups of $X$, which are analogs of $U$-admissible or formal blow ups from Raynaud's theory, see \cite{BL}. As a corollary, we prove that $\psi:{\rm Val}_Y(X)\to{\rm RZ}_Y(X)$ is a homeomorphism in the decomposable case. Finally, we prove in Theorem \ref{domth} that any open quasi-compact subset of ${\rm Val}_Y(X)$ admits a scheme model of the form ${\rm Val}_{\ol Y}({\ol X})$ with ${\ol Y}$ being ${\ol X}$-affine. This result implies the decomposition theorem, and, therefore, leads to a new proof of Nagata's theorem. I want to mention that I was motivated by Raynaud's theory in my study of Riemann-Zariski spaces in the decomposable case, and some basic ideas are taken from \cite{BL}. I give a simple illustration of those ideas in the proof of the generalized Thomason's theorem. When this paper was almost finished I was informed about a recent paper \cite{FK} by Fujiwara and Kato, which contains a survey on a theory of generalized Riemann-Zariski spaces they are developing. The survey announces many interesting results, including Nagata compactification for algebraic spaces. It is clear that there is a certain overlap between that theory and the present paper which can be rather large, though it is difficult to make any conclusion on this subject until the actual proofs are published. The generalized RZ spaces mentioned in \cite{FK} are exactly the relative RZ spaces of open immersions $Y\hookrightarrow X$ (the same case which is used in the proof of the stable modification theorem). Finally, let us discuss the most recent progress that was made during the last year. Nagata compactification for algebraic spaces was proved independently by Conrad-Lieblich-Olsson in \cite{CLO} (implementing Gabber's approach) and D. Rydh in \cite{Rydh}. In both cases one reduces this to the scheme case rather than proving it from scratch. It should also be noted in this context that important particular cases of the latter theorem (when the algebraic spaces are normal or when the target is a field) were proved much earlier by Raoult, see \cite{R1} and \cite{R2}. \subsection{On noetherian approximation and Nagata compactification} \label{intrsec} For shortness, a filtered projective family of schemes with affine transition morphisms will be called {\em affine filtered family}. Also, we abbreviate the words "quasi-compact and qua\-si-separated" by the single "word" qcqs. In \cite[C.9]{TT}, Thomason proved a very useful approximation theorem, which states that any qcqs scheme $Y$ over a ring $\Lambda$ is isomorphic to a scheme $\projlim Y_\alpha$, where $\{Y_\alpha\}_\alpha$ is an affine filtered family of $\Lambda$-schemes of finite presentation. Due to the following lemma, this theorem may be reformulated in a more laconic way as follows: $Y$ is affine over a $\Lambda$-scheme $Y_0$ of finite presentation. \begin{lem} A morphism of qcqs schemes $f:Y\to X$ is affine if and only if $Y\widetilde{\to}\projlim Y_\alpha$, where $\{Y_\alpha\}_\alpha$ is a filtered family of $X$-affine finitely presented $X$-schemes.
\end{lem} \begin{proof} If $Y\widetilde{\to}\projlim Y_\alpha$ is as in the lemma then $Y_\alpha={\bf Spec}({\mathcal E}_\alpha)$ for an ${\mathcal O}_X$-algebra ${\mathcal E}_{\alpha}$, hence $Y={\bf Spec}({\mathcal E})$ where ${\mathcal E}=\injlim{\mathcal E}_\alpha$. Conversely, suppose that $f$ is affine. By \cite[6.9.16(iii)]{egaI}, $f_*({\mathcal O}_Y)\widetilde{\to}\injlim{\mathcal E}_\alpha$, where $\{{\mathcal E}_\alpha\}$ is a filtered family of finitely presented ${\mathcal O}_X$-algebras. Hence $Y=\projlim{\bf Spec}({\mathcal E}_\alpha)$. \end{proof} We generalize Thomason's theorem below. As a by-product, we obtain a simplified proof of the original theorem. \begin{theor} \label{approxtheor} Let $f:Y\to X$ be a (separated) morphism of qcqs schemes. Then $f$ can be factored into a composition of an affine morphism $Y\to Z$ and a (separated) morphism $Z\to X$ of finite presentation. \end{theor} \begin{proof} Step 1. {\sl Preliminary work.} First we observe that if $f$ is separated and $Y\to Z\to X$ is a factorization as in the theorem, then $Y$ is the projective limit of schemes $Y_{\alpha}$ which are affine over $Z$ and of finite presentation. By \cite[C.7]{TT}, already some $Y_{\alpha}$ is separated over $X$, hence replacing $Z$ with $Y_{\alpha}$, we achieve a factorization with $X$-separated $Z$. This allows us to deal only with the general (not necessarily separated) case in the sequel. If $Y$ is affine and $f(Y)$ is contained in an open affine subscheme $X'\subset X$ then the claim is obvious. So, $Y$ admits a finite covering by open qcqs subschemes $Y_1,\dots , Y_n$ such that the induced morphisms $Y_i\to X$ satisfy the conclusion of the theorem. It suffices to prove that one can decrease the natural number $n$ until it becomes $1$, and, obviously, it suffices to deal only with the case of $n=2$. Then the schemes $U:=Y_1$ and $V:=Y_2$ can be represented as $U=\projlim U_\beta$ and $V=\projlim V_\gamma$, where the limits are taken over $X$-affine filtered families of $X$-schemes of finite presentation. Step 2. {\sl Affine domination.} By \cite[$\rm IV_3$, 8.2.11]{ega}, for $\beta\ge\beta_0$ and $\gamma\ge\gamma_0$, the schemes $U_\beta$ and $V_\gamma$ contain open subschemes $U'_\beta$ and $V'_\gamma$, whose preimages in $U$ and $V$ coincide with $W:=U\cap V$. By \cite[$\rm IV_3$, 8.13.1]{ega}, the morphism $W\to U'_{\beta_0}$ factors through $V'_\gamma$ for sufficiently large $\gamma$. Replace $\gamma_0$ by $\gamma$. By the same reason, the morphism $W\to V'_{\gamma_0}$ factors through some $U'_\beta$ and the morphism $W\to U'_\beta$ factors through some $V'_\gamma$. Let us denote the corresponding morphisms as $f_{\gamma,\beta}:V'_\gamma\to U'_\beta$, $f_{\beta,\gamma_0}$ and $f_{\gamma_0,\beta_0}$. Now comes an obvious but critical argument: $f_{\beta,\gamma_0}$ is separated because the composition $f_{\gamma_0,\beta_0}\circ f_{\beta,\gamma_0}:U'_\beta\to U'_{\beta_0}$ is separated (and even affine); $f_{\gamma,\beta}$ is affine because its composition with the separated morphism $f_{\beta,\gamma_0}$ is affine. We gather the already defined objects in the left diagram below. Note that everything is defined over $X$, the horizontal arrows are open immersions, the vertical arrows are affine morphisms and the indexed schemes are of finite $X$-presentation.
$$ \xymatrix{ V\ar[d] & W \ar[d]^{\phi'}\ar@{_{(}->}[l]\ar@{^{(}->}[r] & U\ar[dd]^h & & & V\ar[d] & W \ar[d]^{\phi'}\ar@{_{(}->}[l]\ar@{^{(}->}[r] & U\ar[d]^\phi\\ V_\gamma& V'_\gamma \ar[d]\ar@{_{(}->}[l] & & & & V_\gamma& V'_\gamma \ar[d]^{f_{\gamma,\beta}}\ar@{_{(}->}[l]\ar@{^{(}->}[r] & U_\gamma\ar[d]\\ & U'_\beta \ar@{^{(}->}[r] & U_\beta & & & & U'_\beta \ar@{^{(}->}[r] & U_\beta} $$ Step 3. {\sl Affine extension.} The main task of this step is to produce the right diagram from the left one. It follows from the previous stage that $V'_\gamma={\bf Spec}({\mathcal E}')$, where ${\mathcal E}'$ is a finitely presented ${\mathcal O}_{U'_\beta}$-algebra. The morphism $\phi':W\to V'_\gamma$ to a $U'_\beta$-affine scheme corresponds to a homomorphism $\varphi':{\mathcal E}'\to h'_*({\mathcal O}_W)$, where $h':W\to U'_\beta$ is the projection. Obviously $h_*({\mathcal O}_U)|_{U'_\beta}\widetilde{\to} h'_*({\mathcal O}_W)$, where $h:U\to U_\beta$ is the projection. Hence we can apply \cite[6.9.10.1]{egaI}, to find a finitely presented ${\mathcal O}_{U_\beta}$-algebra ${\mathcal E}$ and a homomorphism $\varphi:{\mathcal E}\to h_*({\mathcal O}_U)$ such that ${\mathcal E}|_{U'_\beta}\widetilde{\to}{\mathcal E}'$ and the restriction of $\varphi$ to $U'_\beta$ is $\varphi'$. Set $U_\gamma={\bf Spec}({\mathcal E})$, then $U_\gamma\to U_\beta$ is an affine morphism whose restriction over $U'_\beta$ is $f_{\gamma,\beta}$, and $\varphi$ induces a morphism $\phi:U\to U_\gamma$. Finally, we glue $U_\gamma$ and $V_\gamma$ along $V'_\gamma$ obtaining a finitely presented $X$-scheme $Z$, and notice that the affine morphisms $U\to U_\gamma$ and $V\to V_\gamma$ glue to an affine morphism $Y\to Z$ over $X$. \end{proof} Our proof is a simple analog of Raynaud's theory. Thomason used the first two steps (induction argument in the proof of Theorem C.9 and Lemma C.6). Our simplification of his proof is due to the third step. The same arguments are used in Raynaud's theory, see the end of the proof of \cite[4.1(d)]{BL} and \cite[2.6(a)]{BL}. In our paper, they also appear in the proofs of Lemmas \ref{blowuplem}(i) and \ref{extblowuplem}, and Theorem \ref{domth}. Next, we recall Nagata compactification theorem, see \cite{Nag}. A scheme theoretic proof of the theorem can be found in \cite{Con} or \cite{Lut}. Recall that a morphism $f:Y\to X$ is called {\em compactifiable} if it can be factored as a composition of an open immersion $g:Y\to Z$ and a proper morphism $h:Z\to X$. Nagata proved that a finite type morphism $f:Y\to X$ of qcqs schemes is compactifiable if and only if it is separated. Actually, Nagata considered noetherian schemes, and the general case was proved by B. Conrad in \cite{Con}. Assume that $f$ is factored as above. Let ${\mathcal I}\subset{\mathcal O}_Z$ be an ideal with support $Z\setminus Y$ and let $Z'$ be the blow up of $Z$ along ${\mathcal I}$. We can choose a finitely generated ${\mathcal I}$ because the morphism $Y\hookrightarrow Z$ is quasi-compact. The open immersion $g':Y\to Z'$ is affine because $Z'\setminus Y$ is a locally principal divisor. It follows that $f$ is a composition of an affine morphism $g'$ of finite type and a proper morphism $Z'\to X$. Conversely, assume that $g:Y\to Z$ is affine of finite type and $Z\to X$ is proper. Then $Y$ is quasi-projective over $Z$, hence there exists an open immersion of finite type $Y\hookrightarrow{\ol Y}$ with $Z$-projective and, therefore, $X$-proper ${\ol Y}$.
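As a simple illustration of the blow up trick used in the first implication above (a standard example included only for the sake of concreteness), let $X={\rm Spec}(k[x,y])$ be the affine plane over a field $k$ and let $Y=X\setminus\{O\}$ be the complement of the closed point $O$ given by the ideal $(x,y)$. The open immersion $Y\hookrightarrow X$ is not an affine morphism because $Y$ itself is not affine, but if $Z'$ denotes the blow up of $X$ at $O$ then $Z'\setminus Y$ is the exceptional divisor, which is locally principal, hence the open immersion $Y\hookrightarrow Z'$ is affine and $Y\to X$ factors as a composition of an affine morphism of finite type $Y\to Z'$ and a proper morphism $Z'\to X$.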
Thus, Nagata's theorem can be reformulated as follows: a finite type morphism is separated if and only if it can be represented as a composition of an affine morphism of finite type and a proper morphism. Now, one sees that a weak form of Theorem \ref{approxtheor} ($f$ is separated and $Z\to X$ is of finite type) and Nagata's theorem are together equivalent to the following decomposition theorem, which will also be proved in \S\ref{mainsec} by a different method. \begin{theor} \label{decompth} A morphism $f:Y\to X$ of quasi-compact quasi-separated schemes is separated if and only if it can be factored as a composition of an affine morphism $Y\to Z$ and a proper morphism $Z\to X$. \end{theor} \section{Preliminary description of relative RZ spaces and applications} \label{applchap} Throughout \S\ref{applchap}, $f:Y\to X$ denotes a separated morphism between qcqs schemes. \subsection{Valuations and projective limits}\label{firstsec} We are going to recall some notions introduced in \cite[\S3.3]{Temst}. Consider a factorization of $f$ into a composition of a schematically dominant morphism $f_i:Y\to X_i$ and a proper morphism $g_i:X_i\to X$. We call the pair $(f_i,g_i)$ a {\em $Y$-modification} of $X$, and usually it will be denoted simply as $X_i$. Given two $Y$-modifications of $X$, we say that $X_j$ {\em dominates} or {\em refines} $X_i$, if there exists an $X$-morphism $g_{ji}:X_j\to X_i$ compatible with $f_i,f_j,g_i$ and $g_j$. A standard graph argument shows that if $g_{ji}$ exists then it is unique (one uses only that $f_j$ is schematically dominant and $X_i$ is $X$-separated). The family $\{X_i\}_{i\in I}$ of all $Y$-modifications of $X$ is filtered because any two $Y$-modifications $X_i,X_j$ are dominated by the scheme-theoretic image of $Y$ in $X_i\times_X X_j$, and it has an initial object corresponding to the schematic image of $Y$ in $X$. A relative Riemann-Zariski space ${\mathfrak X}={\rm RZ}_Y(X)$ is defined as the projective limit of the underlying topological spaces of $Y$-modifications of $X$. Note that if $X$ is integral and $Y$ is its generic point then one recovers the classical Riemann-Zariski spaces. A slightly more general case, when $Y$ is a dominant point, was considered in \cite[\S1]{Tem1}. Let $\pi_i:{\mathfrak X}\to X_i$ be the projections and $\eta:Y\to{\mathfrak X}$ be the map induced by $f_i$'s. We provide ${\mathfrak X}$ with the sheaf ${\mathcal M}_{\mathfrak X}=\eta_*({\mathcal O}_Y)$, which will be called the sheaf of {\em meromorphic functions}, and with the sheaf ${\mathcal O}_{\mathfrak X}=\injlim\pi_i^{-1}({\mathcal O}_{X_i})$, which will be called the sheaf of {\em regular functions}. The natural homomorphisms ${\alpha}_i:\pi_i^{-1}({\mathcal O}_{X_i})\to{\mathcal M}_{\mathfrak X}$ induce a homomorphism ${\alpha}:{\mathcal O}_{\mathfrak X}\to{\mathcal M}_{\mathfrak X}$, and we will prove later that $\eta$ is injective and ${\alpha}$ is a monomorphism. Actually, we will give in Corollary \ref{lastcor} a rather precise meaning to the claim that ${\mathcal M}_{\mathfrak X}$ is a sheaf of semi-fractions of the sheaf ${\mathcal O}_{\mathfrak X}$. \begin{rem} For any filtered projective family of locally ringed spaces $\{Y_j\}_{j\in J}$ the projective limit ${\mathfrak Y}=\projlim_{j\in J}Y_j$ always exists and satisfies $|{\mathfrak Y}|:=\projlim|Y_j|$ and ${\mathcal O}_{\mathfrak Y}=\injlim \pi_j^{-1}{\mathcal O}_{Y_j}$ where $\pi_j:{\mathfrak Y}\to Y_j$'s are the projections. Assume now that $Y_j$'s are schemes.
Then ${\mathfrak Y}$ is known to be a scheme when the transition morphisms are affine: this situation is studied very extensively in \cite[$\rm IV_3$, \S8]{ega} and the obtained results have plenty of various very important applications. Although ${\mathfrak Y}$ does not have to be a scheme in general, it is a locally ringed space of a rather special form which deserves study. Our relative RZ spaces $({\mathfrak X},{\mathcal O}_{\mathfrak X})$ provide a nice example of such pro-schemes (while ${\mathcal M}_{\mathfrak X}$ corresponds to an extra structure related to $Y$), and we will later obtain a very detailed description of these spaces (e.g. we will describe the stalks of ${\mathcal O}_{\mathfrak X}$). Another interesting example of a pro-scheme which is not a scheme but has a very nice realization is as follows: let $X$ be a scheme with a subset $U$ closed with respect to generalization, then $(U,{\mathcal O}_X|_U)$ is the projective limit of all open neighborhoods of $U$. Note that this locally ringed space does not have to be a scheme: for example, take $U$ to be the set of all non-closed points on an algebraic surface $X$. \end{rem} The classical absolute RZ spaces viewed either as topological spaces or, more generally, as locally ringed spaces admit two alternative descriptions: (a) a projective limit of schemes, (b) a space whose points are valuations. We defined the relative spaces ${\rm RZ}_Y(X)$ using projective limits, but they also admit a "valuative" description as spaces ${\rm Val}_Y(X)$. In \S\ref{applchap} we only introduce the sets ${\rm Val}_Y(X)$ and establish a certain connection between ${\rm RZ}_Y(X)$ and ${\rm Val}_Y(X)$ which suffices for application to the stable modification theorem \ref{stabmodtheorU}. Throughout this paper by a {\em valuation} on a ring $B$ we mean a commutative ordered group $\Gamma$ with a multiplicative map $|\cdot|:B\to\Gamma\cup\{0\}$ which satisfies the strong triangle inequality and sends $1$ to $1$. Recall that if $B$ is a field then $R=\{x\in B\ |\ |x|\le 1\}$ is a valuation ring of $B$ (i.e. ${\rm Frac}(R)=B$) which defines $|\cdot|$ up to an equivalence. In general, a valuation is defined up to an equivalence by its kernel $p$, which is a prime ideal, and by the induced valuation on the residue field ${\rm Frac}(B/p)$. By slight abuse of language, the point of ${\rm Spec}(B)$ given by $p$ will be also called the {\em kernel} of $|\cdot|$. Also, we will often identify equivalent valuations. \begin{rem}\label{semivalrem} We follow R. Huber by using the notion of a valuation. Since these valuations may have a non-empty kernel, a reasonable alternative, however, would be the notion of a semivaluation. Note also that in the literature on abstract algebra this object is often called a Manis valuation. \end{rem} Now, let ${\rm Val}_Y(X)$ be the set of triples ${\bf y}=(y,R,\phi)$, where $y\in Y$ is a point, $R$ is a valuation ring of $k(y)$ (in particular ${\rm Frac}(R)=k(y)$) and $\phi:S={\rm Spec}(R)\to X$ is a morphism compatible with $y={\rm Spec}(k(y))\to Y$ and such that the induced morphism $y\to S\times_X Y$ is a closed immersion. Let ${\mathcal O}_{\bf y}$ denote the preimage of $R$ in ${\mathcal O}_{Y,y}$ (currently, it is just a ring attached to ${\bf y}$). We would like to axiomatize the properties of ${\mathcal O}_{\bf y}$ as follows.
By a {\em semi-valuation ring} we mean a ring ${\mathcal O}$ with a valuation $|\cdot|$ such that any zero divisor of ${\mathcal O}$ lies in the kernel $m={\rm Ker}(|\cdot|)$ and for any pair $g,h\in{\mathcal O}$ with $|g|\le |h|\neq 0$ one has that $h|g$. Two structures of a semi-valuation ring on ${\mathcal O}$ are {\em equivalent} if their valuations are equivalent. Note that ${\mathcal O}$ embeds into $A={\mathcal O}_m$ by our assumption on zero divisors, $mA=m$ because the prime ideal $m$ is $({\mathcal O}\setminus m)$-divisible, and $R={\mathcal O}/m$ is the valuation ring of $A/m$ corresponding to the valuation induced by $|\cdot|$. Therefore, ${\mathcal O}$ is {\em composed} from the local ring $A$ and the valuation ring $R\subset A/m$ in the sense that ${\mathcal O}$ is the preimage of $R$ in $A$. We say that $A$ is a {\em semi-fraction ring} of ${\mathcal O}$. Conversely, any ring composed from a local ring and a valuation ring is easily seen to be a semi-valuation ring. Semi-valuation rings play the same role in the theory of relative RZ spaces as valuation rings do in the theory of usual RZ spaces. \begin{rem}\label{lastrem} (i) The structure of a semi-valuation ring on an abstract local ring ${\mathcal O}$ is uniquely defined (up to an equivalence) by its kernel $m$ because ${\mathcal O}/m$ is a valuation ring and hence defines the valuation. Since $A={\mathcal O}_m$ we obtain that the semi-valuation ring structure on ${\mathcal O}$ is uniquely defined by its embedding into the semi-fraction ring $A$. (ii) An abstract ring ${\mathcal O}$ can admit many semi-valuation ring structures. For example, if ${\mathcal O}$ is a valuation ring then any of its localizations (i.e. a larger valuation ring in its field of fractions) can serve as its semi-fraction ring. \end{rem} Here is a generalization of the classical criterion that an integral domain ${\mathcal O}$ is a valuation ring if and only if for any pair of elements $f,g\in{\mathcal O}$ either $f|g$ or $g|f$. \begin{lem}\label{valcritlem} Let ${\mathcal O}\subset A$ be two rings. Then the following conditions are equivalent: (i) ${\mathcal O}$ admits a structure of a semi-valuation ring such that $A$ is ${\mathcal O}$-isomorphic to the semi-fraction ring of ${\mathcal O}$, (ii) if $f,g\in A$ are co-prime (i.e. $fA+gA=A$) then either $f\in g{\mathcal O}$ or $g\in f{\mathcal O}$. \end{lem} \begin{proof} We should only prove that (ii) implies (i), since the opposite implication is obvious. We claim that $A$ is a local ring. Indeed, if it is not local then $A\setminus A^\times$ is not an ideal, hence there exist non-invertible $f,g$ with invertible $f+g$. But by our assumption either $f\in gA$ or $g\in fA$, hence $f+g$ is contained in a proper ideal equal to either $fA$ or $gA$, which is absurd. Let $m\subset A$ be the maximal ideal, then taking $f\in m$ and $g=1$ and observing that $f$ does not divide $1$ in ${\mathcal O}$ (and even in $A$), we deduce that $f\in{\mathcal O}$. Thus, we proved that $m\subset{\mathcal O}$, in particular, ${\mathcal O}$ is the preimage of the ring ${\mathcal O}/m\subset A/m$ under the surjection $A\to A/m$. It remains to show that ${\mathcal O}/m$ is a valuation ring of $A/m$. For a pair of elements ${\wt f},{\wt g}\in {\mathcal O}/m$ choose liftings $f,g\in{\mathcal O}$. Since either $f|g$ or $g|f$ in ${\mathcal O}$, it follows that either ${\wt f}|{\wt g}$ or ${\wt g}|{\wt f}$. Hence ${\mathcal O}/m$ is a valuation ring, and we are done.
\end{proof} \subsection{RZ space of a decomposable morphism} \label{affsec} Let ${\bf y}=(y,R,\phi)$ be a point of ${\rm Val}_Y(X)$ and let $S={\rm Spec}(R)$. By the valuative criterion of properness, $\phi$ factors uniquely through a morphism $\phi_i:Y\to X_i$ for any $Y$-modification $X_i\to X$. Since $S\times_{X_i}Y$ is a closed subscheme of $S\times_X Y$ by $X$-separatedness of $X_i$, we obtain that $\phi_i$ induces a closed immersion $y\to S\times_{X_i}Y$, and, in particular, $(y,R,\phi_i)$ is an element of ${\rm Val}_Y(X_i)$. It follows that the natural map ${\rm Val}_Y(X_i)\to{\rm Val}_Y(X)$ is a bijection. So, ${\rm RZ}_Y(X)$ and ${\rm Val}_Y(X)$ depend on $X$ and $Y$ only up to replacing $X$ with its $Y$-modification. Now we will construct a map of sets $\psi:{\rm Val}_Y(X)\to{\rm RZ}_Y(X)$. For any $i\in I$, let $x_i\in X_i$ be the {\em center} of $R$ on $X_i$, i.e. the image of the closed point of $S$ under $\phi_i$. Then the family of points $(x_i)$ defines a point ${\bf x}\in{\mathfrak X}$ and we obtain a map $\psi$ as above. For any $i$, $x_i$ is a specialization of $f_i(y)$, hence we obtain a homomorphism ${\mathcal O}_{X_i,x_i}\to{\mathcal O}_{X_i,f_i(y)}\to{\mathcal O}_{Y,y}\to k(y)$ whose image lies in $R$ because $x_i$ is the center of $R$ on $X_i$. Therefore, the image of ${\mathcal O}_{X_i,x_i}$ in ${\mathcal O}_{Y,y}$ lies in ${\mathcal O}_{\bf y}$, and we obtain a natural homomorphism ${\mathcal O}_{{\mathfrak X},{\bf x}}=\injlim{\mathcal O}_{X_i,x_i}\to{\mathcal O}_{\bf y}$. \begin{prop} \label{goodcaseprop} Suppose that $f$ is decomposable. Then any point ${\bf x}\in{\mathfrak X}$ possesses a preimage ${\bf y}={\lambda}({\bf x})$ in ${\rm Val}_Y(X)$ such that the homomorphism ${\mathcal O}_{{\mathfrak X},{\bf x}}\to{\mathcal O}_{\bf y}$ is an isomorphism. In particular, ${\lambda}$ is a section of $\psi$. \end{prop} Actually, we will prove in ,\dots ,\ref{nagchap} that $\psi$ is a bijection (so ${\lambda}$ is its inverse), but the proposition as it is already covers our applications in ,\dots ,\ref{applchap}. \begin{proof} Factor $f$ into a composition of an affine morphism $Y\to Z$ and a proper morphism $Z\to X$. After replacing $X$ with the scheme-theoretic image of $Y$ in $Z$, we can assume that $f$ is affine. Note that then for any $Y$-modification $X_i\to X$, the morphism $f_i:Y\to X_i$ is affine. Let $x_i$ be the image of ${\bf x}$ in $X_i$. Obviously, the schemes $U_i={\rm Spec}({\mathcal O}_{X_i,x_i})\times_{X_i}Y$ are affine. In addition, on the level of sets each $U_i$ consists of points $y\in Y$ such that $x_i$ is a specialization of $f_i(y)$, the morphisms $U_i\to Y$ are topological embeddings and ${\mathcal O}_Y|_{U_i}\widetilde{\to}{\mathcal O}_{U_i}$. Notice that the schemes $U_i={\rm Spec}(B_i)$ form a filtered family, hence $U_\infty:=\projlim U_i={\rm Spec}(B_\infty)$, where $B_\infty=\injlim B_i$. By \cite[$\rm IV_3$, \S8]{ega}, $U_\infty=\cap U_i$ set-theoretically. Since $f_i:Y\to X_i$ is schematically dominant and the latter property is preserved under (possibly infinite) localizations on the base, the morphism $U_i\to{\rm Spec}({\mathcal O}_{X_i,x_i})$ is schematically dominant too. So, for each $i\in I$ we have that ${\mathcal O}_{X_i,x_i}\hookrightarrow B_i$, and then an embedding of the direct limits ${\mathcal O}_{{\mathfrak X},{\bf x}}\hookrightarrow B_\infty$ arises. \begin{lem} Suppose that elements $g,h\in B_\infty$ do not have common zeros on $U_\infty$. 
Then either $g\in h{\mathcal O}_{{\mathfrak X},{\bf x}}$ or $h\in g{\mathcal O}_{{\mathfrak X},{\bf x}}$. \end{lem} \begin{proof} Find $i$ such that $g$ and $h$ are defined and do not have common zeros on $U_i$. Note that $U_i=\cap f^{-1}(V_j)$, where $V_j$ runs over affine neighborhoods of $x_i$. Hence we can choose a neighborhood $X'_i={\rm Spec}(A)$ of $x_i$ such that $g,h\in B$ and $gB+hB=1$, where $Y'={\rm Spec}(B)$ is the preimage of $X'_i$ in $Y$. To ease the notation we will write $X$ and $x$ instead of $X_i$ and $x_i$ (we can freely replace $X$ with $X_i$ because ${\rm RZ}_Y(X)$ remains unchanged). Now, the pair $(g,h)$ induces a morphism ${\alpha}':Y'\to P':={\rm Proj}(A[T_g,T_h])$, whose scheme-theoretic image ${\ol X}'$ is a $Y'$-modification of $X'$. It would suffice to extend the $Y'$-modification ${\alpha}':{\ol X}'\to X'$ to a $Y$-modification ${\alpha}:{\ol X}\to X$. Indeed, either $T_g\in T_h{\mathcal O}_{{\ol X}',x'}$ or $T_h\in T_g{\mathcal O}_{{\ol X}',x'}$, where $x'\in{\ol X}'$ is the image of ${\bf x}$ in ${\ol X}$. So, existence of ${\alpha}$ would imply that $g|h$ or $h|g$ already in the image of ${\mathcal O}_{{\ol X},x'}$ in $B_\infty$, which is by definition contained in ${\mathcal O}_{{\mathfrak X},{\bf x}}$. It can be difficult to extend ${\alpha}'$ (without applying Nagata compactification), but fortunately we can replace ${\ol X}'$ with any its $Y'$-modification ${\ol X}''$ and it suffices to extend ${\ol X}''\to X'$ to a $Y$-modification of $X$. Choose $a,b$ such that $ag+bh=1$. Then there exists a natural morphism $\beta':Y'\to P'':={\rm Proj}(A[T_{ag},T_{ah},T_{bg},T_{bh}])$ which takes $Y'$ to the affine chart on which $T_{ag}+T_{bh}$ is invertible. We define ${\ol X}''$ to be the scheme-theoretic image of $\beta'$. Since $\beta'$ factors through Segre embedding ${\rm Proj}(A[T_g,T_h])\times{\rm Proj}(A[T_a,T_b])\hookrightarrow P''$, we obtain that ${\ol X}''$ is a closed subscheme of the source which is mapped to ${\ol X}'$ by the projection onto the first factor. In particular, ${\ol X}''$ is a $Y'$-modification of ${\ol X}'$. We will show that ${\ol X}''\to X'$ extends to a $Y$-modification ${\ol X}\to X$. Let $E\subset B$ be the $A$-submodule generated by $ag,ah,bg,bh$ and consider the graded algebra $A_E:=\oplus_{n=0}^\infty E^n$, where $E^n$ is the $n$-th power of $E$ in $B$ and $E^0$ is the image of $A$. Note that $1\in E$, and we will denote by $1_E$ the associated $1$-graded element of $A_E$. Set $P:={\rm Proj}(A_E)$ and observe that the affine chart corresponding to $1_E$ is $P_1={\rm Spec}(\cup_{n=0}^\infty E^n)$ (where the union is taken inside of $B$). Clearly, $P$ is a closed subscheme in $P''$ and the morphism $Y\to P''$ factors through $P_1$. In particular, ${\ol X}''$ is the schematical image of $Y\to P$, and the latter coincides with the schematical closure of $P_1$ because the morphism $Y\to P_1$ is schematically dominant by injectivity of the homomorphism $\cup_{n=0}^\infty E^n\to B$. By \cite[6.9.7]{egaI}, $E$ can be extended to a finitely generated ${\mathcal O}_X$-submodule ${\mathcal E}\subset f_*({\mathcal O}_Z)$, and replacing ${\mathcal E}$ by ${\mathcal E}+{\mathcal O}_X$ we achieve in addition that ${\mathcal E}$ contains the image of ${\mathcal O}_X$ in $f_*({\mathcal O}_Y)$. 
Let ${\mathcal E}^n$ be the $n$-th power of ${\mathcal E}$ in the sheaf of ${\mathcal O}_X$-algebras $f_*({\mathcal O}_Y)$ (so, ${\mathcal E}^0$ is the image of ${\mathcal O}_X$) and form the graded ${\mathcal O}_X$-algebra ${\mathcal A}_{\mathcal E}:=\oplus_{n=0}^\infty{\mathcal E}^n$. Then exactly the same computation as was used above shows that the schematical closure of ${\bf Spec}(\cup_{n=0}^\infty{\mathcal E}^n)$ in ${\bf Proj}({\mathcal A}_{\mathcal E})$ is a $Y$-modification of $X$, which we denote ${\ol X}$. Since ${\ol X}\to X$ obviously extends ${\ol X}''\to X'$, we are done. \end{proof} The above lemma combined with Lemma \ref{valcritlem} provides ${\mathcal O}_{{\mathfrak X},{\bf x}}$ with a semi-valuation ring structure such that $B_\infty$ is its semi-fraction ring. In particular, $B_\infty$ is a local ring and so $U_\infty$ possesses a unique closed point $y$. Thus, $B_\infty={\mathcal O}_{Y,y}$, its subring ${\mathcal O}_{{\mathfrak X},{\bf x}}$ contains $m_y$ and $R:={\mathcal O}_{{\mathfrak X},{\bf x}}/m_y$ is a valuation ring of $k(y)$. Define $\phi:S={\rm Spec}(R)\to X$ as the composition of the closed immersion $S\to{\rm Spec}({\mathcal O}_{{\mathfrak X},{\bf x}})$ with the natural morphism ${\rm Spec}({\mathcal O}_{{\mathfrak X},{\bf x}})\to X$. Since ${\mathcal O}_{{\mathfrak X},{\bf x}}$ is composed from ${\mathcal O}_{Y,y}$ and $R$, the triple ${\bf y}:=(y,R,\phi)$ is a candidate for being ${\lambda}({\bf x})$ and it only remains to check that $y\to S\times_XY$ is a closed immersion (and so ${\bf y}$ is indeed an element of ${\rm Val}_Y(X)$). For any $i$, $U_i={\rm Spec}({\mathcal O}_{X_i,x_i})\times_{X_i} Y$ is a closed subscheme of ${\rm Spec}({\mathcal O}_{X_i,x_i})\times_X Y$, hence $U_\infty\widetilde{\to}\projlim_{i\in I}U_i$ is a closed subscheme of $${\rm Spec}({\mathcal O}_{{\mathfrak X},{\bf x}})\times_X Y\widetilde{\to}\projlim_{i\in I}{\rm Spec}({\mathcal O}_{X_i,x_i})\times_X Y$$ Since $y$ is closed in $U_\infty$, we obtain that the morphism $y\to{\rm Spec}({\mathcal O}_{{\mathfrak X},{\bf x}})\times_X Y$ is a closed immersion. Hence the morphism from $y$ to the closed subscheme $S\times_X Y$ of ${\rm Spec}({\mathcal O}_{{\mathfrak X},{\bf x}})\times_X Y$ is a closed immersion too and we are done. \end{proof} \subsection{Applications} \label{applicsec} The preliminary description of relative Riemann-Zariski spaces obtained in the previous section suffices for some applications. Assume we are given a qcqs scheme $S$ with a schematically dense quasi-compact subset $U$ (i.e. any neighborhood of $U$ is schematically dense) which is closed under generalizations. An $S$-scheme $X$ is called {\em $U$-admissible} if the preimage of $U$ in $X$ is schematically dense. By a {\em $U$-\'etale covering} we mean a separated finite type morphism $\phi:S'\to S$ such that $\phi$ is \'etale over $U$, $S'$ is $U$-admissible, and for any valuation ring $R$ any morphism ${\rm Spec}(R)\to S$ taking the generic point to $U$ lifts to a morphism ${\rm Spec}(R')\to S'$ where $R'$ is a valuation ring dominating $R$ and such that ${\rm Frac}(R')/{\rm Frac}(R)$ is finite. (Actually, those are the finite type $h$-covers of $S$ which are \'etale over $U$.) Note that in \cite{BLR} one considers a more restrictive class of coverings, namely $U$-\'etale maps $S'\to S$ which split into a composition of a surjective flat $U$-\'etale morphism and a $U$-modification.
However, it follows from the flattening theorem \cite[5.2.2]{RG} of Raynaud-Gruson that the latter class of coverings is cofinal in ours. In order to make use of Riemann-Zariski spaces we first have to establish some properties of schemes over semi-valuation rings. So, let ${\mathcal O}$ be a semi-valuation ring with semi-fraction ring $A$ and let $m$ be the maximal ideal of $A$. Recall that $A={\mathcal O}_m$, $R:={\mathcal O}/m$ is a valuation ring of $K:=A/m$, the scheme $S={\rm Spec}({\mathcal O})$ is covered by the pro-open subscheme $U={\rm Spec}(A)$ (i.e. $U$ is the intersection of open subschemes) and the closed subscheme $T={\rm Spec}(R)$, and the intersection $U\cap T$ is a single point $\eta={\rm Spec}(K)$, which is the generic point of $T$ and the closed point of $U$. Note that in some sense $S$ is glued from $U$ and $T$ along $\eta$; for example, there is a bi-Cartesian square $$ \xymatrix{ \eta \ar[d]\ar[r] & U\ar[d]\\ T\ar[r] & S} $$ Next we will study how $U$-admissible $S$-schemes (resp. quasi-coherent ${\mathcal O}_S$-modules) can be glued from $T$-schemes and $U$-schemes (resp. modules), and we will call such gluing $(U,T)$-descent. Given a quasi-coherent ${\mathcal O}_S$-module $M$, which we identify with an ${\mathcal O}$-module, set $M_U=M\otimes_{\mathcal O} A$, $M_T=M\otimes_{\mathcal O} R=M/mM$ and $M_\eta=M\otimes_{\mathcal O} K$. We say that $M$ is $U$-admissible if the localization homomorphism $M\to M_U$ is injective. Note that any ${\mathcal O}$-module $M$ defines a descent datum consisting of $M_U,M_T$ and an isomorphism $\phi_M:M_U\otimes_AK\widetilde{\to} M_T\otimes_R K$, and a similar claim holds for $S$-schemes. The corresponding categories of descent data are defined in an obvious way, and, naturally, we have a $(U,T)$-descent lemma below. Slightly more generally, we fix a qcqs $U$-admissible $S$-scheme ${\ol S}$ with ${\ol U}=U\times_S{\ol S}$, ${\ol T}=T\times_S{\ol S}$ and ${\ol\eta}=\eta\times_S{\ol S}$ and we will glue objects defined over ${\ol U}$ and ${\ol T}$ along their restrictions over ${\ol\eta}$. For example, an ${\mathcal O}_{\ol S}$-module ${\mathcal M}$ induces a descent datum $\phi_{\mathcal M}:{\mathcal M}_{\ol U}|_{\ol\eta}\widetilde{\to}{\mathcal M}_{\ol T}|_{\ol\eta}$. If we want to stress the choice of ${\ol S}$ we will call such gluing $({\ol U},{\ol T})$-descent. For an ${\ol S}$-scheme $X$ we will use the notation $X_U=X\times_{\ol S}{\ol U}$ (and so $X_U\widetilde{\to} X\times_SU$), $X_T=X\times_{\ol S}{\ol T}$ and $X_\eta=X\times_{\ol S}{\ol\eta}$. \begin{lem} \label{gluelem} Keep the above notation. (i) The natural functor from the category of $U$-admissible quasi-coherent ${\mathcal O}_{\ol S}$-modules (resp. ${\mathcal O}_{\ol S}$-algebras) ${\mathcal M}$ to the category of descent data $({\mathcal M}_{\ol U},{\mathcal M}_{\ol T},\phi_{\mathcal M})$ with quasi-coherent ${\mathcal O}_{\ol U}$-module (resp. ${\mathcal O}_{\ol U}$-algebra) ${\mathcal M}_{\ol U}$ and quasi-coherent ${\ol\eta}$-admissible ${\mathcal O}_{\ol T}$-module (resp. ${\mathcal O}_{\ol T}$-algebra) ${\mathcal M}_{\ol T}$ is an equivalence of categories. (ii) The $({\ol U},{\ol T})$-descent is effective on ${\ol U}$-flat ${\ol S}$-projective schemes with fixed relatively ample sheaves.
More concretely, assume that we are given a descent datum $((X_U,{\mathcal L}_U),(X_T,{\mathcal L}_T),(\phi_X,\phi_{\mathcal L}))$, where $f_U:X_U\to{\ol U}$ and $f_T:X_T\to{\ol T}$ are projective morphisms with relatively ample invertible modules ${\mathcal L}_U$ and ${\mathcal L}_T$, respectively, $f_U$ is flat, $X_T$ is $\eta$-admissible, $\phi_X:X_U\times_U\eta\widetilde{\to} X_T\times_T\eta$ and $\phi_{\mathcal L}$ is an isomorphism between the restrictions of ${\mathcal L}_U$ and ${\mathcal L}_T$ on the $\eta$-fibers which agrees with $\phi_X$. Then there exists a projective morphism $f:X\to{\ol S}$ with a relatively ample ${\mathcal O}_X$-module ${\mathcal L}$ whose restrictions over ${\ol U}$ and ${\ol T}$ give rise to the above descent datum. (iii) A qcqs $U$-admissible $S$-scheme $X$ is of finite type if and only if $X_U$ and $X_T$ are so. If in addition $X\times_SU\to U$ is flat and finitely presented then $X\to S$ is flat and finitely presented. \end{lem} \begin{proof} The claim of (i) is local on ${\ol S}$, so we can assume that ${\ol S}$ is affine. Then ${\mathcal O}_{\ol S}$, ${\mathcal O}_{\ol U}$ or ${\mathcal O}_{\ol T}$-modules can be viewed simply as ${\mathcal O}$, $A$ or $R$-modules, and this reduces our problem to the case when ${\ol S}=S$. In particular, we will now denote the modules as $M$, $M_T$, etc. Next we note that $mM_U=mM$ because $mA=m$, and hence $M_T=M/mM$ embeds into $M_\eta=M_U/mM_U$. So, $M_T$ is $\eta$-admissible and the embedding $M\hookrightarrow M_U$ identifies $M$ with the preimage of $M_T$ under the projection $M_U\to M_\eta$. In particular, an exact sequence $0\to M\to M_U\oplus M_T\to M_\eta\to 0$ arises. Conversely, given a descent datum as in (i), we can define an ${\mathcal O}$-module $M={\rm Ker}(M_U\oplus M_T\to M_\eta)$, and one easily sees that $M$ is actually the preimage of $M_T\subset M_\eta$ under the projection $M_U\to M_U/mM_U\widetilde{\to} M_\eta$ and hence $M_U$ and $M_T$ are the base changes of this $M$. We constructed maps from ${\mathcal O}_S$-modules to descent data and vice versa, and one immediately sees that these maps extend to functors. Then it is obvious from the above that these functors are equivalences of categories which are inverse to one another. To prove (ii) we find sufficiently large $n$ so that the $n$-th tensor powers of the initial sheaves induce closed immersions $X_U\to{\bf P}((f_U)_*({\mathcal L}^{\otimes n}_U))$ and $X_T\to{\bf P}((f_T)_*({\mathcal L}^{\otimes n}_T))$ into the associated projective fibrations. Moreover, the higher direct images of ${\mathcal L}^{\otimes n}_U$ vanish for large $n$ and then $(f_\eta)_*({\mathcal L}^{\otimes n}_\eta)\widetilde{\to}((f_U)_*({\mathcal L}^{\otimes n}_U))_\eta$ by the theorem on base changes and direct images, see \cite[III.12.9]{Har}. By part (i) the sheaves $(f_U)_*({\mathcal L}^{\otimes n}_U)$ and $(f_T)_*({\mathcal L}^{\otimes n}_T)$ glue along $(f_\eta)_*({\mathcal L}^{\otimes n}_\eta)$ to an ${\mathcal O}_{\ol S}$-module ${\mathcal M}$ and so ${\bf P}:={\bf P}({\mathcal M})$ is glued from ${\bf P}_U:={\bf P}((f_U)_*({\mathcal L}^{\otimes n}_U))$ and ${\bf P}_T:={\bf P}((f_T)_*({\mathcal L}^{\otimes n}_T))$ along ${\bf P}((f_\eta)_*({\mathcal L}^{\otimes n}_\eta))$. In particular, the closed subschemes $X_U\hookrightarrow{\bf P}_U$ and $X_T\hookrightarrow{\bf P}_T$ glue to a closed subscheme $i:X\hookrightarrow{\bf P}$ with a relatively very ample sheaf ${\mathcal K}:=i^*({\mathcal O}_{\bf P}(1))$.
Note that ${\mathcal K}$ is glued from ${\mathcal L}_U^{\otimes n}$ and ${\mathcal L}_T^{\otimes n}$. Finally, the modules ${\mathcal L}_U$ and ${\mathcal L}_T$ glue to an invertible ${\mathcal O}_X$-sheaf ${\mathcal L}$ with ${\mathcal L}^{\otimes n}\widetilde{\to}{\mathcal K}$ by $(X_U,X_T)$-descent of modules, which was established in (i). In particular, ${\mathcal L}$ is relatively ample. The first assertion of (iii) is exactly Step 2 from the proof of \cite[2.5.3]{Temst}. So, let us assume that $X\times_{\ol S}{\ol U}\to{\ol U}$ is flat and finitely presented (in addition to the assumption that $X$ is of finite type over ${\ol S}$ and ${\ol U}$-admissible). The claim is local on $X$ so we can assume that $X={\rm Spec}(C)$ is affine. Note also that $X_T$ is $\eta$-admissible, so $C/mC$ embeds into $(C/mC)\otimes_RK$ and then $C/mC$ is flat and finitely presented over $R$ by \cite[3.5.1]{Temst}. We first deal with finite presentation, so fix an epimorphism $\phi:{\mathcal O}[T]\to C$ with $T=(T_1,\dots , T_k)$ and let us prove that its kernel $I$ is finitely generated. Localizing at $m$ we obtain an epimorphism $\phi\otimes_{\mathcal O} A:A[T]\to B=C_m$ with kernel $J=I_m$. Then $B$ is $A$-flat by our assumption and we claim that this implies that $J\cap m[T]=mJ$. Indeed, if $x$ is contained in $J\cap m[T]$ but not in $mJ$ then it reduces to a non-zero element ${\wt x}$ in the kernel of $J/mJ\to A[T]/m[T]$. However, this kernel is an epimorphic image of ${\rm Tor}_1^{A}(B,m)=0$ and hence ${\wt x}=0$. By finite presentation of $A\to B$ we have that $J=\sum_{i=1}^nf_iB$ and multiplying $f_i$'s by elements of ${\mathcal O}\setminus m$ we can achieve that $f_i\in I$ and so they generate an ideal $I':=\sum_{i=1}^n f_iC\subset I$. Since $m$ is $({\mathcal O}\setminus m)$-divisible, $mJ=mI'\subset I'$ and hence ${\ol I}:=I\cap m[T]\subset mJ\subset I'$. Note that $I/{\ol I}$ is the kernel of $\phi\otimes_{\mathcal O} R:R[T]\to C/mC$, and so is finitely generated over $R[T]$ (and hence over ${\mathcal O}[T]$). Choose any finite set of generators ${\wt g}_1,\dots , {\wt g}_l\in I/{\ol I}$, lift each ${\wt g}_j$ to $g_j\in I$ and consider the ideal $I''=(I',g_1,\dots , g_l)$ in $C[T]$. Then $I''$ contains ${\ol I}$ and $I''/{\ol I}$ contains $I/{\ol I}$, and so $I=I''$ is finitely generated. Finally, let us show that $X$ is $S$-flat. We already know that $X$ is of finite presentation over $S$, therefore the flattening theorem of Raynaud-Gruson \cite[5.2.2]{RG} asserts that $X$ can be flattened by performing a $U$-modification (and even a $U$-admissible blow up) on $S$ and replacing $X$ with its strict transform. However, $S$ has no non-trivial $U$-modifications because $T$ (being the spectrum of a valuation ring) is the only modification of itself. Thus, $X$ has to be $S$-flat and we conclude the proof. \end{proof} In the first version of the paper, the lemma was formulated in a larger (and incorrect) generality, as was pointed out by D. Rydh. So, let us discuss briefly true and false generalizations. \begin{rem} (i) Lemma \ref{gluelem}(i) implies that descent data of the form $X_U\times_U\eta\widetilde{\to} X_T\times_T\eta$ is always effective for $U$-admissible ${\ol S}$-affine schemes. Some examples show that for general $U$-admissible schemes the descent of this type is not effective, though it exists as an algebraic space. More generally, Rydh recently showed in \cite[\S6]{Rydh} that general descent of this type can be made in the category of stacks with quasi-finite diagonal. 
(ii) Also, Rydh observed that the flatness assumption in Lemma \ref{gluelem}(iii) is essential for finite presentation. Without flatness finite presentation can be lost after gluing even in the case when ${\mathcal O}$ is a height two valuation ring composed from DVR's $A$ and $R$ and $X$ is a (non-reduced) closed subscheme in $S$. \end{rem} We assume again that $S$ is a qcqs scheme with a schematically dense quasi-compact subset $U$ which is closed under generalizations. We will prove a stable modification theorem which strengthens its analog from \cite{Temst}, and we refer to the introduction of loc.cit. for terminology. Our strengthening consists in imposing natural restrictions on the base change required in order to construct a stable modification. It is reasonable to expect that in some sense one can preserve the locus $U$ of $S$ over which the given curve is already semi-stable. Since already when $U$ is the generic point of an integral base scheme $S$ one has to allow its finite \'etale coverings (i.e. one has to allow separable alterations rather than modifications), it seems that one cannot hope for something more restrictive than admitting general $U$-\'etale coverings of the base. \begin{theor} \label{stabmodtheorU} Let $(C,D)$ be an $S$-multipointed curve with semi-stable $U$-fibers. Then there exists a $U$-\'etale covering $S'\to S$ such that the curve $(C,D)\times_S S'$ admits a stable $U$-modification. \end{theor} \begin{proof} Step 1. {\it The theorem holds over a semi-valuation ring ${\mathcal O}$.} More concretely, throughout Step 1 we assume that ${\mathcal O}$ is composed from a local ring $(A,m)$ and a valuation ring $R$ of $K=A/m$, $S={\rm Spec}({\mathcal O})$ and $U={\rm Spec}(A)$. Set also $T={\rm Spec}(R)$ and $\eta={\rm Spec}(K)$. By \cite[1.5]{Temst}, the theorem is known in the case of a valuation ring, i.e. the case when $m=0$. Thus, there exists a finite separable extension $K'/K$ with a valuation ring $R'$ lying over $R$ and such that $(C,D)\times_S T'$ admits a stable modification, where $T'={\rm Spec}(R')$. Lift the extension $K'/K$ to a finite \'etale extension of local rings $A'/A$, and let ${\mathcal O}'$ be the semi-valuation ring composed from $A'$ and $R'$. We will show that the stable modification exists over ${\mathcal O}'$, but let us first explain how this concludes the step. Clearly, $R'=\cup R_i$ where the $R_i$'s are finitely generated $R$-subalgebras of $R'$ such that ${\rm Frac}(R_i)=K'$. Therefore ${\mathcal O}'=\cup {\mathcal O}_i$ where ${\mathcal O}_i$ is the preimage of $R_i$ in $A'$. It remains to note that for any $i$ we have that ${\rm Spec}({\mathcal O}_i)\times_SU\widetilde{\to}{\rm Spec}(A')$ is \'etale over $U$, and by approximation the stable modification exists already over some ${\mathcal O}_i$. Now we can work over ${\mathcal O}'$, and to simplify the notation we replace ${\mathcal O}$ by ${\mathcal O}'$, so that $(C_T,D_T):=(C,D)\times_S T$ already admits a stable modification $({\ol C}_T,{\ol D}_T)$. By \cite[1.1]{Temst} there is a canonical $C_T$-ample sheaf on ${\ol C}_T$, namely the sheaf ${\mathcal L}_T:=\omega_{({\ol C}_T,{\ol D}_T)/T}$. Set also ${\mathcal L}_U:=\omega_{(C_U,D_U)/U}$ and note that these sheaves agree over $\eta$ because the formation of $\omega$'s commutes with base changes (see \cite[\S1]{Temst}). By Lemma \ref{gluelem}(ii) applied with ${\ol S}=C$, we can glue $(C_U,{\mathcal L}_U)$ and $({\ol C}_T,{\mathcal L}_T)$ to a $U$-modification ${\ol C}\to C$.
In addition, ${\ol C}$ is flat and finitely presented over $S$ by Lemma \ref{gluelem}(iii). Clearly, the closed subschemes $D_U\hookrightarrow C_U$ and ${\ol D}_T\hookrightarrow{\ol C}_T$ glue to a closed subscheme ${\ol D}\hookrightarrow{\ol C}$, and checking the $S$-fibers we obtain that $({\ol C},{\ol D})$ is a stable $U$-modification of $(C,D)$. Step 2. {\it The general case.} Since $(C,D)$ is semi-stable over an open subscheme of $S$, we can enlarge $U$ to an open schematically dense qcqs subscheme. Note that by noetherian approximation there exists a scheme $S'$ of finite type over ${\bf Z}$ with a morphism $S\to S'$ such that $U$ and $(C,D)$ are induced from a schematically dense open subscheme $U'\hookrightarrow S'$ and a multipointed curve $(C',D')\to S'$. Then it suffices to solve our problem for $S',U'$ and $(C',D')$, so we can assume that $S$ is of finite type over ${\bf Z}$. By \cite[3.3.1]{Temst}, ${\mathfrak S}={\rm RZ}_U(S)$ is a qcqs topological space. For any point ${\bf x}=(y,R,\phi)\in{\mathfrak S}$, set $S_{\bf x}={\rm Spec}({\mathcal O}_{{\mathfrak S},{\bf x}})$, $U_{\bf x}={\rm Spec}({\mathcal O}_{Y,y})$ and $(C_{\bf x},D_{\bf x})=(C,D)\times_S S_{\bf x}$. Since the embedding $U\hookrightarrow S$ is obviously decomposable, Proposition \ref{goodcaseprop} implies that ${\mathcal O}_{{\mathfrak S},{\bf x}}$ is a semi-valuation ring with the semi-fraction ring ${\mathcal O}_{Y,y}$. By Step 1, there exists a $U_{\bf x}$-\'etale covering $S'_{\bf x}\to S_{\bf x}$ such that the $S'_{\bf x}$-multipointed curve $(C_{\bf x},D_{\bf x})\times_{S_{\bf x}}S'_{\bf x}\widetilde{\to} (C,D)\times_S S'_{\bf x}$ admits a stable $U'_{\bf x}$-modification for $U'_{\bf x}=U_{\bf x}\times_{S_{\bf x}}S'_{\bf x}$. Note also that the morphism $S'_{\bf x}\to S_{\bf x}$ is flat and finitely presented by Lemma \ref{gluelem}(iii). Consider the family $\{S_i\}_{i\in I}$ of all $U$-modifications of $S$, and let $x_i$ be the center of ${\bf x}$ on $S_i$. Recall that ${\mathcal O}_{{\mathfrak S},{\bf x}}=\injlim{\mathcal O}_{S_i,x_i}$. By approximation, there exists $i=i({\bf x})$ and a flat finitely presented $U$-\'etale morphism $h_{\bf x}:S'\to S_i$ such that $x_i$ lies in its image and $(C,D)\times_S S'$ admits a stable $U$-modification. By flatness of $h_{\bf x}$, $h_{\bf x}(S')$ is open in $S_i$, and hence its preimage in ${\mathfrak S}$ is an open neighborhood of ${\bf x}$. Note that in the sequel we can replace $i$ by any larger index $k$ simply by replacing $h_{\bf x}$ by its base change with respect to the $U$-modification $S_k\to S_i$. Since ${\mathfrak S}$ is quasi-compact, there exist finitely many points ${\bf x}_j$, $1\le j\le n$ with associated flat morphisms $h_j:S'_j\to S_{i_j}$ so that ${\mathfrak S}$ is covered by the preimages of the sets $h_j(S'_j)$. By the above argument we can enlarge all indices so that $i:=i_1=\dots =i_n$. The open subschemes $h_j(S'_j)\hookrightarrow S_i$ with $1\le j\le n$ cover $S_i$ because their preimages cover ${\mathfrak S}$, and so $S':=\sqcup_{j=1}^n S'_j$ is a flat cover of $S_i$. In particular, $S'$ is a $U$-\'etale covering of $S$ over which $(C,D)$ possesses a stable $U$-modification. \end{proof} A scheme version of the reduced fiber theorem of Bosch-L\"utkebohmert-Raynaud \cite[2.1']{BLR} can be proved completely analogously. \begin{theor} \label{redfibthU} Let $X\to S$ be a schematically dominant finitely presented morphism whose $U$-fibers are geometrically reduced.
Then there exists a $U$-\'etale covering $S'\to S$ and a finite $U$-modification $X'\to X\times_S S'$ such that $X'$ is flat, finitely presented and has reduced geometric fibers over $S'$. \end{theor} \begin{proof} If $S$ is the spectrum of a valuation ring and $U$ is its generic point then the theorem follows from \cite[3.5.5]{Temst} (actually it was the content of Steps 2--4 of loc.cit.). Acting as in Step 1 of the previous proof, we deduce the case when $S$ is the spectrum of a semi-valuation ring and $U$ is the corresponding local scheme. Then it remains to repeat the argument of Step 2. \end{proof} \section{Relative RZ spaces and the decomposition theorem} \label{nagchap} Throughout \S\ref{nagchap}, $f:Y\to X$ is a morphism of schemes and ${\mathfrak X}={\rm Val}_Y(X)$. Later we will also introduce a topological space ${\rm Spa}(Y,X)$ and then we will use the notation ${\ol\gtX}={\rm Spa}(Y,X)$. Sometimes we will consider another morphism of schemes $f':Y'\to X'$ and then ${\mathfrak X}'={\rm Val}_{Y'}(X')$, ${\ol\gtX}'={\rm Spa}(Y',X')$. \subsection{Connection to adic spaces} \label{spasec} Let $A$ be a ring and $B$ be an $A$-algebra. R. Huber considers in \cite{Hub1} the set ${\rm Spv}(B)$ of all equivalence classes of valuations on $B$ and provides it with the weakest topology in which the sets of the form $\{|\ |\in{\rm Spv}(B)\ |\ |a|\le |b|\neq 0\}$ are open for any $a,b\in B$. Huber proves in \cite[2.2]{Hub1} that the resulting topological space is quasi-compact. Furthermore, he considers the quasi-compact subspace ${\rm Spa}(B,A)\subset{\rm Spv}(B)$ consisting of the valuations of $B$ with $|A|\le 1$: see the definition on p. 467 in loc.cit., where one treats $A$ and $B$ as topological rings with the discrete topology (note also that Huber actually considers the case when $A$ is an integrally closed subring of $B$, but this does not really restrict the generality because replacing $A$ by the integral closure of its image in $B$ has no impact on the topological space ${\rm Spa}(B,A)$). Actually, the topological space ${\rm Spa}(B,A)$ has a much finer structure of an adic space but we will not use it. Let us generalize the above paragraph to schemes. Note that a valuation on a ring $A$ is defined by its kernel $x\in{\rm Spec}(A)$ and the induced valuation on $k(x)$. So, by a {\em valuation on} a scheme $Y$ we mean a pair ${\bf y}=(y,R)$, where $y\in Y$ is a point called the {\em kernel} of ${\bf y}$ and $R$ is a valuation ring of $k(y)$. One can define ${\bf y}$ by giving a valuation $|\ |_{\bf y}:{\mathcal O}_{Y,y}\to\Gamma_{\bf y}$ whose kernel is $m_y$. By ${\mathcal O}_{\bf y}$ we denote the subring of ${\mathcal O}_{Y,y}$ given by the condition $|\ |_{\bf y}\le 1$; it is the preimage of $R$ in ${\mathcal O}_{Y,y}$. Note that ${\mathcal O}_{\bf y}$ is a semi-valuation ring with the semi-fraction ring ${\mathcal O}_{Y,y}$. Often it is convenient to describe a valuation locally by choosing an affine neighborhood ${\rm Spec}(A)$ of $y$ and giving a valuation $A\to{\mathcal O}_{Y,y}\to\Gamma_{\bf y}$ on $A$. Furthermore, if $f:Y\to X$ is a morphism of schemes then by an {\em $X$-valuation on $Y$} we mean a valuation ${\bf y}=(y,R)$ provided with a morphism $\phi:S={\rm Spec}(R)\to X$ which is compatible with the natural morphism $\eta={\rm Spec}(k(y))\to X$.
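To illustrate these notions, here is a simple standard example (it is not used in the sequel). Let $Y={\bf A}^2_k={\rm Spec}(k[s,t])$ over a field $k$ and let $y$ be the generic point of the line $\{t=0\}$, so that ${\mathcal O}_{Y,y}=k[s,t]_{(t)}$ and $k(y)=k(s)$. For $R=k[s]_{(s)}\subset k(s)$ we obtain a valuation ${\bf y}=(y,R)$ on $Y$, and ${\mathcal O}_{\bf y}$ is the preimage of $R$ under the reduction map ${\mathcal O}_{Y,y}\to k(s)$, i.e. the rank two valuation ring of $k(s,t)$ composed from the discrete valuation rings ${\mathcal O}_{Y,y}$ and $R$; in particular, it is a semi-valuation ring with semi-fraction ring ${\mathcal O}_{Y,y}$. Taking $X={\rm Spec}(k)$ and the unique morphism $\phi:{\rm Spec}(R)\to X$, the triple $(y,R,\phi)$ becomes an $X$-valuation on $Y$.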
Recall that in the valuative criteria of properness/separatedness one considers commutative diagrams of the form \begin{equation} \label{valdiag} \xymatrix{ \eta \ar[d]\ar[r]^-{i} & Y\ar[d]^-{f}\\ S\ar[r]^-{\phi} & X} \end{equation} where $S={\rm Spec}(R)$ is the spectrum of a valuation ring and $\eta={\rm Spec}(K)$ is its generic point, and studies liftings of $S$ to $Y$. It is easy to see (and will be proved in Lemma \ref{valdiaglem1}) that it suffices to consider only the case when $k(y)\widetilde{\to} K$ for $y=i(\eta)$ in the valuative criteria. In the latter particular case, diagrams of type (\ref{valdiag}) are exactly the diagrams which correspond to $X$-valuations on $Y$. Note also that an $X$-valuation ${\bf y}=(y,R,\phi)$ gives rise to the following finer diagram \begin{equation} \label{twosqdiag} \xymatrix{ \eta\ar[r]\ar[d]& {\rm Spec}({\mathcal O}_{Y,y})\ar[r]\ar[d]& Y\ar[d]\\ S\ar[r]& {\rm Spec}({\mathcal O}_{\bf y})\ar[r]& X} \end{equation} Indeed, the center $x\in X$ of $R$ is a specialization of the image of $y$ and the induced homomorphism ${\mathcal O}_{X,x}\to{\mathcal O}_{Y,y}\to k(y)$ coincides with ${\mathcal O}_{X,x}\to R\to {\rm Frac}(R)\widetilde{\to} k(y)$. Hence these homomorphisms factor through ${\mathcal O}_{\bf y}$ (actually, we have just shown that the left square is co-Cartesian). Let ${\rm Spa}(Y,X)$ denote the set of all isomorphism classes of $X$-valuations on $Y$. We claim that ${\rm Spa}(Y,X)$ depends functorially on $f$. Indeed, given a morphism $f':Y'\to X'$ and a morphism $g:f'\to f$ consisting of a compatible pair of morphisms $g_Y:Y'\to Y$ and $g_X:X'\to X$, there is a natural map ${\rm Spa}(g):{\rm Spa}(Y',X')\to{\rm Spa}(Y,X)$ which to a point $(y',R',\phi')$ associates a point $(y,R,\phi)$, where $y=g_Y(y')$, $R=R'\cap k(y)$ and $\phi$ is defined as follows. The morphism $g_X\circ\phi':{\rm Spec}(R')\to X$ factors through ${\rm Spec}({\mathcal O}_{X,x})$, where $x$ is the image of the closed point of the source, hence we obtain a homomorphism $\alpha:{\mathcal O}_{X,x}\to R'$. Since the morphism ${\rm Spec}(k(y'))\to X$ factors uniquely through ${\rm Spec}(k(y))$, the image of ${\alpha}$ is contained in $R$. So, $g_X\circ\phi'$ factors uniquely through a morphism $\phi:{\rm Spec}(R)\to X$ and the map ${\rm Spa}(g)$ is constructed. If $g_Y$ is an immersion and $g_X$ is separated then ${\rm Spa}(g)$ is injective. Indeed, if a point ${\bf y}=(y,R,\phi)\in{\ol\gtX}:={\rm Spa}(Y,X)$ has a non-empty preimage in ${\ol\gtX}':={\rm Spa}(Y',X')$, then $y\in Y'$ and any preimage of ${\bf y}$ is given by a lifting of $\phi:{\rm Spec}(R)\to X$ to $X'$, which is unique by the valuative criterion of separatedness. Furthermore, we say that ${\ol\gtX}'$ is an {\em affine subset} of ${\ol\gtX}$ if $Y'$ and $X'$ are affine, $g_Y$ is an open immersion and $g_X$ is of finite type. We provide ${\ol\gtX}$ with the weakest topology in which all affine subsets are open. Note that if we are given another morphism between morphisms $h:(Y_1\to X_1)\to(Y\to X)$ with the corresponding map ${\rm Spa}(h):{\ol\gtX}_1\to{\ol\gtX}$, then $Y'_1:=Y'\times_Y Y_1$ is a subscheme in $Y_1$ and $X'_1:=X'\times_X X_1$ is separated over $X_1$, hence ${\ol\gtX}'_1:={\rm Spa}(Y'_1,X'_1)$ embeds into ${\ol\gtX}_1$. \begin{lem}\label{afflem} Let ${\rm Spa}(g):{\ol\gtX}'\to{\ol\gtX}$ and ${\rm Spa}(h):{\ol\gtX}_1\to{\ol\gtX}$ be as above and assume that $g_Y$ is an immersion and $g_X$ is separated. (i) ${\ol\gtX}'_1$ is the preimage of ${\ol\gtX}'$ under ${\rm Spa}(h)$.
(ii) If $X$ and $Y$ are separated, ${\ol\gtX}'$ is an affine subset of ${\ol\gtX}$ and both $X_1$ and $Y_1$ are affine, then ${\ol\gtX}'_1$ is an affine subset of ${\ol\gtX}_1$. In particular, if $X$ and $Y$ are separated then the intersection of affine subsets in ${\ol\gtX}$ is an affine subset. (iii) Affine subsets form a basis of the topology on ${\ol\gtX}$, and if $X$ and $Y$ are qcqs then any intersection of two affine subsets is a finite union of affine subsets. (iv) If $g_Y$ is an open immersion and $g_X$ is of finite type then ${\ol\gtX}'$ is open in ${\ol\gtX}$. (v) The maps ${\rm Spa}(h)$ are continuous. \end{lem} \begin{proof} The first claim is proved by a straightforward check. If $Y$ and $X$ are separated then $Y'\times_Y Y_1$ and $X'\times_X X_1$ are affine, hence (i) implies (ii). Furthermore, in general (i) implies that the intersection of affine subsets in ${\ol\gtX}$ is of the form ${\rm Spa}({\ol Y},{\ol X})$. Since an affine subset in ${\rm Spa}({\ol Y},{\ol X})$ is also an affine subset in ${\ol\gtX}$, to prove (iii) it suffices to show that any space ${\rm Spa}({\ol Y},{\ol X})$ (resp. with qcqs ${\ol X}$ and ${\ol Y}$) is covered by (resp. finitely many) affine subsets. Find open affine (resp. finite) coverings ${\ol X}=\cup{\ol X}_i$ and ${\ol Y}=\cup{\ol Y}_j$ such that each ${\ol Y}_j$ is mapped to some ${\ol X}_{i(j)}$, and note that ${\rm Spa}({\ol Y},{\ol X})$ is the union of the affine subsets ${\rm Spa}({\ol Y}_j,{\ol X}_{i(j)})$. This proves (iii), and the same argument proves (iv). Finally, (v) follows from the fact that the preimage of each affine subset is open due to (i) and (iv). \end{proof} We claim that in the affine case the above topology agrees with the topology defined by Huber. \begin{lem} \label{spahomlem} If $X={\rm Spec}(A)$ and $Y={\rm Spec}(B)$ are affine then the canonical bijection $\phi:{\rm Spa}(Y,X)\to{\rm Spa}(B,A)$ is a homeomorphism. \end{lem} \begin{proof} It follows from the definitions of the topologies that the map is continuous, so we have only to establish openness. Let ${\ol\gtX}'={\rm Spa}({\rm Spec}(C),{\rm Spec}(A'))$ be an affine subset in ${\ol\gtX}$, where ${\rm Spec}(C)$ is an open subscheme of ${\rm Spec}(B)$ and $A'$ is a finitely generated $A$-algebra. It suffices to prove that $\phi({\ol\gtX}')$ is a neighborhood of each point ${\bf z}$ it contains. Replacing $A'$ with its image in $C$ we can assume that it is an $A$-subalgebra of $C$ generated by $h_1,\dots , h_n\in C$. Note that if $\{U_i\}$ is an open covering of ${\rm Spec}(C)$ then the sets ${\rm Spa}(U_i,{\rm Spec}(A'))$ cover ${\ol\gtX}'$. Therefore, shrinking ${\rm Spec}(C)$ we can assume that $C=B_b$ for an element $b\in B$. Then $h_i=b_i/b^m$ with $b_i\in B$ and $m\in{\bf N}$, and $\phi({\ol\gtX}')$ consists of all valuations of $B$ with $|b_i|\le|b^m|\neq 0$ for any $i$. Thus, $\phi({\ol\gtX}')$ is open in ${\rm Spa}(B,A)$, and we are done. \end{proof} Since Huber's spaces ${\rm Spa}(B,A)$ are qcqs, we obtain the following corollary. \begin{cor}\label{qcqscor} If $X$ and $Y$ are qcqs schemes then the space ${\rm Spa}(Y,X)$ is qcqs. \end{cor} Let $B$ be a ring provided with a valuation $|\ |:B\to\Gamma\cup\{0\}$, and let $y\in{\rm Spec}(B)$ be its kernel. We say that a convex subgroup $\Gamma'\subseteq\Gamma$ {\em bounds} $B$ if for any element $b\in B$ there exists an element $h\in\Gamma'$ with $|b|\le h$.
For any such subgroup we can define a valuation $|\ |':B\to\Gamma'$ by the rule $|x|'=|x|$ if $|x|\in\Gamma'$ and $|x|'=0$ otherwise. Obviously, the kernel $y'$ of $|\ |'$ is a specialization of $y$. Recall that $|\ |'$ is called a {\em primary specialization} of $|\ |$, see \cite[2.3]{Hub1}. Here are simple properties of primary specializations. \begin{rem}\label{primrem} (i) Primary specialization is a transitive operation and the set $P$ of primary specializations of $|\ |$ is ordered. (ii) The set $P$ possesses a minimal element corresponding to the intersection of all subgroups bounding $B$; it is called the {\em minimal primary specialization}. (iii) A valuation on $B$ is called {\em minimal} if it has no non-trivial primary specializations. For a valuation given by a point $y\in{\rm Spec}(B)$ and a valuation ring $R\subset k(y)$ the following conditions are equivalent: (a) $(y,R)$ is minimal; (b) $k(y)$ is generated by $R$ and the image of $B$; (c) the morphism ${\rm Spec}(k(y))\to{\rm Spec}(R)\times{\rm Spec}(B)$ is a closed immersion. (iv) Let $|\ |:B\to\Gamma\cup\{0\}$ be a valuation with kernel $y$, $\Gamma'\subseteq\Gamma$ be a convex subgroup, and $R\subseteq R'$ be the valuation rings of $k(y)$ corresponding to the induced valuations $k(y)\to\Gamma$ and $k(y)\to\Gamma\to\Gamma/\Gamma'$. Then the following conditions are equivalent: (a) there exists a primary specialization $|\ |'$ corresponding to $\Gamma'$; (b) the image of $B$ in $k(y)$ is contained in $R'$; (c) the morphism $y\to{\rm Spec}(B)$ extends to a morphism ${\rm Spec}(R')\to{\rm Spec}(B)$. Moreover, if the conditions are satisfied then the kernel $y'$ of $|\ |'$ is the center of $R'$ on ${\rm Spec}(B)$. The equivalences (a)$\Leftrightarrow$(b) and (b)$\Leftrightarrow$(c) are obvious. As for the additional claim, we note that the center of $R'$ corresponds to the kernel of the homomorphism $B\to R'\to R'/m_{R'}$, and the latter consists of the elements $b\in B$ with $|b|\notin\Gamma'$, i.e. coincides with the kernel of $|\ |'$. \end{rem} Let, more generally, ${\bf y}=(y,R)$ be a valuation on a scheme $Y$. By a {\em primary specialization} of ${\bf y}$ we mean a valuation ${\ol\bfy}=({\ol y},{\ol R})$ such that ${\ol y}$ is a specialization of $y$ and the valuation $|\ |_{\ol\bfy}$ on ${\mathcal O}_{Y,{\ol y}}$ is a primary specialization of the valuation induced from ${\bf y}$ via the homomorphism ${\mathcal O}_{Y,{\ol y}}\to{\mathcal O}_{Y,y}$. Equivalently, if ${\rm Spec}(A)$ is an affine neighborhood of ${\ol y}$ (and hence of $y$) then the valuation induced by $({\ol y},{\ol R})$ on $A$ is a primary specialization of the valuation induced by $(y,R)$. \begin{lem}\label{primlem} Let $(y,R)$ be a valuation on a separated scheme $Y$. (i) The set of primary specializations of $(y,R)$ is totally ordered by specialization; (ii) If $Y$ is also quasi-compact then $(y,R)$ admits a minimal primary specialization. \end{lem} \begin{proof} We claim that (i) follows from Remark \ref{primrem}(iv). Indeed, for any $R'$ with $R\subseteq R'\subseteq k(y)$ there exists at most one way to extend $y$ to a morphism ${\rm Spec}(R')\to Y$. So, if we have two primary specializations $(y_1,R_1)$ and $(y_2,R_2)$ corresponding to valuation rings $R\subseteq R',R''\subseteq k(y)$, then without loss of generality we have that $R'\subseteq R''$ and the unique morphism ${\rm Spec}(R'')\to Y$ is obtained by localizing the morphism ${\rm Spec}(R')\to Y$.
Thus, $y_1$ is a specialization of $y_2$, and everything reduces to the affine theory of primary specializations on ${\mathcal O}_{Y,y_1}$, see Remark \ref{primrem}(i). To prove (ii) we note that $(y,R)$ admits a minimal primary specialization because if $\{(y_i,R_i)\}_{i\in I}$ denotes the set of all primary specializations then the set of kernels $\{y_i\}_{i\in I}$ is totally ordered with respect to specialization. By quasi-compactness there exists a point ${\ol y}\in Y$ which is a specialization of all $y_i$'s. So, the claim reduces to the affine theory on ${\mathcal O}_{Y,{\ol y}}$, see Remark \ref{primrem}(ii). \end{proof} Finally, taking a morphism $f:Y\to X$ into account, by a {\em primary specialization} of an $X$-valuation ${\bf y}=(y,R,\phi)$ we mean an $X$-valuation ${\ol\bfy}=({\ol y},{\ol R},{\ol\phi})$ such that $({\ol y},{\ol R})$ is a primary specialization of $(y,R)$ and the image of ${\ol\phi}$ in $X$ is contained in the image of $\phi$ in $X$. Primary specialization is a particular case of a specialization relation in ${\rm Spa}(Y,X)$. An $X$-valuation $(y,R,\phi)$ (resp. a valuation $(y,R)$) on $Y$ is called {\em minimal} if it has no non-trivial primary specializations. \begin{lem}\label{primspecrem} Let $(y,R,\phi)$ be an $X$-valuation on $Y$. Then any primary specialization $({\ol y},{\ol R})$ of the valuation $(y,R)$ admits at most one extension to a primary specialization $({\ol y},{\ol R},{\ol\phi})$ of $(y,R,\phi)$, and the extension exists if and only if $f({\ol y})$ belongs to the image of $\phi$. The latter is automatically the case when $X$ is separated. \end{lem} \begin{proof} Obviously, the assertion on $f({\ol y})$ is necessary for an extension to exist. Furthermore, by Remark \ref{primrem}(iv) there exists a valuation ring $R'$ with $R\subseteq R'\subseteq k(y)$ such that $y$ extends to a morphism ${\rm Spec}(R')\to Y$ with ${\ol y}$ being the image of the closed point. If $X$ is separated then the induced map ${\rm Spec}(R')\to X$ must coincide with the corresponding localization of $\phi:{\rm Spec}(R)\to X$, hence we obtain the last assertion of the lemma. The remaining claims are local at the center $x\in X$ of $R$ (i.e. the image under $\phi$ of the closed point). So, we can replace $X$ and $Y$ with a neighborhood of $x$ and its preimage achieving that the schemes become separated. The uniqueness is now clear. To establish existence we should check that the image of the homomorphism ${\mathcal O}_{X,x}\to{\mathcal O}_{Y,{\ol y}}\to k({\ol y})$ is in ${\ol R}$. The latter follows from the following two facts: (a) by existence of $\phi$ the image of ${\mathcal O}_{X,x}$ in $k(y)$ is in $R$, (b) ${\ol R}$ is induced from $R$ in the sense that an element ${\ol f}\in{\mathcal O}_{Y,{\ol y}}$ satisfies ${\ol f}({\ol y})\in{\ol R}$ if and only if ${\ol f}(y)\in R$. \end{proof} The lemma shows that we can actually ignore $\phi$ when $X$ is separated. In particular, minimality of $(y,R,\phi)$ is then equivalent to that of $(y,R)$. \begin{cor} \label{minvallem} Let $f:Y\to X$ be a separated morphism of qcqs schemes and ${\bf y}=(y,R,\phi)$ be an $X$-valuation on $Y$. Then (i) the set of primary specializations of ${\bf y}$ is totally ordered and contains a minimal element, and (ii) ${\bf y}$ is minimal if and only if the morphism $h:{\rm Spec}(k(y))\to Y\times_X{\rm Spec}(R)$ is a closed immersion. \end{cor} \begin{proof} The claim is local at the center $x\in X$ of $R$ (with respect to $\phi$), hence we can assume that $X$ and, hence, $Y$ are separated.
Then primary specializations of ${\bf y}$ can be identified with primary specializations of the valuation $(y,R)$, hence (i) follows from Lemma \ref{primlem}. To prove (ii) we note that as soon as $X$ is separated, $h$ is a closed immersion if and only if the morphism ${\rm Spec}(k(y))\to Y\times{\rm Spec}(R)$ is a closed immersion. Hence the claim follows from Remark \ref{primrem}(iii). \end{proof} Until the end of \S\ref{nagchap}, we assume that $f:Y\to X$ is a separated morphism of qcqs schemes, unless the contrary is said explicitly. We define the subset ${\mathfrak X}={\rm Val}_Y(X)\subset{\ol\gtX}$ as the set of all minimal valuations and note that in view of Lemma \ref{minvallem} this agrees with the case studied in \S\ref{affsec}. We do not introduce ${\rm Val}_Y(X)$ when $f$ is not separated: although the formal definition makes sense, it is not clear if the obtained object is interesting. Note also that in the affine situation such subsets were considered by Huber, see \cite[2.6 and 2.7]{Hub1}. We provide ${\mathfrak X}$ with the induced topology. The following lemma follows easily from the valuative criterion of properness and Lemma \ref{afflem}. \begin{lem} \label{easylem} (i) If $X'$ is a $Y$-modification of $X$ then there are natural homeomorphisms ${\rm Spa}(Y,X')\widetilde{\to}{\rm Spa}(Y,X)$ and ${\rm Val}_Y(X')\widetilde{\to}{\rm Val}_Y(X)$. (ii) If $X'$ is an open subscheme of $X$ then its preimage in ${\rm Val}_Y(X)$ (resp. ${\rm Spa}(Y,X)$) is canonically homeomorphic to ${\rm Val}_{Y'}(X')$ (resp. ${\rm Spa}(Y',X')$), where $Y'=X'\times_X Y$. \end{lem} \begin{rem}\label{basarem} (i) If $f':Y'\to X'$ and $f:Y\to X$ are separated morphisms of qcqs schemes, and $g:f'\to f$ is a morphism such that $g_Y$ is an open immersion and $g_X$ is separated and of finite type, then ${\rm Spa}(Y',X')$ maps homeomorphically onto an open subspace of ${\ol\gtX}$. However, it may (and usually does) happen that the image of ${\rm Val}_{Y'}(X')$ in ${\ol\gtX}$ is not contained in ${\mathfrak X}$. The problem originates from the fact that a minimal valuation on $Y'$ may admit non-trivial primary specializations on $Y$. (ii) There exists a natural contraction $\pi_{\mathfrak X}:{\ol\gtX}\to{\mathfrak X}$ which maps any valuation to its minimal primary specialization, but it is a difficult fact that $\pi_{\mathfrak X}$ is continuous. (iii) Using $\pi_{\mathfrak X}$ we can extend ${\rm Val}$ to a functor by composing ${\rm Spa}(g)$ with the contraction $\pi_{\mathfrak X}$ as ${\rm Val}(g):{\mathfrak X}'\hookrightarrow{\ol\gtX}'\to{\ol\gtX}\to{\mathfrak X}$. However, we do not know that it is continuous until continuity of $\pi_{\mathfrak X}$ is established. \end{rem} Actually, the above problems are closely related, and we will solve them only at the end of \S\ref{affinoidsec}. Recall that if $X$ and $Y$ are qcqs then so are ${\rm Spa}(Y,X)$ and ${\rm RZ}_Y(X)$ (by Corollary \ref{qcqscor} and \cite[3.3.1]{Temst}). Here is a partial (so far) result for ${\rm Val}_Y(X)$. \begin{prop} \label{qcprop} Assume that $f:Y\to X$ is a separated morphism of qcqs schemes. Then the space ${\rm Val}_Y(X)$ is quasi-compact and the map $\psi:{\rm Val}_Y(X)\to{\rm RZ}_Y(X)$ is continuous. \end{prop} \begin{proof} Let $\{{\mathfrak X}_i\}_{i\in I}$ be an open covering of ${\mathfrak X}$. Find open sets ${\ol\gtX}_i\subset{\ol\gtX}$ such that ${\mathfrak X}_i={\ol\gtX}_i\cap{\mathfrak X}$.
Since any point of ${\ol\gtX}$ has a specialization in ${\mathfrak X}$ by Corollary \ref{minvallem}, $\{{\ol\gtX}_i\}_{i\in I}$ is a covering of ${\ol\gtX}$. By quasi-compactness of ${\ol\gtX}$, we can find a subcovering $\{{\ol\gtX}_i\}_{i\in J}$ with a finite $J$, and then $\{{\mathfrak X}_i\}_{i\in J}$ is a finite covering of ${\mathfrak X}$. Thus, ${\mathfrak X}$ is quasi-compact. We claim that for any $Y$-modification $X'\to X$, the map $\phi:{\mathfrak X}\to X'$ is continuous. Indeed, if $U\subset X'$ is open then its preimage in ${\ol\gtX}$ is the open subspace ${\ol\gtX}'\widetilde{\to}{\rm Spa}(Y\times_{X'} U,U)$. Therefore, the preimage of $U$ in ${\mathfrak X}$ is the open set ${\ol\gtX}'\cap{\mathfrak X}$, as required. Continuity of the maps $\phi$ (for each $X'$) implies that the map $\psi:{\mathfrak X}\to{\rm RZ}_Y(X)$ is continuous. \end{proof} \subsection{Valuative criteria}\label{valcritsec} In the sequel, we will need to strengthen the classical valuative criteria of separatedness and properness, see \cite[$\rm II$, 7.2.3 and 7.3.8]{ega}. Our aim is to show that it suffices to consider valuative diagrams of specific types. We say that a morphism is compatible with a commutative diagram if the diagram remains commutative after adjoining this morphism. Throughout \S\ref{valcritsec}, $f$ is not assumed to be separated. \begin{lem} \label{valdiaglem1} Keep the notation of diagram (\ref{valdiag}) and set $K'=k(i(\eta))$ and $R'=R\cap K'$. Then diagram (\ref{valdiag}) completes uniquely to a commutative diagram \begin{equation*} \xymatrix{ {\rm Spec}(K) \ar[d]\ar[r] & {\rm Spec}(K') \ar[d]\ar[r] & Y\ar[d]\\ {\rm Spec}(R)\ar[r] & {\rm Spec}(R')\ar[r] & X} \end{equation*} and any morphism $h:{\rm Spec}(R)\to Y$ compatible with the above diagram is induced from a morphism $h':{\rm Spec}(R')\to Y$ compatible with the diagram. In addition, $h$ determines $h'$ uniquely. \end{lem} \begin{proof} The morphism ${\rm Spec}(K)\to Y$ obviously factors through ${\rm Spec}(K')$. The morphism ${\rm Spec}(R)\to X$ factors through ${\rm Spec}({\mathcal O}_{X,x})$, where $x$ is the image of the closed point of ${\rm Spec}(R)$. The image of ${\mathcal O}_{X,x}$ in $R\subset K$ is contained in $K'$, hence the morphism ${\rm Spec}(R)\to X$ factors through ${\rm Spec}(R')$. By the same reasoning, a morphism $h:{\rm Spec}(R)\to Y$ compatible with the diagram factors through $h':{\rm Spec}(R')\to Y$, and both are determined uniquely by the image of the closed point in $Y$. \end{proof} \begin{lem} \label{valdiaglem2} Keep the notation of diagram (\ref{valdiag}) and assume that $R\subseteq R'\subseteq K$ is such that the morphism ${\rm Spec}(R')\to X$ admits a lifting $g:{\rm Spec}(R')\to Y$ compatible with the diagram. Let ${\wt K}$ be the residue field of $R'$ and ${\wt R}$ be the image of $R$ in ${\wt K}$. \begin{equation*} \xymatrix{ & {\rm Spec}(K) \ar[d]\ar[r] & Y\ar[dd]^f\\ {\rm Spec}({\wt K})\ar[r]\ar[d] & {\rm Spec}(R')\ar[ru]^g\ar[d] & \\ {\rm Spec}({\wt R})\ar[r] & {\rm Spec}(R)\ar[r] & X} \end{equation*} Then any morphism ${\wt h}:{\rm Spec}({\wt R})\to Y$ compatible with the above diagram is induced from a morphism $h:{\rm Spec}(R)\to Y$ compatible with the diagram and ${\wt h}$ determines $h$ uniquely. \end{lem} \begin{proof} Consider a morphism ${\wt h}:{\rm Spec}({\wt R})\to Y$ compatible with the diagram. It suffices to show that it factors through ${\rm Spec}(R)$, since the uniqueness is again trivial.
Let $y$ be the image of the closed point of ${\rm Spec}({\wt R})$, so ${\wt h}$ induces a homomorphism ${\mathcal O}_{Y,y}\to{\wt R}$. Since $y$ is a specialization of the image $y'$ of the closed point of ${\rm Spec}(R')$, we have also a homomorphism ${\mathcal O}_{Y,y}\to{\mathcal O}_{Y,y'}\to R'$. Then the compatibility implies that the image of ${\mathcal O}_{Y,y}$ in ${\wt K}=R'/m_{R'}$ lies in ${\wt R}$. Therefore, the image of ${\mathcal O}_{Y,y}$ in $R'$ lies in $R$ which is the preimage of ${\wt R}$ under $R'\to{\wt K}$, and we obtain that the homomorphism ${\mathcal O}_{Y,y}\to {\wt R}$ factors through $R$. It gives the desired morphism $h:{\rm Spec}(R)\to Y$. \end{proof} Note that Lemma \ref{valdiaglem1} implies that it suffices to consider only the case when $k(i(\eta))\widetilde{\to} K$ in the valuative criteria (i.e. it suffices to take valuative diagrams corresponding to the elements of ${\rm Spa}(Y,X)$), and then Lemma \ref{valdiaglem2} and Remark \ref{primrem}(iv) imply that it even suffices to consider only the valuative diagrams corresponding to the elements of ${\rm Val}_Y(X)$. It is also well known that in the valuative criteria one can restrict to the case when the image of $\eta$ lies in a given dense subset which is closed under generalization (e.g. the generic point of an irreducible scheme), and such strengthening is the main point of the following proposition. \begin{prop} \label{valcritprop} Assume that $h:Z\to Y$ and $f:Y\to X$ are morphisms of qcqs schemes and consider the natural map ${\ol\psi}:{\rm Spa}(Z,Y)\to{\rm Spa}(Z,X)$. (i) $f$ is separated if and only if ${\ol\psi}$ is injective. (ii) Assume that $f$ is of finite type. Then $f$ is proper if and only if ${\ol\psi}$ is bijective. (iii) If $f$ and $h$ are separated then ${\ol\psi}$ induces a map $\psi:{\rm Val}_Z(Y)\to{\rm Val}_Z(X)$, and ${\ol\psi}$ is bijective if and only if $\psi$ is bijective. \end{prop} \begin{proof} First we prove (iii). If ${\bf z}=(z,R,\phi_Y)$ is a point in ${\rm Val}_Z(Y)$ then the morphism $z\to{\rm Spec}(R)\times_YZ$ is a closed immersion. But the target is a closed subscheme in ${\rm Spec}(R)\times_XZ$ by separatedness of $f$, and hence ${\ol\psi}({\bf z})$ is also a minimal valuation. Thus, ${\ol\psi}$ induces a map $\psi$ between the subsets ${\rm Val}$. Next we relate the fibers of $\psi$ and ${\ol\psi}$. Consider any point ${\bf z}\in{\rm Spa}(Z,X)$ and let ${\bf z}_0\in{\rm Val}_Z(X)$ be its minimal primary specialization. Then Lemma \ref{valdiaglem2} implies that the sets ${\ol\psi}^{-1}({\bf z})$ and $\psi^{-1}({\bf z}_0)$ are naturally bijective, and this proves (iii). We will deal with (i) and (ii) simultaneously. The direct implications follow from the standard valuative criteria. We will prove the opposite implications (which are refined valuative criteria) by contraposition. So, suppose that $f$ is not separated in (i), or of finite type, separated and not proper in (ii) (if $f$ is not separated in (ii) then ${\ol\psi}$ cannot be bijective by (i)). By the standard valuative criterion and Lemma \ref{valdiaglem1}, there exists an element ${\bf y}=(y,R_y,\phi_y)\in{\rm Spa}(Y,X)$ such that the number of liftings of the morphism $\phi_y:{\rm Spec}(R_y)\to X$ to $Y$ is at least two in (i) or zero in (ii). Let $x$ denote the center of $R_y$ on $X$. By \cite[6.6.5]{egaI}, there exists a point $z\in Z$ for which $h(z)$ is a generalization of $y$, and so a homomorphism ${\mathcal O}_{Y,y}\to{\mathcal O}_{Z,z}\to k(z)$ arises.
Let $R'$ be any valuation ring of $k(z)$ which dominates the image of ${\mathcal O}_{Y,y}$. It gives rise to an element $(z,R',\phi')\in{\rm Spa}(Z,Y)$ centered on $y$. Choose a valuation ring ${\wt R}$ of the residue field ${\wt K}$ of $R'$ such that ${\wt R}$ dominates the valuation ring $R_y$ of $k(y)\subset{\wt K}$, and define a valuation ring $R$ of $k(z)$ as the composition of $R'$ and ${\wt R}$. The compatible homomorphisms ${\mathcal O}_{X,x}\to{\mathcal O}_{Y,y}\to R'$ and ${\mathcal O}_{X,x}\to R_y\to{\wt R}$ induce a homomorphism ${\mathcal O}_{X,x}\to R$, and we obtain the following commutative diagrams. \begin{equation*} \xymatrix{ & {\rm Spec}(k(z)) \ar[d]\ar[r] & Z\ar[d] & \\ {\rm Spec}({\wt K})\ar[r]\ar[d] & {\rm Spec}(R')\ar[r]^{\phi'}\ar[d] & Y\ar[d] & {\rm Spec}({\wt K})\ar[r]\ar[d] & {\rm Spec}(k(y))\ar[r]\ar[d] & Y\ar[d] \\ {\rm Spec}({\wt R})\ar[r] & {\rm Spec}(R)\ar[r]^{\phi_x} & X & {\rm Spec}({\wt R})\ar[r] & {\rm Spec}(R_y)\ar[r] & X} \end{equation*} Lemma \ref{valdiaglem1} implies that there is a one-to-one correspondence between morphisms ${\rm Spec}(R_y)\to Y$ and ${\rm Spec}({\wt R})\to Y$ compatible with the right diagram, and by Lemma \ref{valdiaglem2}, the latter morphisms are in one-to-one correspondence with the morphisms $\phi:{\rm Spec}(R)\to Y$ compatible with the left diagram. So, there are at least two such $\phi$'s in (i) and there is no such $\phi$ in (ii). Note that ${\bf z}=(z,R,\phi_x)$ is an element in ${\rm Spa}(Z,X)$, and any morphism $\phi$ as above gives a preimage of ${\bf z}$ in ${\rm Spa}(Z,Y)$. We obtain that in the case (i), ${\bf z}$ has at least two preimages and so ${\ol\psi}$ is not injective. The same argument would prove (ii) if we also knew that, conversely, any preimage of ${\bf z}$ in ${\rm Spa}(Z,Y)$ comes from $\phi$ as above. In other words, we want to show that any lift of $\phi_x$ to ${\wt\phi}:{\rm Spec}(R)\to Y$ is compatible with the whole left diagram, and this actually reduces to compatibility of ${\wt\phi}$ with $\phi'$. Note that $Y\to X$ is separated by the already established case (i), and the valuative criterion of separatedness implies that the morphism $\phi'$ is uniquely determined by the morphisms ${\rm Spec}(k(z))\to Y$ and ${\rm Spec}(R')\to X$. So, compatibility of ${\wt\phi}$ with $\phi'$ is automatic. \end{proof} \subsection{Affinoid domains} \label{affinoidsec} Let $f':Y'\to X'$ be another separated morphism of qcqs schemes and $g:f'\to f$ be a morphism. Recall that we defined in \S\ref{spasec} a continuous map ${\rm Spa}(g):{\ol\gtX}'\to{\ol\gtX}$ which was shown to be injective if $g_Y$ is an immersion and $g_X$ is separated. However, our definition of a map ${\rm Val}(g):{\mathfrak X}'\to{\mathfrak X}$ was rather cumbersome because even if ${\rm Spa}(g)$ is injective, it does not have to respect the subspaces ${\rm Val}$ in the spaces ${\rm Spa}$. The following proposition gives a criterion for when ${\rm Spa}(g)$ does respect the ${\rm Val}$'s. \begin{prop} \label{quasi-domprop} Suppose that $g_Y$ is an open immersion and $g_X$ is separated. Then ${\rm Spa}(g)({\mathfrak X}')\subset{\mathfrak X}$ if and only if the locally closed immersion $(g_Y,f'):Y'\to Y\times_X X'$ is a closed immersion, in which case one actually has that ${\mathfrak X}'={\rm Spa}(g)^{-1}({\mathfrak X})$. \end{prop} \begin{proof} Suppose that $h:=(g_Y,f')$ is a closed immersion.
Let ${\bf y}'=(y',R',\phi')\in{\ol\gtX}'$ be a point with $\eta'={\rm Spec}(k(y'))$ and $S'={\rm Spec}(R')$, and let ${\bf y}=(y,R,\phi)$ be its image in ${\ol\gtX}$. By Lemma \ref{minvallem}(ii), ${\bf y}'$ is minimal if and only if the natural morphism $\eta'\to Y'\times_{X'}S'$ is a closed immersion. By closedness of $h$, the latter happens if and only if the composition morphism $\eta'\to Y'\times_{X'}S'\to(Y\times_X X')\times_{X'}S'\widetilde{\to} Y\times_X S'$ is a closed immersion. The latter happens if and only if ${\bf y}$ is minimal because $k(y)\widetilde{\to} k(y')$ and hence $R=R'\cap k(y)=R'$. Thus, under our assumption on $h$, minimality of ${\bf y}'$ is equivalent to minimality of its image. This establishes the inverse implication in the proposition, as well as the additional claim. It remains to show that if $h$ is not a closed immersion then ${\rm Spa}(g)$ does not respect the subsets ${\rm Val}$. Note that $h$ is a locally closed immersion because $g_Y$ is an open immersion, and assume that $h$ is not a closed immersion. Set $Z=Y\times_X X'$ and find a $Z$-valuation ${\bf y}'=(y',R',\phi')$ on $Y'$ such that the morphism $\phi':{\rm Spec}(R')\to Z$ cannot be lifted to a morphism ${\rm Spec}(R')\to Y'$. Replacing ${\bf y}'$ by its minimal primary specialization, we achieve that ${\bf y}'$ is minimal and $R'\subsetneq k(y')$. Clearly ${\bf y}'$ defines an $X'$-valuation ${\bf y}=(y',R',\phi)$ on $Y'$ with $\phi={\rm pr}_{X'}\circ\phi'$, and ${\bf y}$ is minimal because any of its non-trivial primary specializations corresponds to a lifting ${\rm Spec}(R'')\to Y'$ for some $R'\subseteq R''\varsubsetneq k(y)$ and such a lifting would induce a lifting ${\rm Spec}(R'')\to Z$ corresponding to a non-trivial primary specialization of ${\bf y}'$. Thus, ${\bf y}\in{\mathfrak X}'$, but ${\rm Spa}(g)({\bf y})$ is not a minimal $X$-valuation on $Y$ because the morphism ${\rm Spec}(R')\to X$ lifts to the morphism ${\rm pr}_Y\circ\phi':{\rm Spec}(R')\to Y$. \end{proof} Let us assume that $g_Y$ is an open immersion and $g_X$ is separated and of finite type. We saw that if $h$ is a closed immersion then ${\mathfrak X}'$ is naturally identified with a quasi-compact open subset of ${\mathfrak X}$ via ${\rm Spa}(g)$, and we say in this case that ${\mathfrak X}'$ is an {\em open subdomain} of ${\mathfrak X}$. If, in addition, $X'$ and $Y'$ can be chosen to be affine then we say that ${\mathfrak X}'$ is an {\em affinoid subdomain} of ${\mathfrak X}$. Note also that the situation described in the proposition appears in Deligne's proof of the Nagata compactification theorem under the name of quasi-domination. (Recall that by a quasi-domination of $Y$ over $X'$ one means an open subscheme $Y'\subset Y$ and a morphism $Y'\to X'$ such that the morphism $Y'\to Y\times_X X'$ is a closed immersion, see \cite[\S2]{Con}.) The notion of quasi-domination plays a central role in Deligne's proof. We list simple properties of open and affinoid subdomains in the following lemma and stress that it will be much more difficult to prove that open subdomains are preserved under taking finite unions (in a sense, it is a typical situation in algebraic geometry that preimages, intersections, projective limits, etc., are much easier to study than pushouts, images, direct limits, etc.). \begin{lem} \label{domlem} Open subdomains are transitive and are preserved by finite intersections.
Moreover, the intersection of open subdomains ${\rm Val}_{Y_i}(X_i)$ with $i\in\{1,2\}$ is the open subdomain ${\rm Val}_{Y_1\cap Y_2}(X_1\times_X X_2)$. In particular, if $X$ is separated and the ${\mathfrak X}_i$'s are affinoid then ${\mathfrak X}_{12}:={\mathfrak X}_1\cap{\mathfrak X}_2$ is affinoid. \end{lem} \begin{proof} This follows from the analogous Lemma \ref{afflem} concerning the spaces ${\rm Spa}$. \end{proof} The following remark will not be used in the sequel. \begin{rem} (i) Our definition of RZ spaces is a straightforward generalization of the classical one. It is also possible to define RZ spaces directly as follows: an affinoid space is a topological space ${\mathfrak X}={\rm Val}_B(A)$ provided with two sheaves of rings ${\mathcal O}_{\mathfrak X}\subset{\mathcal M}_{\mathfrak X}$ (which can be defined in a natural way), and general spaces are pasted from affinoid ones along affinoid subdomains. (ii) The following example illustrates a difference between adic and Riemann-Zariski spaces. Let $k$ be a field, let $A=B=A'=k[T]$, $B'=k[T,T^{-1}]$, and let ${\mathfrak X},{\mathfrak X}',{\ol\gtX},{\ol\gtX}'$ be as above. Then ${\ol\gtX}'$ is a rational subdomain in ${\ol\gtX}$ in the sense of \cite{Hub2}. On the other hand, ${\mathfrak X}'$ is not an affinoid domain in ${\mathfrak X}$. Note that actually $({\mathfrak X}',{\mathcal O}_{{\mathfrak X}'})\widetilde{\to}({\mathfrak X},{\mathcal O}_{\mathfrak X})\widetilde{\to} X:={\rm Spec}(A)$, but the sheaves ${\mathcal M}_{\mathfrak X}$ and ${\mathcal M}_{{\mathfrak X}'}$ are not isomorphic at the point $x\in X$ with $T=0$. This can happen because the local (and even a valuation) ring ${\mathcal O}_{X,x}$ can be provided with two different structures of a semi-valuation ring by choosing the semi-fraction rings ${\mathcal M}_{{\mathfrak X}',x}=k(T)$ or ${\mathcal M}_{{\mathfrak X},x}={\mathcal O}_{X,x}$. (See also Remark \ref{lastrem}(ii).) \end{rem} \begin{theor} \label{affbaseth} The affinoid subdomains of ${\mathfrak X}$ form a basis of its topology. \end{theor} \begin{proof} It follows from Lemma \ref{domlem} that it suffices to prove that for any affine subset ${\ol\gtX}_0={\rm Spa}(B_0,A_0)$ in ${\ol\gtX}$ and a point ${\bf y}=(y,R,\phi)\in{\mathfrak X}\cap{\ol\gtX}_0$ there exists an affinoid subdomain ${\rm Val}_{\ol Y}({\ol X})$ containing ${\bf y}$ and contained in ${\ol\gtX}_0$. Moreover, we can assume that $X={\rm Spec}(A)$ is affine because ${\mathfrak X}$ is covered by open subdomains of the form ${\rm Val}_{Y'}(X')$, where $X'={\rm Spec}(A)$ is an open subscheme of $X$ and $Y'=X'\times_X Y$. In order to construct ${\rm Val}_{\ol Y}({\ol X})$ as required we will extend diagram (\ref{twosqdiag}) to the following one, where ${\ol Y}={\rm Spec}({\ol B})$ and ${\ol X}={\rm Spec}({\ol A})$ will only be defined at the end of the proof. Recall that ${\mathcal O}_{\bf y}$ is a semi-valuation ring with semi-fraction ring ${\mathcal O}_{Y,y}$ and such that ${\mathcal O}_{\bf y}/m_y=R$. $$ \xymatrix{ {\rm Spec}(k(y))\ar[r]\ar[d]& {\rm Spec}({\mathcal O}_{Y,y})\ar[r]\ar[d]& {\ol Y}\ar[r]\ar[d]& Y\ar[d]\\ {\rm Spec}(R)\ar[r]& {\rm Spec}({\mathcal O}_{\bf y})\ar[r]& {\ol X}\ar[r]& X } $$ Since ${\rm Spec}(R)\times_X Y$ is closed in ${\rm Spec}(R)\times Y$ by separatedness of $X$, Lemma \ref{minvallem}(ii) implies that the morphism $h:{\rm Spec}(k(y))\to{\rm Spec}(R)\times Y$ is a closed immersion.
To explain the strategy of the proof we remark that the morphism ${\rm Spec}({\mathcal O}_{Y,y})\to{\rm Spec}({\mathcal O}_{\bf y})\times Y$ is a closed immersion (actually it can be proved by the same argument as we use below), and our strategy will be to approximate ${\mathcal O}_{\bf y}$ and ${\mathcal O}_{Y,y}$ by $A$-rings ${\ol A}$ and ${\ol B}$ so that ${\ol A}$ is finitely generated over $A$, ${\ol Y}={\rm Spec}({\ol B})$ is a neighborhood of $y$ and ${\ol Y}\to{\ol X}\times Y$ is a closed immersion. It will be more convenient to work with affine schemes and $Y$ is the only non-affine scheme in our consideration, so let us cover $Y$ with open affine subschemes $Y_i={\rm Spec}(B_i),Z_j={\rm Spec}(C_j)$, where $1\le i\le n$, $1\le j\le m$, $y\in Y_i$ and $y\notin Z_j$. Since ${\rm Spec}(B_0)$ contains $y$ by our assumptions, we also set $Y_0={\rm Spec}(B_0)$. For each $i$, $h$ factors through a closed immersion ${\rm Spec}(k(y))\to{\rm Spec}(R)\times Y_i$, hence the images of $R$ and $B_i$ generate $k(y)$. Now, we will find a neighborhood ${\ol Y}={\rm Spec}({\ol B})$ of $y$ which is contained in all $Y_i$'s and satisfies the following condition: for each $i$, ${\ol B}$ is a localization of the form $(B_i)_{f_i}$ and, the most important, we have that $f_i(y)\notin m_R$. Let us (until the end of this paragraph only) call {\em $R$-localization} for localization of an affine neighborhood ${\rm Spec}(C)$ of $y$ at an element $f$ such that $f(y)\notin m_R$. Obviously, $R$-localizations are transitive and we claim that the family of $R$-localizations of each $Y_i$ form a basis of neighborhoods of $y$. Indeed, for any element $f\in B_i$ with $f(y)\neq 0$ we can find $g\in B_i$ with $f(y)g(y)\notin m_R$ (we use that $B_i(y)$ generates $k(y)$ over $R$, so it contains elements of arbitrary large valuation). Thus, $(B_i)_{fg}$ is an $R$-localization of $B_i$ where $f$ is inverted and we obtain that the maximal (infinite) $R$-localization of $B_i$ is actually ${\mathcal O}_{Y,y}$. Now, set ${\rm Spec}(B)=\cap_{i=1}^n Y_i$ and find $R$-localizations $Y'_i={\rm Spec}((B_i)_{g_i})$ contained in ${\rm Spec}(B)$, and let ${\ol Y}={\rm Spec}({\ol B})$ be an $R$-localization of ${\rm Spec}(B)$ contained in all $Y'_i$. Then ${\ol Y}$ is an $R$-localization of each $Y'_i$, hence an $R$-localization of each $Y_i$ too. So, ${\ol B}=(B_i)_{f_i}$ is as required. Let ${\ol A}$ be the preimage of $R$ under the character ${\ol B}\to k(y)$ corresponding to $y$. Clearly ${\ol A}$ contains each element $f_i^{-1}$, hence the ring ${\ol B}(y)=B_i(y)[f_i^{-1}(y)]$ is generated by ${\ol A}(y)$ and $B_i(y)$. So, we obtain epimorphisms ${\ol A}\otimes B_i\to k(y)$, and then the homomorphisms $h_i:{\ol A}\otimes B_i\to{\ol B}$ are also surjective because ${\ol A}$ contains the kernel $p_y$ of ${\ol B}\to k(y)$. In particular, each morphism ${\ol Y}\to{\ol X}\times Y_i$ is a closed immersion. We claim that actually, ${\alpha}:{\ol Y}\to{\ol X}\times Y$ is a closed immersion, and to prove this we should check in addition that the morphisms ${\alpha}_j:{\ol Y}\times_Y Z_j\to{\ol X}\times Z_j$ with $1\le j\le m$ are closed immersion. By separatedness of $Y$ the source is affine, hence ${\ol Y}\times_Y Z_j={\rm Spec}({\ol C}_j)$ where ${\ol C}_j$ is generated by the images of $c_j:C_j\to{\ol C}_j$ and $b_j:{\ol B}\to{\ol C}_j$. 
Since our claim about ${\alpha}$ would follow if we prove that the homomorphisms $h'_j:{\ol A}\otimes C_j\to{\ol C}_j$ are surjective, it remains only to prove that for each $j$ the image of $h'_j$ contains the image of $b_j$. Since $y\in{\ol Y}$ and $y\notin Z_j$ we have that $b_j(p_y){\ol C}_j={\ol C}_j$, and hence the equality ${\ol C}_j=b_j({\ol B})c_j(C_j)$ can be strengthened as ${\ol C}_j=b_j(p_y)c_j(C_j)$, i.e. ${\ol C}_j$ is actually generated by $b_j(p_y)$ and $c_j(C_j)$. Since $p_y\subset{\ol A}$ by the definition of ${\ol A}$, we obtain that $h'_j$ is onto, as claimed. Now, the morphism ${\ol Y}\to{\ol X}$ is almost as required: ${\ol Y}$ is open in $Y$ and ${\alpha}$ is a closed immersion. In addition, since ${\bf y}\subset{\ol\gtX}_0$, the image of $A_0$ under the homomorphism $A_0\to B_0\to{\ol B}\to{\ol B}(y)$ is contained in $R$, and hence the image of $A_0$ in ${\ol B}$ is actually contained in ${\ol A}$. So, it only remains to decrease the $A$-subalgebra ${\ol A}\subset{\ol B}$ so that ${\ol X}={\rm Spec}({\ol A})$ becomes of finite type over $X$ but all good properties are preserved: ${\alpha}$ is still a closed immersion, and ${\ol A}$ contains the image of $A_0$ in ${\ol B}$. As we saw, ${\alpha}$ being a closed immersion is equivalent to surjectivity of the homomorphisms $h_i:{\ol A}\otimes B_i\to{\ol B}$ and $h'_j:{\ol A}\otimes C_j\to{\ol C}_j$. Since the homomorphisms $B_i\to{\ol B}$ and $C_j\to{\ol C}_j$ are of finite type, all we need for surjectivity of $h_i$'s and $h'_j$'s is a finite subset $S\subset{\ol A}$. So, replacing ${\ol A}$ with its $A_0$-subalgebra generated by $S$ we obtain ${\ol X}$ as required. Obviously, ${\rm Val}_{\ol Y}({\ol X})$ is an affinoid domain containing ${\bf y}$, and ${\rm Val}_{\ol Y}({\ol X})$ is contained in ${\ol\gtX}_0$ because ${\ol Y}$ is an open subscheme in $Y_0$ and the morphism ${\ol Y}\to X_0$ (obtained as ${\ol Y}\to Y_0\to X_0$) factors through ${\ol X}$. \end{proof} \begin{cor}\label{qccor} The space ${\mathfrak X}$ is qcqs. \end{cor} \begin{proof} Any open subdomain is quasi-compact by Proposition \ref{qcprop}, and their intersection is quasi-compact by Lemma \ref{domlem}. Since open subdomains generate the topology of ${\mathfrak X}$ by Theorem \ref{affbaseth} we obtain the corollary. \end{proof} Recall that we defined in Remark \ref{basarem} the contraction $\pi_{\mathfrak X}:{\ol\gtX}\to{\mathfrak X}$ and used it to define the maps ${\rm Val}(g):{\mathfrak X}'\to{\mathfrak X}$ for $g:f'\to f$. \begin{cor} The contraction $\pi_{\mathfrak X}$ is continuous. In particular, the maps ${\rm Val}(g)$ are continuous. \end{cor} \begin{proof} Since open subdomains ${\mathfrak X}'={\rm Val}_{Y'}(X')$ form a basis of the topology of ${\mathfrak X}$ by Theorem \ref{affbaseth}, it suffices to prove that the preimage of ${\mathfrak X}'$ in ${\rm Spa}(Y,X)$ is open. Since the minimality condition in ${\rm Spa}(Y,X)$ and ${\rm Spa}(Y',X')$ agree, $\pi^{-1}({\mathfrak X}')$ coincides with the open affine subset ${\rm Spa}(Y',X')$. \end{proof} \subsection{$Y$-blow ups of $X$} \label{blowupsec} In this section we assume that $f$ is affine. Then we will show that there exists a large family of projective $Y$-modifications of $X$ having good functorial properties. Using these morphisms we will be able to describe the set ${\rm Val}_Y(X)$ very concretely. 
Since the results of \S\ref{blowupsec} are inspired in part by Raynaud's theory of formal models, we will sometimes indicate similarity between our results and Raynaud's theory by referencing to \cite{BL}. \begin{defin}\label{blowdef} A $Y$-modification $g_i:X_i\to X$ is called a {\em $Y$-blow up of $X$} if there exists a $g_i$-ample ${\mathcal O}_{X_i}$-module ${\mathcal L}$ provided with a homomorphism $\veps:{\mathcal O}_{X_i}\to{\mathcal L}$ such that $f_i^*(\veps):{\mathcal O}_Y\widetilde{\to} f_i^*({\mathcal L})$. We call $\veps$ a {\em $Y$-trivialization} of ${\mathcal L}$; actually it is a section of ${\mathcal L}$ that is invertible on the image of $Y$. \end{defin} It will be more convenient to say $X$-ample instead of $g_i$-ample in the sequel. \begin{lem} \label{blowuplem} The $Y$-blow ups satisfy the following properties. (i) Suppose that $X_j\to X_i$ and $X_i\to X$ are $Y$-modifications such that $X_j$ is a $Y$-blow up of $X$. Then $X_j$ is a $Y$-blow up of $X_i$. (ii) The family of $Y$-blow ups of $X$ is filtered. (iii) The composition of $Y$-blow ups $g_{ij}:X_j\to X_i$ and $g_i:X_i\to X$ is a $Y$-blow up. \end{lem} \begin{proof} The first statement is obvious because any $X$-ample ${\mathcal O}_{X_j}$-module ${\mathcal L}$ is $X_i$-ample, and the notion of $Y$-trivialization of ${\mathcal L}$ depends only on the morphism $f_j:Y\to X_j$. (ii) Let $X_i,X_j$ be two $Y$-blow ups of $X$. Find $X$-ample sheaves ${\mathcal L}_i,{\mathcal L}_j$ with $Y$-trivializations $\veps_i,\veps_j$. Then the $X$-proper scheme $X_{ij}=X_i\times_X X_j$ possesses an $X$-ample sheaf ${\mathcal L}=p_i^*({\mathcal L}_1)\otimes p_j^*({\mathcal L}_2)$, where $p_i,p_j$ are the projections. The natural isomorphism ${\mathcal O}_{X_{ij}}\widetilde{\to}{\mathcal O}_{X_{ij}}\otimes{\mathcal O}_{X_{ij}}$ followed by $f_i^*(\veps_i)\otimes f_j^*(\veps_j):{\mathcal O}_{X_{ij}}\otimes{\mathcal O}_{X_{ij}}\to{\mathcal L}$ provides a $Y$-trivialization of ${\mathcal L}$. Consider the scheme-theoretic image $X'$ of $Y$ in $X_{ij}$, and let ${\mathcal L}'$ and $\veps'$ be the pull backs of ${\mathcal L}$ and $\veps$. Then $(X',{\mathcal L}',\veps')$ is a $Y$-blow up of $X$ which dominates $X_i$ and $X_j$. (iii) Choose an $X$-ample ${\mathcal O}_{X_i}$-sheaf ${\mathcal L}_i$ and an $X_i$-ample ${\mathcal O}_{X_j}$-sheaf ${\mathcal L}_j$ with $Y$-trivializations $\veps_i$ and $\veps_j$. By \cite[$\rm II$, 4.6.13(ii)]{ega}, the sheaf ${\mathcal L}_j\otimes g_{ij}^*({\mathcal L}_i^{\otimes n})$ is $X$-ample for sufficiently large $n$. It remains to notice that the composition of ${\mathcal O}_{X_j}\widetilde{\to}{\mathcal O}_{X_j}\otimes{\mathcal O}_{X_j}^{\otimes n}$ with $\veps_j\otimes g_{ij}^*(\veps_i^{\otimes n})$ is a $Y$-trivialization. \end{proof} We will need an explicit description of $Y$-blow ups. Let ${\mathcal E}\subset f_*({\mathcal O}_Y)$ be a finitely generated ${\mathcal O}_X$-submodule containing the image of ${\mathcal O}_X$, and let ${\mathcal E}^n\subset f_*({\mathcal O}_Y)$ denote the ${\mathcal O}_X$-modules which are powers of ${\mathcal E}$ with respect to the natural multiplication on $f_*({\mathcal O}_Y)$ (so ${\mathcal E}^0$ is the image of ${\mathcal O}_X$). We claim that $X_{\mathcal E}:={\bf Proj}(\oplus_{n=0}^\infty{\mathcal E}^n)$ is a $Y$-modification of $X$.
Clearly, $X_{\mathcal E}$ is $X$-projective and there is a natural morphism $g_{\mathcal E}:Y={\bf Spec}(f_*({\mathcal O}_Y))\to{\bf Spec}(\cup_{n=0}^\infty{\mathcal E}^n)$ where the union is taken inside $f_*({\mathcal O}_Y)$. The target of $g_{\mathcal E}$ is the $X$-affine chart of $X_{\mathcal E}$ defined by non-vanishing of the section $s\in\Gamma({\mathcal E})$ which comes from the unit section of ${\mathcal O}_X$, in particular, a map $Y\to X_{\mathcal E}$ naturally arises. In addition, the very ample sheaf ${\mathcal O}_{X_{\mathcal E}}(1)$ on $X_{\mathcal E}$ has a $Y$-trivialization ${\mathcal O}_{X_{\mathcal E}}\to{\mathcal O}_{X_{\mathcal E}}(1)$ induced by $s$. So, among all properties of $Y$-blow ups it remains to check that $g_{\mathcal E}$ is schematically dominant. The latter can be checked locally over $X$, so assume that $X={\rm Spec}(A)$, $Y={\rm Spec}(B)$ and $E\subset B$ is an $A$-module containing $1$. Then $X_E={\rm Proj}(\oplus_{n=0}^\infty E^n)$ is glued from affine charts $(X_E)_b$ given by non-vanishing of elements $b\in E$, so it suffices to show that the morphism ${\alpha}:Y\times_{X_E}(X_E)_b\to(X_E)_b$ is schematically dominant. Note that the source is the localization of $Y$ at $b$, and so it is isomorphic to ${\rm Spec}(B_b)$, and the target is ${\rm Spec}(C)$ where $C$ is the zeroth graded component of $(\oplus_{n=0}^\infty E^n)_b$. But $C=\injlim_n b^{-n}(E^n/I_n)$, where $I_n$ is the submodule of elements killed by a power of $b$, and the kernel of the homomorphism $E^n\hookrightarrow B\to B_b$ is $I_n$. Hence $b^{-n}(E^n/I_n)\hookrightarrow B_b$ and therefore $C\hookrightarrow B$. In particular, ${\alpha}$ is schematically dominant. \begin{lem} \label{expblowuplem} Any $Y$-blow up of $X$ is isomorphic to some $X_{\mathcal E}$ as a $Y$-blow up of $X$. \end{lem} \begin{proof} Let $g_i:X_i\to X$ be a $Y$-blow up. Find an $X$-ample ${\mathcal O}_{X_i}$-module ${\mathcal L}$ with a $Y$-trivialization $\veps:{\mathcal O}_{X_i}\to{\mathcal L}$. Then there is a closed immersion of $X$-schemes $h:X_i\to P:={\bf Proj}(\oplus_{n=0}^\infty(g_i)_*{\mathcal L}^{\otimes n})$ and the morphism $h\circ f_i:Y\to X_i\to P$ factors through the chart of $P$ given by non-vanishing of the section $s\in\Gamma((g_i)_*{\mathcal L})$ corresponding to $\veps$. The latter chart is of the form ${\bf Spec}({\mathcal A})$ where ${\mathcal A}$ is the zeroth graded component of the localization $(\oplus_{n=0}^\infty(g_i)_*{\mathcal L}^{\otimes n})_s$. Composing the ${\mathcal O}_X$-homomorphism $(g_i)_*{\mathcal L}\to{\mathcal A}$ that takes $u$ to $s^{-1}u$ with the ${\mathcal O}_X$-homomorphism ${\mathcal A}\to f_*({\mathcal O}_Y)$ corresponding to $f_i$ we obtain a homomorphism $(g_i)_*{\mathcal L}\to f_*({\mathcal O}_Y)$ that takes $s$ to the unit section. Now we can define ${\mathcal E}$ to be the image of $(g_i)_*{\mathcal L}$ in $f_*({\mathcal O}_Y)$, and we claim that actually $X_i\widetilde{\to} X_{\mathcal E}$ as a $Y$-modification of $X$. Indeed, the obvious epimorphism $\oplus_{n=0}^\infty(g_i)_*{\mathcal L}^{\otimes n}\to\oplus_{n=0}^\infty{\mathcal E}^n$ corresponds to a closed immersion $X_{\mathcal E}\to P$ which agrees with the morphisms $Y\to X_{\mathcal E}$ and $Y\to P$. Since, the first morphism is schematically dominant, $X_{\mathcal E}$ is the schematic image of $Y$ in $P$, hence it must coincide with $X_i$ as the closed subscheme of $P$. \end{proof} \begin{cor} \label{extblowuplem} Assume that $X'$ is an open subscheme of $X$ and $Y'=f^{-1}(X')$. 
Then any $Y'$-blow up $X'_i\to X'$ extends to a $Y$-blow up $X_i\to X$. \end{cor} \begin{proof} Let $f':Y'\to X'$ be the restriction of $f$, so $f'_*({\mathcal O}_{Y'})$ is the restriction of $f_*({\mathcal O}_Y)$ on $X'$. By the lemma, a $Y'$-blow up of $X'$ is determined by a finitely generated ${\mathcal O}_{X'}$-submodule ${\mathcal E}'\subset f'_*({\mathcal O}_{Y'})$ containing the image of ${\mathcal O}_{X'}$. By \cite[6.9.7]{egaI}, one can extend ${\mathcal E}'$ to a finitely generated ${\mathcal O}_X$-submodule ${\mathcal E}\subset f_*({\mathcal O}_Y)$. Replacing ${\mathcal E}$ by ${\mathcal E}+{\mathcal O}_X$, if necessary, we can achieve that ${\mathcal E}$ contains the image of ${\mathcal O}_X$. Now, ${\mathcal E}$ defines a required extension of the blow up. \end{proof} \begin{rem} (i) Lemma \ref{expblowuplem} indicates that the notion of $Y$-blow up is in some sense a generalization of the notion of $U$-admissible blow up, where $i:U\hookrightarrow X$ is a schematically dense open subscheme, to the case of an arbitrary affine morphism $Y\to X$. Indeed, there is much similarity, but the notions are not equivalent in general: both $U$-admissible blow ups and $U$-blow ups are of the form ${\rm Proj}(\oplus_{n=0}^\infty{\mathcal E}^n)$, but in the first case ${\mathcal E}$ is an ${\mathcal O}_X$-submodule of ${\mathcal O}_X$ which is trivial over $U$, and in the second one ${\mathcal E}$ is an ${\mathcal O}_X$-submodule of $i_*({\mathcal O}_U)$ that contains ${\mathcal O}_X$ (so, it is trivial over $U$ as well). The important case when these notions agree was pointed out by the referee: it follows from \cite[$\rm II$, 3.1.8(iii)]{ega} that $U$-admissible blow ups and $U$-blow ups agree when $X\setminus U$ is the zero set of an invertible sheaf of ideals. (ii) Basic facts concerning compositions, extensions, etc., (see the above lemmas) hold for both families of $U$-modifications, but a slight advantage of $U$-blow ups is that the proofs seem to be easier. For example, compare with \cite[1.2]{Con} where one proves that $U$-admissible blow ups are preserved by compositions. \end{rem} The following lemma is an analog of \cite[4.4]{BL}. \begin{lem} \label{submodlem} Given a quasi-compact open subset ${\mathfrak U}\subset{\mathfrak X}={\rm Val}_Y(X)$, there exists a $Y$-modification $X'\to X$ and an open subscheme $U\subset X'$ such that ${\mathfrak U}$ is the preimage of $U$ in ${\mathfrak X}$. \end{lem} \begin{proof} If $X_1,\dots , X_n$ form a finite open affine covering of $X$ and $Y_i=f^{-1}(X_i)$ then ${\mathfrak X}_i={\rm Val}_{Y_i}(X_i)$ form an open covering of ${\mathfrak X}$ by Lemma \ref{easylem}. It suffices to separately solve our problem for each ${\mathfrak X}_i$ with ${\mathfrak U}_i:={\mathfrak U}\cap{\mathfrak X}_i$ because any $Y_i$-blow up of $X_i$ extends to a $Y$-blow up of $X$, and $Y$-blow ups of $X$ form a filtered family. Thus, we can assume that $X={\rm Spec}(A)$, and then $Y={\rm Spec}(B)$. We can furthermore assume that ${\mathfrak U}={\mathfrak X}\cap{\rm Spa}(B_b,A[a_1/b,\dots , a_n/b])$ with $a_i,b\in B$ because as we saw in the proof of Lemma \ref{spahomlem}, the sets ${\rm Spa}(B_b,A[a_1/b,\dots , a_n/b])$ form a basis of the topology of ${\rm Spa}(B,A)$. Now, the morphism $Y\to{\rm Proj}(A[T_1,T_{a_1},\dots , T_{a_n},T_b])$ defined by $(1,a_1,\dots , a_n,b)$ determines a required $Y$-blow up $X'\to X$ with $U$ given by the condition $T_b\neq 0$. 
\end{proof} \begin{cor} \label{homeomcor} The map $\psi:{\rm Val}_Y(X)\to{\rm RZ}_Y(X)$ is a homeomorphism. \end{cor} \begin{proof} Recall that $\psi$ is surjective and continuous by Propositions \ref{goodcaseprop} and \ref{qcprop}, respectively. From other side, the lemma implies that $\psi$ is injective and open. Indeed, any open quasi-compact ${\mathfrak U}\subset{\mathfrak X}$ is the full preimage of some $U\subset X'$ for a $Y$-modification $X'\to X$, hence $\psi({\mathfrak U})$, which is the full preimage of $U$ in ${\rm RZ}_Y(X)$, is open. In addition, since any pair of different points of ${\mathfrak X}$ is distinguished by some open quasi-compact set ${\mathfrak U}\subset{\mathfrak X}$, their images in an appropriate $X'$ do not coincide. \end{proof} We use the corollary to identify ${\mathfrak X}$ with ${\rm RZ}_Y(X)$ when $f$ is decomposable. In particular, this provides ${\mathfrak X}$ with a sheaf ${\mathcal O}_{\mathfrak X}$ of regular functions which was earlier defined on ${\rm RZ}_Y(X)$, and for any point ${\bf x}\in{\mathfrak X}$, thanks to Proposition \ref{goodcaseprop}, the semi-valuation ring ${\mathcal O}_{\bf x}$ obtains a new interpretation as the stalk of ${\mathcal O}_{\mathfrak X}$ at ${\bf x}$. As another corollary of Lemma \ref{submodlem} we obtain the following version of Chow lemma. \begin{cor} \label{Chowcor} Any $Y$-modification ${\ol X}\to X$ is dominated by a $Y$-blow up of $X$. \end{cor} \begin{proof} Let ${\ol U}_1,\dots ,{\ol U}_n$ be an affine covering of ${\ol X}$, and let $Y_i$ and ${\mathfrak U}_i$ denote the preimages of ${\ol U}_i$ in $Y$ and ${\mathfrak X}$, respectively. By Lemma \ref{submodlem}, we can find a $Y$-blow up $X'\to X$ and a covering $\{U'_i\}$ of $X'$, whose preimage in ${\mathfrak X}$ coincides with $\{{\mathfrak U}_i\}$. Note that the scheme-theoretic image $X''$ of $Y$ in ${\ol X}\times_X X'$ is a $Y$-modification of both $X'$ and ${\ol X}$. So, it suffices to show that $X''$ is a $Y$-blow up of $X$. Since the preimages of ${\ol U}_i$ and $U'_i$ in ${\mathfrak X}$ coincide, their preimages in $X''$ coincide too, and we will denote them as $U''_i\hookrightarrow X''$. Consider the induced $Y$-modification $h:X''\to X'$ with restrictions $h_i:U''_i\to U'_i$. For any $1\le i\le n$, the proper morphism $h_i$ is affine because the morphism ${\ol U}_i\to X$ is affine and $U''_i$ is closed in $U'_i\times_X{\ol U}_i$. Thus, $h_i$ is finite, and therefore $h$ is finite. We claim that finiteness of $h$ implies that it is a $Y$-blow up (this claim is an analog of \cite[4.5]{BL}). Indeed, ${\mathcal O}_{X''}$ is very ample relatively to $h$ because $h$ is affine, and the identity homomorphism gives its $Y$-trivialization. Thus, $X''$ is a $Y$-blow up of $X$ by Lemma \ref{blowuplem}(iii). \end{proof} \subsection{Decomposable morphisms} \label{mainsec} In this section we will complete a basic description of the relative Riemann-Zariski space ${\mathfrak X}$ associated with a separated morphism $f:Y\to X$ between qcqs schemes by proving that the finite union of open domains is an open domain, and any open domain in ${\mathfrak X}$ is of the form ${\rm Val}_{\ol Y}({\ol X})$ where the morphism ${\ol Y}\to{\ol X}$ is affine and schematically dominant. The first claim actually means that any quasi-compact open subset is an open domain, i.e. admits a model by a morphism of schemes, and the second claim states that this model can be chosen to be affine.
In particular, applying the second claim to ${\mathfrak X}$ itself we obtain a bijection ${\rm Val}_{{\ol Y}}({\ol X})\widetilde{\to}{\rm Val}_Y(X)$ with ${\ol Y}=Y$ and affine morphism ${\ol Y}\to{\ol X}$. But then ${\ol X}$ is proper over $X$ by the valuative criterion \ref{valcritprop}, and hence $X$ admits a $Y$-modification ${\ol X}$ such that the morphism $Y\to{\ol X}$ is affine. Thus, the morphism $f:Y\to X$ is decomposable and this gives a new proof of Theorem \ref{decompth}. In particular, one obtains new proofs of Nagata compactification and Thomason approximation theorems. \begin{theor}\label{domth} Let $f:Y\to X$ be a separated morphism between qcqs schemes and ${\mathfrak X}={\rm Val}_Y(X)$. Then (i) open domains in ${\mathfrak X}$ are closed under finite unions, (ii) any open domain ${\mathfrak X}'$ is of the form ${\rm Val}_{\ol Y}({\ol X})$, where the morphism ${\ol Y}\to{\ol X}$ is affine and schematically dominant. \end{theor} \begin{proof} Note that any affinoid domain satisfies the assertion of (ii) (since schematical dominance is achieved by simply replacing ${\ol X}$ with the schematic image of ${\ol Y}$), and by Theorem \ref{affbaseth} and Corollary \ref{qccor}, ${\mathfrak X}'$ admits a finite affinoid covering. Therefore, both (i) and (ii) would follow if we prove the following claim: the union of two domains satisfying the assertion of (ii) is an open domain that satisfies the assertion of (ii). So, we assume that ${\mathfrak X}'={\mathfrak X}_1\cup{\mathfrak X}_2$ where ${\mathfrak X}_i={\rm Val}_{Y_i}(X_i)$ with $i\in\{1,2\}$ are open subdomains with affine morphisms $Y_i\to X_i$. Set ${\mathfrak X}_{12}={\mathfrak X}_1\cap{\mathfrak X}_2$ and $Y_{12}=Y_1\cap Y_2$. In the sequel, we will act as in Step 3 of the proof of Theorem \ref{approxtheor}, and the main difference is that we will use $Y_i$-blow ups instead of affine morphisms. For reader's convenience, we provide a commutative diagram containing the main objects which were and will be introduced. $$ \xymatrix{ Y_1\ar[d] & & Y_{12} \ar[d]\ar@{_{(}->}[ll]\ar@{^{(}->}[rr] & & Y_2\ar[d]\\ {\mathfrak X}_1\ar[d] & & {\mathfrak X}_{12} \ar[d]\ar@{_{(}->}[ll]\ar@{^{(}->}[rr] & & {\mathfrak X}_2\ar[d]\\ Z_1\ar[d] & & Z_{12} \ar[ld]\ar[rd]\ar@{_{(}->}[ll]\ar@{^{(}->}[rr] & & Z_2\ar[d]\\ X_1 & X'_1\ar@{_{(}->}[l]& & X'_2\ar@{^{(}->}[r] & X_2} $$ Since $Y_i$'s are $X_i$-affine, Lemma \ref{submodlem} implies that we can replace $X_i$'s by their $Y_i$-blow ups such that each $X_i$ contains an open subscheme $X'_i$, whose preimage in ${\mathfrak X}_i$ coincides with ${\mathfrak X}_{12}$. Then the preimage of $X'_i$ in $Y$ is, obviously, $Y_{12}$. It can be impossible to glue $X_i$'s along $X'_i$'s, but by Lemma \ref{easylem}(ii), we at least know that ${\rm Val}_{Y_{12}}(X'_i)\widetilde{\to}{\mathfrak X}_{12}$ for $i=1,2$. Let $T$ be the scheme-theoretic image of $Y_{12}$ in $X'_1\times_X X'_2$; it is obviously separated over $X'_i$'s. Moreover, ${\rm Val}_{Y_{12}}(T)\widetilde{\to}{\rm Val}_{Y_{12}}(X'_1)\cap{\rm Val}_{Y_{12}}(X'_2)={\mathfrak X}_{12}$ by Lemma \ref{domlem}, and, therefore, $T$ is a $Y_{12}$-modification of $X'_i$'s by the valuative criterion \ref{valcritprop}. By Corollary \ref{Chowcor}, we can find a $Y_{12}$-blow up $T'\to X'_1$, which dominates $T$. It still can happen that $T'$ is not a $Y_{12}$-blow up of $X'_2$, but it is dominated by a $Y_{12}$-blow up $Z_{12}\to X'_2$.
Then $Z_{12}\to T'$ is a $Y_{12}$-blow up by Lemma \ref{blowuplem}(i), and hence $Z_{12}\to X'_1$ is a $Y_{12}$-blow up by Lemma \ref{blowuplem}(iii). By Lemma \ref{extblowuplem}, we can extend the $Y_{12}$-blow ups $Z_{12}\to X'_i$ to $Y_i$-blow ups $Z_i\to X_i$. Then, the finite type $X$-schemes $Z_i$ can be glued along the subschemes $X$-isomorphic to $Z_{12}$ to a single $X$-scheme ${\ol X}$ of finite type, and the schematically dominant affine morphisms $Y_i\to Z_i$ glue to a single schematically dominant affine morphism ${\ol Y}\to{\ol X}$. Note that ${\rm Val}_{Y_i}(Z_i)={\mathfrak X}_i$ is the preimage of $Z_i$ in ${\rm Val}_{\ol Y}({\ol X})$, in particular, the latter is covered by its open subdomains ${\mathfrak X}_i$, $i\in\{1,2\}$. Now, it remains to show that ${\rm Val}_{\ol Y}({\ol X})$ is an open subdomain in ${\mathfrak X}$, since this would immediately imply that ${\rm Val}_{\ol Y}({\ol X})$ is a required model of ${\ol\gtX}$. The morphism ${\alpha}:{\ol Y}\to{\ol X}\times_X Y$ is glued from the morphisms ${\alpha}_i:Y_i\to Z_i\times_X Y$ because $Y_i$ is the preimage of $Z_i$ in $Y$, but ${\alpha}_i$'s are closed immersions by the construction. So, ${\alpha}$ is a closed immersion as well, and we are done. \end{proof} \begin{cor}\label{lastcor} The map $\eta:Y\to{\mathfrak X}:={\rm RZ}_Y(X)$ is injective, each point ${\bf x}\in{\rm RZ}_Y(X)$ possesses a unique minimal generalization $y$ in $\eta(Y)$, ${\mathcal M}_{{\mathfrak X},{\bf x}}\widetilde{\to}{\mathcal O}_{Y,y}$, and the stalk ${\mathcal M}_{{\mathfrak X},{\bf x}}$ is the semi-fraction ring of the semi-valuation ring ${\mathcal O}_{{\mathfrak X},{\bf x}}$. In particular, ${\mathcal O}_{\mathfrak X}$ is a subsheaf of ${\mathcal M}_{\mathfrak X}$. \end{cor} \begin{proof} By Theorem \ref{domth} and Corollary \ref{homeomcor}, we can identify ${\mathfrak X}$ with ${\rm Val}_Y(X)$. So, a point ${\bf x}$ can be interpreted as an $X$-valuation $(y,R,\phi)$ on $Y$. Then it is clear that the map $\eta$ sends $y\in Y$ to a trivial valuation $(y,k(y),f|_y)$ (with the obvious morphism $f|_y:{\rm Spec}(k(y))\to X$), and for an arbitrary ${\bf x}=(y,R,\phi)$ its minimal generalization in $\eta(Y)$ is $(y,k(y),f|_y)$. Uniqueness of minimal generalization implies that the stalk of ${\mathcal M}_{\mathfrak X}=\eta_*({\mathcal O}_Y)$ at ${\bf x}$ is simply ${\mathcal O}_{Y,y}$, so it remains to recall that the latter is the semi-fraction field of the semi-valuation ring ${\mathcal O}_{\bf x}$ defined in \S\ref{firstsec}, which coincides with the stalk ${\mathcal O}_{{\mathfrak X},{\bf x}}$ by Proposition \ref{goodcaseprop}. \end{proof} \end{document}
arXiv
Is there a safe but weird distance from black hole merger? I wish to create a world such that it can support the following plot point: the world experiences gravitational waves that are directly noticeable to the human population (i.e. they can feel or see the effects themselves without instruments) without being so strong that everything gets spaghettified. The most obvious way to arrange this seems to be to have a pair of black holes undergo a merger at a suitable distance: not so close that everything gets ripped apart, and not so far away that the gravitational waves have become un-noticeably weak. My question has 2 parts but they are directly related so it wouldn't make sense to split this into 2 separate questions: (a) assuming a pair of 30 solar mass black holes as detected the other year by LIGO, what distance from the event would provide gravitational waves of the weird but not deadly strength that I need; and (b) at such a distance from the event, would you be safe from other consequences of the black hole merger, or would something else such as the intensity of high energy particle emissions kill you anyway? I'm open to any kind of habitat for my world, it could be a planet or a deep space habitat or generation ship or whatever. If the gravitational waves would rule out particular kinds of world then I'd be keen to hear why (e.g. maybe waves strong enough to be noticeable to humans would shatter a planet but a space habitat might be small enough to survive). In my story I may make limited use of unobtainium for interstellar travel, but I want the physical effects of the black hole to be as hard science as possible. If a black hole merger is out of the question as too dangerous, I'd be happy to receive reality-check level suggestions of alternative events that could safely create the kind of noticeable gravitational wave I want. physics hard-science black-holes Adrian - Justice for MonicaAdrian - Justice for Monica $\begingroup$ Gravitational waves are mind-bogglingly tiny. Fractions of the diameter of a proton is the displacement or size change due to the passage of a gravitational wave. To make your idea work you will need to rewrite general relativity for it to be remotely credible. Otherwise you have to be so close to the merging black holes you would be swallowed too. Asking for hard-science answers is wrong. You might get close with "science-based". $\endgroup$ – a4android Nov 28 '19 at 11:06 $\begingroup$ Gravitational waves big enough for a meatsack to feel are world-endingly powerful. $\endgroup$ – Starfish Prime Nov 28 '19 at 11:21 $\begingroup$ This is exactly why I tagged hard science rather than science-based. @a4android Obviously the gravitational waves we detect using ligo are imperceptible, but that is because the black holes are extremely far away. I'd like an answer to include calculations to show what distance you would have to be to feel them but not be killed. And starfish prime, I'd ideally like numbers to back up your assertion, but world-ending isn't necessarily a problem if I can put the characters in a small spaceship or habitat that could survive. $\endgroup$ – Adrian - Justice for Monica Nov 28 '19 at 11:28 $\begingroup$ My source told me you need to get close like in-your-face close to experience a distortion of your body by millimetres, then again the tidal force would have long already turn you into a beam of particles. 
$\endgroup$ – user6760 Nov 28 '19 at 12:54 I think I can now answer my own question, having come across some decent references I hadn't found before asking it. I found the equation for the gravitational strain $h$ - the proportional change in length of an object due to gravitational waves from a mass $M$: $$h \approx {{GM} \over c^2} \times {1 \over r} \times {v^2 \over c^2}$$ (Source of formula) The first term is of the order of the size of the black hole, or about 45 km for a 30 solar mass ($M_{\odot}$) black hole. Near collision the black holes move close to the speed of light so the last term is $\approx 1$. Then the strain falls off as $1 \over r$, so even if you could sense a brief stretching of 1 part in 10,000 (about 0.2 mm along the length of your body) you would need to be 450,000 km (about 1.9 times the average distance between Earth and the Moon) from two 30 $M_{\odot}$ black holes orbiting each other at near light speed. My takeaway is really just how weak gravitational waves are for the amount of energy that goes into them (for the LIGO 60 $M_{\odot}$ collision about 3 $M_{\odot}$ was converted from mass energy into gravitational waves). For an object orbiting 60 $M_{\odot}$ at that distance the orbital period would be 11.2 minutes. The gravitational tidal acceleration across a body of length d is given by: $$a={{2 G M d} \over {r^3}}$$ which works out as 5.8 micronewtons, so the astronauts would be safe from spaghettification at a range where they could experience noticeable but not intrinsically fatal gravitational waves. At that distance I guess it's still highly likely radiation from accreting matter would be fatal, so my scenario would rely on the black hole pair being located in an almost perfect void, which leads to other questions (how did they end up in such a perfect void, how did the characters end up at just such a perfect distance from them?) (Edited to remove erroneous statement about centripetal acceleration.) $\begingroup$ I was just about to cobble together an answer that said more or less the same thing. There's a nice writeup with some example values here. $\endgroup$ – Starfish Prime Nov 28 '19 at 12:59 $\begingroup$ in orbit you don't fell any acceleration, you're in free fall. $\endgroup$ – ths Nov 28 '19 at 15:36 $\begingroup$ @ths Agreed. You need to work out the tidal effects on the orbiting body to determine whether orbiting at a particular distance would be fatal. $\endgroup$ – notovny Nov 28 '19 at 15:38 $\begingroup$ This answer reminds me of what-if.xkcd.com/73 $\endgroup$ – Borgh Nov 29 '19 at 10:04 When gravitational waves reach Earth, they usually give a strain of $\delta L \over L$$=10^{-21}$. If we assume that they scale with the distance the same way electromagnetic waves do, thus following the inverse square law, we can get an estimate of the distance needed. LIGO detected the first merger of black holes at 1.3 billion light years away. If we would get to 1 light year away from the merger, under the above hypothesis we would get a strain of $10^{-21} \times (1.3 \cdot 10^9)^2=10^{-3}$. This means that on 1 meter length we would notice a 1 mm oscillation, which is something we are able to sense. On the other hand, supernova explosions are lethal well beyond 1 light year, and though extremely powerful they are probably tiny when compared with black hole merger. 
Wrapping up, there is probably a distance at which our body can feel the gravitational waves produced by merging black holes, but that feeling would probably be quickly swept away by a shower of high energy particles, unless neither black hole has an accretion disk. Addendum after Starfish Prime's comment: If instead the scaling goes like $1/r$, then at a distance of 1 light year the strain would be $10^{-21} \times (1.3 \cdot 10^9)=10^{-9}$. Thus too low. To make it back to $10^{-3}$ the observer would need to be at $1/1000$ of a light year, or $9 \cdot 10^9$ km, twice the distance between Neptune and the Sun. $\begingroup$ Turns out that gravity waves don't necessarily behave the way you might think they do. Specifically, the energy they carry does follow an inverse square relationship, but the amplitude (the important bit in this case) is a more simple 1/r thing. Anyway, it turns out that for a 1 part in a 1000 length change, you need to be more like a 10000km away, not a whole lightyear. $\endgroup$ – Starfish Prime Nov 28 '19 at 13:03
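For anyone who wants to re-run the numbers from the self-answer above, here is a rough Python sketch. It is only a back-of-envelope check, not a relativity calculation: the 30 $M_{\odot}$ masses, the near-light-speed orbital velocity ($v \approx c$), and the "just noticeable" strain of $10^{-4}$ over a ~2 m body are all taken from that answer, and the constants are standard SI values. The tidal figure depends on the body length assumed, so it will not exactly match the number quoted there.

```python
import math

# standard constants (SI)
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

M_single = 30 * M_sun      # one black hole, as in the LIGO-like event
M_total = 60 * M_sun       # combined mass, used for the orbit estimate
h_target = 1e-4            # strain you might just feel: ~0.2 mm over a 2 m body

# h ~ (GM/c^2) * (1/r) * (v/c)^2 with v ~ c near merger
gm_over_c2 = G * M_single / c**2            # ~ 44 km, "the size of the black hole"
r = gm_over_c2 / h_target                   # distance at which h ~ h_target
print(f"distance ~ {r/1e3:,.0f} km ({r/3.844e8:.1f} Earth-Moon distances)")

# Keplerian orbital period around the combined mass at that distance
T = 2 * math.pi * math.sqrt(r**3 / (G * M_total))
print(f"orbital period ~ {T/60:.1f} minutes")

# tidal acceleration across a body of length d at that distance
d = 2.0
a_tidal = 2 * G * M_total * d / r**3
print(f"tidal acceleration across {d} m ~ {a_tidal:.1e} m/s^2")
```

Plugging in gives roughly 440,000 km (a bit over one Earth–Moon distance), an orbital period of about 11 minutes, and a tidal acceleration of a few times $10^{-4}$ m/s² — consistent with the conclusion that spaghettification is not the limiting factor at that range.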
CommonCrawl
\begin{definition}[Definition:Metric Space/Triangle Inequality] Let $M = \struct {A, d}$ be a metric space, satisfying the metric space axioms: {{:Axiom:Metric Space Axioms}} Axiom $\text M 2$ is referred to as the '''triangle inequality''', as it is a generalization of the Triangle Inequality which holds on the real number line and complex plane. \end{definition}
ProofWiki
Stabilization of nonlinear systems via aperiodic intermittent stochastic noise driven by G-Brownian motion with application to epidemic models Xiaojing Zhong1, Feiqi Deng2, Bo Zhang3 & Haibin Ouyang1 To stabilize a nonlinear system \(dx(t)=f(t,x(t))\,dt\), we stochastically perturb the deterministic model by using two types of aperiodic intermittent stochastic noise driven by G-Brownian motion. We demonstrate quasi-sure exponential stability for the perturbed system and give the convergence rate, which is related to the control intensity. An application to SIS epidemic model is presented to confirm the theoretical results. Since Khas'minskii [1] used two white noise sources to stabilize a system, a wide range of works have appeared on stochastic stabilization problems. Arnold et al. [2] obtained stabilization results by using noisy terms in Stratonovich sense. Mao [3] presented a general theory on the stabilization by Brownian motion. Huang [4] further developed the general theory by Mao and revealed a more fundamental principle. Zhao et al. [5] established a new type of stability theorem which generalized local Lipschitz and one-sided linear growth conditions. From the considerations of reducing control cost and time, discontinuous controllers have been designed to stabilize a given system, such as discrete-time feedback control [6, 7], pinning control [8], impulsive control [9], adaptive control [10], intermittent control [11], etc. As for intermittent control, the control time is divided into periodic and aperiodic type. Periodically intermittent control has been studied by many authors, especially in synchronization problems. Zhang et al. [11] considered a periodic intermittent Brownian noise perturbation to stabilize and destabilize a given nonlinear system, the obtained criteria are different. Recently, Liu et al. [12] investigated the aperiodically intermittent control which has good performance to quasi-synchronize nonlinear coupled networks [13]. Motivated by the idea of stochastic stabilization via intermittent stochastic noise driven by Brownian motion, we are interested in analyzing whether the presence of intermittent stochastic perturbation driven by G-Brownian motion can stabilize a nonlinear system, since G-Brownian motion has powerful applications in modeling uncertainties. It is necessary to mention the pioneering work by Peng [14] who set up the G-framework. He pointed out that G-Brownian motion has independent increments and can be consistent with the classical Brownian motion in the sense of no volatility uncertainty. Many works have been done on G-Brownian motions [15–20], in particular existence and uniqueness theory for stochastic differential equations driven by G-Brownian motion (G-SDEs), as well as stability behavior and control theory, has been developed. Fei [16] investigated the exponential stability of paths for a G-SDE. Ren [19] designed a feedback control based on discrete-time observations to stabilize a G-SDE system. In [18], the aperiodically intermittent control has been embedded into the drift part, the authors obtained a set of piecewise Lyapunov-type conditions for the moment exponential stability theory. As far as we know, there is hardly any literature about stochastic stabilization of deterministic systems via aperiodic intermittent stochastic perturbation driven by G-Brownian motion. In the present paper, we add two aperiodic intermittent stochastic perturbations driven by G-Brownian motion into a general deterministic nonlinear system. 
Those stochastic perturbations can stabilize the nonlinear system. The main contributions are summarized as follows: The control itself is a stochastic perturbation driven by G-Brownian motion, which contains mean and volatility uncertainties, therefore, expands the general deterministic intermittent control and the stochastic intermittent control which is driven by classical Brownian motion. The control time is aperiodically intermittent, which improves flexibility to time nodes and length. The acquired criteria consist of the work and rest width, we can control the steady rate autonomously by adjusting the work and rest width. In Sect. 2, we establish the aperiodic intermittent stochastically perturbed system (2.2) driven by G-Brownian motion, present four notions, two lemmas, and one definition which will be used in the next section. Stabilization analysis is carried out in Sect. 3. In Sect. 4, we provide an application on stabilizing an SIS epidemic model by adding a special aperiodic intermittent stochastic perturbation driven by G-Brownian motion. This example clearly shows the power of stabilization by aperiodic intermittent stochastic perturbation driven by G-Brownian motion. Consider a nonlinear system $$ dx(t)=f\bigl(t,x(t)\bigr)\,dt,\quad t\geq 0, $$ with initial value \(x(t_{0})=x_{0} \in R^{n}\). We add two aperiodic intermittent stochastic perturbations driven by G-Brownian motion to the nonlinear system, then the system becomes $$ dx(t)=f\bigl(t,x(t)\bigr)\,dt+h\bigl(t,x(t)\bigr)\, d\langle B \rangle (t)+\sigma \bigl(t,x(t)\bigr)\,dB(t), $$ $$\begin{aligned}& h(t,x(t))= \textstyle\begin{cases} h_{1}(t,x(t)),& t \in [t_{i}^{h},t_{i}^{h}+c_{i}^{h}), \\ 0,& t \in [t_{i}^{h}+c_{i}^{h},t_{i+1}^{h}), \end{cases}\displaystyle \end{aligned}$$ $$\begin{aligned}& \sigma (t,x(t))= \textstyle\begin{cases} \sigma _{1}(t,x(t)),& t \in [t_{j}^{\sigma },t_{j}^{\sigma }+c_{j}^{ \sigma }), \\ 0,& t \in [t_{j}^{\sigma }+c_{j}^{\sigma },t_{j+1}^{\sigma }), \end{cases}\displaystyle \end{aligned}$$ with \(i ,j \in N\). Here $$ f,h_{1},\sigma _{1}:[t_{0},\infty )\times R^{n}\rightarrow R^{n}\quad \text{and}\quad f,h_{1},\sigma _{1} \in M_{G}^{2}(0,T). $$ Also \(B(t)\) is a one-dimensional G-Brownian motion with \(G(a)=\frac{1}{2} \hat{\mathbb{E}}[aB_{1}^{2}]=\frac{1}{2}( \bar{\delta }^{2} a^{+}-\underline{\delta }^{2} a^{-})\), where \(\bar{\delta }^{2}=\hat{\mathbb{E}}[B_{1}^{2}]\), \(\underline{\delta }^{2}=- \hat{\mathbb{E}}[-B_{1}^{2}]\); \(\langle B \rangle (t)\) is the quadratic variation process of the G-Brownian motion, which is also a continuous process with independent and stationary distribution, thus can still be regarded as a Brownian motion. Under the perturbation of h type, the time span \([t_{i}^{h},t_{i+1}^{h})\) contains the work time \([t_{i}^{h},t_{i}^{h}+c_{i}^{h})\) and the rest time \([t_{i}^{h}+c_{i}^{h},t_{i+1}^{h})\) as shown in Fig. 1, \(c_{i}^{h}\) denotes the ith h-type noise width. Similarly, the time span \([t_{i}^{\sigma },t_{i+1}^{\sigma })\) contains the work time \([t_{i}^{\sigma },t_{i}^{\sigma }+c_{i}^{\sigma })\) and the rest time \([t_{i}^{ \sigma }+c_{i}^{\sigma },t_{i+1}^{\sigma })\), \(c_{i}^{\sigma }\) denote the ith σ-type noise width. Naturally, those two noise widths satisfy \(0\leq c_{i}^{h} \leq t_{i+1}^{h}- t_{i}^{h}; 0 \leq c_{i}^{\sigma } \leq t_{i+1}^{\sigma }-t_{i}^{\sigma }\). 
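To make the switching structure in (2.3)–(2.4) concrete, the following is a minimal sketch (not taken from the paper) of how an aperiodic work/rest schedule and the corresponding switched coefficients could be generated in a simulation. The random span lengths and duty fractions are illustrative assumptions, chosen only so that \(0\leq c_{i} \leq t_{i+1}-t_{i}\) holds; the empirical ratio \(\sum_i c_i/t\) then plays the role of the perturbation time ratios introduced below.

```python
import numpy as np

def make_windows(t_end, rng, span=(0.8, 1.6), duty=(0.2, 0.8)):
    """Aperiodic work windows [t_i, t_i + c_i): the span length t_{i+1} - t_i and
    the duty fraction c_i / (t_{i+1} - t_i) are drawn at random, which guarantees
    0 <= c_i <= t_{i+1} - t_i."""
    windows, t = [], 0.0
    while t < t_end:
        length = rng.uniform(*span)          # t_{i+1} - t_i
        c = rng.uniform(*duty) * length      # noise width c_i
        windows.append((t, min(t + c, t_end)))
        t += length                          # move on to t_{i+1}
    return windows

def noise_on(t, windows):
    """True if t lies in some work window, i.e. the perturbation is switched on."""
    return any(a <= t < b for a, b in windows)

# Switched coefficients of (2.3)-(2.4), built from the "always-on" parts h1, sigma1.
def h(t, x, windows_h, h1):
    return h1(t, x) if noise_on(t, windows_h) else 0.0

def sigma(t, x, windows_s, sigma1):
    return sigma1(t, x) if noise_on(t, windows_s) else 0.0

rng = np.random.default_rng(0)
wins = make_windows(100.0, rng)
ratio = sum(b - a for a, b in wins) / 100.0   # empirical perturbation time ratio
print(f"empirical time ratio ~ {ratio:.2f}")
```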
For the aperiodically intermittent perturbation strategy, the start time and the noise width might be different, but the total perturbation time ratio should be fixed in the long term. Mathematically, we assume there exist two positive scalars \(\omega _{h}\), \(\omega _{\sigma }\) such that the above time nodes satisfy the following assumptions: $$ \begin{aligned} &\frac{\sum_{i=0}^{n}c_{i}^{h}}{t_{n+1}^{h}-t_{0}}=\omega _{h}, \\ &\frac{\sum_{j=0}^{n}c_{j}^{\sigma }}{t_{n+1}^{\sigma }-t_{0}}=\omega _{ \sigma }. \end{aligned} $$ We call \(\omega _{h}\) the h-type perturbation time ratio and \(\omega _{\sigma }\) the σ-type perturbation time ratio. Sketch of the aperiodically intermittent control strategy Throughout this paper, \(f_{1}\), \(h_{1}\), and \(\sigma _{1}\) satisfy the local Lipschitz condition and one-sided growth condition \(x^{T} f(t,x)+x^{T} h_{1}(t,x)+\frac{1}{2}\sigma _{1}^{2}(t,x)\leq K_{0} \| x \|^{2} \), where \(K_{0} >0\). Clearly, \(h(t,x)\) and \(\sigma (t,x)\) also satisfy the local Lipschits condition and one-sided growth condition. Moreover, we assume \(f(t,0)\equiv 0\), \(h(t,0)\equiv 0\), \(\sigma (t,0)\equiv 0\) for stochastic stability analysis, which guarantees the existence of a trivial solution \(x(t;t_{0},0)\equiv 0\). Letting \(V \in C^{1,2} ([t_{0},\infty )\times R^{n};R^{+})\), we introduce some new notations as follows: $$\begin{aligned}& F(t,x)=\frac{V_{t}(t,x)+V_{x}(t,x)f(t,x)}{V(t,x)}, \\& H_{1}(t,x)=\frac{\sigma ^{T}(t,x)V_{xx}(t,x)\sigma (t,x)}{2V(t,x)}, \\& H_{2}(t,x)=\frac{V_{x}(t,x)h(t,x)}{V(t,x)}, \\& R(t,x)=\frac{[V_{x}(t,x)\sigma (t,x)]^{2}}{V^{2}(t,x)}. \end{aligned}$$ Definition 2.1 The trivial solution of the intermittent G-stochastic system (2.2) in \(R^{n}\) is said to be quasi-sure exponentially stable, if for any \(x_{0}\neq 0\) and \(t\geq t_{0}\), $$ \limsup_{t\rightarrow \infty }\frac{1}{t}\log \bigl\Vert x(t;t_{0},x_{0}) \bigr\Vert < 0 \quad \text{q.s.} $$ Under the conditions imposed above, system (2.2) has a unique global solution \(x(t;t_{0},x_{0})\). The solution obeys $$ P\bigl(x(t;t_{0},x_{0}) \neq 0 \textit{ for } t \geq 0 \bigr)=1, \quad \textit{for all } x_{0} \neq 0. $$ The global existence of a unique solution follows from Theorem 4.5 in Li et al. [21], the nonzero property follows from the same method as in Mao [3] (see Lemma 3.2, p. 120). □ Let \(N(t)\) be G-Ito stochastic integral, \(\tau _{n}\) be a sequence of positive numbers with \(\tau _{n} \rightarrow \infty \). Then for all \(\omega \in \Omega \) there exists a random integer \(n_{0}(\omega )\) such that for all \(n \geq n_{0}\), $$ N(t)\leq \frac{\gamma _{n}}{2}\bigl\langle N(t)\bigr\rangle + \frac{2}{\gamma _{n}}\log (n) \quad \textit{on } t_{0} \leq t \leq \tau _{n}. $$ According to Lemma 2.6 in Fei et al. [16], $$ N(t)\leq \frac{\varepsilon }{2}\bigl\langle N(t)\bigr\rangle + \frac{\theta }{\varepsilon }\log (n), $$ We choose \(\gamma _{n}=\varepsilon \), \(\theta =2\), \(g(n)=n\), and the conclusion of Lemma 2.2 can be obtained naturally. □ Remark 2.1 If \(t_{i+1}-t_{i}=T\), \(c_{i}=\delta \) for all \(i \in N\), and \(\bar{\delta }=\underline{\delta }\), then the system (2.2) becomes a periodic intermittent system. This agrees with system 1 in Zhang et al. [11]. Our results can be regarded as a generalization of Zhang et al. [11]. In this section, we will establish the quasi-sure exponential stability theorem based on aperiodic intermittent stochastic noise driven by G-Brownian motions. 
Since \(x_{0}=0\) implies \(x(t;t_{0},0)=0\), we only need to concentrate on \(x_{0} \neq 0\). (Stabilization theorem) Assume that there exists a function \(V \in C^{1,2} ([t_{0}, \infty )\times R^{n};R^{+})\), and constants \(p>0\), \(c_{1}>0\), \(c_{3} \geq 0\), \(c_{4} \geq 0\), \(c_{5}\geq 0\), \(c_{2}\in R \) such that for \(t \geq t_{0}\), $$\begin{aligned} (\mathrm{i})&\quad c_{1} \Vert x \Vert ^{p} \leq V(t,x), \\ (\mathrm{ii})&\quad V_{t}(t,x)+V_{x}(t,x)f(t,x) \leq c_{2}V(t,x), \\ (\mathrm{iii})&\quad \sigma _{1}^{T}(t,x)V_{xx}(t,x) \sigma _{1}(t,x) \leq c_{3}V(t,x), \\ (\mathrm{iv})&\quad V_{x}(t,x)h_{1}(t,x) \leq c_{4}V(t,x), \\ (\mathrm{v})&\quad \bigl\Vert V_{x}(t,x)\sigma _{1}(t,x) \bigr\Vert ^{2} \geq c_{5}V^{2}(t,x). \end{aligned}$$ Then the solution \(x(t;t_{0},x_{0})\) satisfies $$ \limsup_{t\rightarrow \infty }\frac{1}{t}\log \bigl\Vert x(t;t_{0},x_{0}) \bigr\Vert \leq - \frac{c_{5} \omega _{\sigma } \underline{\delta }^{2}-c_{3} \omega _{h}\bar{\delta }^{2}-2c_{4} \omega _{\sigma } \bar{\delta }^{2}-2c_{2}}{2p}\quad \textit{q.s.} $$ In particular, if \(c_{5} \omega _{\sigma } \underline{\delta }^{2}-c_{3} \omega _{\sigma } \bar{\delta }^{2}-2c_{4} \omega _{h} \bar{\delta }^{2}-2c_{2}>0\), then the solution \(x(t;t_{0},x_{0})\) of system (2.2) is quasi-sure exponentially stable. Fix any \(x_{0}\neq 0\) and write \(x(t;t_{0},x_{0})=x(t)\). By Lemma 2.1, \(x(t)\neq 0\) for all \(t\geq t_{0}\) q.s. Applying Itô's formula, for \(t\geq t_{0}\), we get $$\begin{aligned} \begin{aligned} \log V\bigl(t,x(t)\bigr)={}&\log V(t_{0},x_{0})+ \int _{t_{0}}^{t}F\bigl(s,x(s)\bigr)\,ds + \int _{t_{0}}^{t}H_{1}\bigl(s,x(s) \bigr)\,d\langle B\rangle (s) \\ &{}+ \int _{t_{0}}^{t}H_{2}\bigl(s,x(s) \bigr)\,d\langle B\rangle (s) -\frac{1}{2} \int _{t_{0}}^{t}R\bigl(s,x(s)\bigr)\,d\langle B \rangle (s)+N(t), \end{aligned} \end{aligned}$$ $$ N(t)= \int _{t_{0}}^{t}\frac{V_{x}(s,x(s))\sigma (s,x(s))}{V(s,x(s))}\, d B(s) $$ is a continuous martingale. By Lemma 2.2, taking an arbitrary \(\varepsilon \in (0,1)\), for all \(\omega \in \Omega\) q.s., there exists an integer \(n_{0}(\omega ,P) \) such that if \(n\geq n_{0}\), then $$ N(t)\leq \frac{2}{\varepsilon }\log (n)+\frac{\varepsilon }{2} \int _{t_{0}}^{t}R\bigl(s,x(s)\bigr)\, d \langle B \rangle (s) $$ holds for all \(t_{0}\leq t\leq t_{0}+n\). Substituting this into (3.2), we have $$\begin{aligned} \log V\bigl(t,x(t)\bigr) =&\log V(t_{0},x_{0})+ \int _{t_{0}}^{t}F\bigl(s,x(s)\bigr)\,ds+ \int _{t_{0}}^{t}H_{1}\bigl(s,x(s) \bigr)\,d\langle B\rangle (s) \\ &{}+ \int _{t_{0}}^{t}H_{2}\bigl(s,x(s) \bigr)\,d\langle B\rangle (s)-\frac{1}{2}(1-\varepsilon ) \int _{t_{0}}^{t}R\bigl(s,x(s)\bigr)\, d\langle B \rangle (s)+\frac{2}{\varepsilon }\log (n). \end{aligned}$$ Then we consider t in a different time interval. Obviously, there exist two positive integers \(n_{1}\), \(n_{2}\) such that \(t \in [t_{n_{1}}^{h},t_{n_{1}+1}^{h}] \cap [t_{n_{2}}^{\sigma },t_{n_{2}+1}^{ \sigma }]\). Depending on h- and σ-type noise widths, there are four possible cases which need to be discussed. Case 1. 
For all \(\omega \in \Omega \) and \(n>n_{0}\), \(t\in [t_{n_{1}}^{h},t_{n_{1}}^{h}+c_{n_{1}}^{h}) \cap [t_{n_{2}}^{ \sigma },t_{n_{2}}^{\sigma }+c_{n_{2}}^{\sigma })\), we have $$\begin{aligned} \log V\bigl(t,x(t)\bigr) =&\log V(t_{0},x_{0})+ \int _{t_{0}}^{t}F\bigl(s,x(s)\bigr)\,ds \\ & {}+ \int _{t_{0}}^{t_{0}^{\sigma }+c_{0}^{\sigma }}H_{1}\bigl(s,x(s) \bigr)\, d\langle B \rangle (s)+ \int _{t_{0}^{\sigma }+c_{0}^{\sigma }}^{t_{1}^{\sigma }}H_{1}\bigl(s,x(s) \bigr)\, d \langle B\rangle (s) \\ &{}+\cdots + \int _{t_{n_{2}}^{\sigma }}^{t}H_{1}\bigl(s,x(s) \bigr)\, d \langle B\rangle (s) \\ & {}+ \int _{t_{0}}^{t_{0}^{h}+c_{0}^{h}}H_{2}\bigl(s,x(s) \bigr)\,d\langle B\rangle (s)+ \int _{t_{0}^{h}+c_{0}^{h}}^{t_{1}^{h}}H_{2}\bigl(s,x(s) \bigr)\,d\langle B \rangle (s) \\ &{}+\cdots + \int _{t_{n_{1}}^{h}}^{t}H_{2}\bigl(s,x(s) \bigr)\,d\langle B\rangle (s) \\ & {}-\frac{1}{2}(1-\varepsilon ) \biggl[ \int _{t_{0}}^{t_{0}^{\sigma }+c_{0}^{ \sigma }}R\bigl(s,x(s)\bigr)\,d\langle B \rangle (s)+ \int _{t_{0}^{\sigma }+c_{0}^{ \sigma }}^{t_{1}^{\sigma }}R\bigl(s,x(s)\bigr)\,d\langle B \rangle (s) \\ & {}+\cdots + \int _{t_{n_{2}}^{\sigma }}^{t}R\bigl(s,x(s)\bigr)\,d\langle B \rangle (s) \biggr]+\frac{2}{\varepsilon }\log (n). \end{aligned}$$ Substituting conditions (ii), (iii), (iv), and (v) into the above equation, we obtain $$\begin{aligned} \log V\bigl(t,x(t)\bigr) =&\log V(t_{0},x_{0})+c_{2} (t-t_{0})+\frac{1}{2}c_{3} \bar{\delta }^{2}\bigl[c_{0}^{\sigma }+0+\cdots + \bigl(t-t_{n_{2}}^{\sigma }\bigr)\bigr] \\ &{}+c_{4} \bar{\delta }^{2}\bigl[c_{0}^{h}+0+ \cdots +\bigl(t-t_{n_{1}}^{h}\bigr)\bigr] \\ &{}- \frac{1}{2}(1-\varepsilon )c_{5} \underline{\delta }^{2} \bigl[c_{0}^{ \sigma }+0+\cdots + \bigl(t-t_{n_{2}}^{\sigma }\bigr)\bigr]+\frac{2}{\varepsilon }\log (n) \\ \leq &\log V(t_{0},x_{0})+c_{2} (t-t_{0})+\frac{2}{\varepsilon } \log (n) \\ &{}+\frac{1}{2} c_{3} \bar{\delta }^{2} \sum _{i=0}^{n_{2}} c_{i}^{ \sigma }+c_{4} \bar{\delta }^{2}\sum_{i=0}^{n_{1}} c_{i}^{h}- \frac{1}{2}(1-\varepsilon )c_{5} \underline{\delta }^{2}\sum _{i=0}^{n_{2}-1} c_{i}^{\sigma }, \end{aligned}$$ which implies that $$\begin{aligned} \frac{1}{t}\log V\bigl(t,x(t)\bigr) \leq& \frac{1}{t} \biggl[ \log V(t_{0},x_{0})+c_{2} (t-t_{0})+\frac{2}{\varepsilon }\log (n) \biggr] \\ &{}+ \frac{c_{3} \bar{\delta }^{2} \sum_{i=0}^{n_{2}} c_{i}^{\sigma }}{2t_{n_{2}}^{\sigma }} + \frac{c_{4} \bar{\delta }^{2}\sum_{i=0}^{n_{1}} c_{i}^{h}}{2t_{n_{1}}^{h}} - \frac{(1-\varepsilon )c_{5} \underline{\delta }^{2}\sum_{i=0}^{n_{2}-1} c_{i}^{\sigma }}{t_{n_{2}+1}}. \end{aligned}$$ By Eq. (2.5), we deduce $$ \limsup_{t\rightarrow \infty }\frac{1}{t}\log V\bigl(t,x(t)\bigr) \leq c_{2}+ \frac{c_{3} \omega _{\sigma }\bar{\delta }^{2} }{2} +c_{4} \omega _{h} \bar{\delta }^{2}-\frac{1}{2}(1- \varepsilon )c_{5} \omega _{\sigma } \underline{\delta }^{2}. $$ Using condition (i) and letting \(\varepsilon \rightarrow 0\), it follows that $$ \limsup_{t\rightarrow \infty }\frac{1}{t}\log \bigl\Vert x(t,t_{0},x_{0}) \bigr\Vert \leq - \frac{c_{5} \omega _{\sigma } \underline{\delta }^{2}-c_{3} \omega _{h}\bar{\delta }^{2}-2c_{4} \omega _{\sigma } \bar{\delta }^{2}-2c_{2}}{2p} \quad \text{q.s.} $$ Case 2. For all \(\omega \in \Omega \) and \(n>n_{0}\), \(t\in [t_{n_{1}}^{h},t_{n_{1}}^{h}+c_{n_{1}}^{h}) \cap [t_{n_{2}}^{ \sigma }+c_{n_{2}}^{\sigma },t_{n_{2}+1}^{\sigma })\), the integral interval length of \(\sigma (t,x(t))\) has changed compared to Case 1. 
Hence we have $$\begin{aligned} \log V\bigl(t,x(t)\bigr) =&\log V(t_{0},x_{0})+ \int _{t_{0}}^{t}F\bigl(s,x(s)\bigr)\,ds \\ &{}+ \int _{t_{0}}^{t_{0}^{\sigma }+c_{0}^{h}}H_{1}\bigl(s,x(s) \bigr)\,d\langle B \rangle (s) + \int _{t_{0}^{h}+c_{0}^{\sigma }}^{t_{1}^{\sigma }}H_{1}\bigl(s,x(s) \bigr)\,d\langle B\rangle (s) \\ &{}+\cdots + \int _{t_{n_{2}}^{\sigma }}^{t_{n_{2}}^{\sigma }+c_{n_{2}}^{ \sigma }}H_{1}\bigl(s,x(s) \bigr)\,d\langle B\rangle (s)+ \int _{t_{n_{2}}^{\sigma }+c_{n_{2}}^{ \sigma }}^{t}H_{1}\bigl(s,x(s) \bigr)\,d\langle B\rangle (s) \\ &{}+ \int _{t_{0}}^{t_{0}^{h}+c_{0}^{h}}H_{2}\bigl(s,x(s) \bigr)\,d\langle B\rangle (s)+ \int _{t_{0}^{h}+c_{0}^{h}}^{t_{1}^{\sigma }}H_{2}\bigl(s,x(s) \bigr)\,d\langle B \rangle (s) \\ &{}+\cdots + \int _{t_{n_{1}}^{h}}^{t}H_{2}\bigl(s,x(s) \bigr)\,d\langle B \rangle (s) \\ &{}-\frac{1}{2}(1-\varepsilon ) \biggl[ \int _{t_{0}}^{t_{0}^{\sigma }+c_{0}^{ \sigma }}R\bigl(s,x(s)\bigr)\,d\langle B \rangle (s)+ \int _{t_{0}^{\sigma }+c_{0}^{ \sigma }}^{t_{1}^{\sigma }} R\bigl(s,x(s)\bigr)\,d\langle B \rangle (s) \\ &{}+\cdots + \int _{t_{n_{2}}^{\sigma }}^{t_{n_{2}}^{\sigma }+c_{n_{2}}^{ \sigma }}R\bigl(s,x(s)\bigr)\,d\langle B \rangle (s) \\ &{}+ \int _{t_{n_{2}}^{\sigma }+c_{n_{2}}^{ \sigma }}^{t}R\bigl(s,x(s)\bigr)\,d\langle B \rangle (s) \biggr]+ \frac{2}{\varepsilon }\log (n). \end{aligned}$$ By conditions (ii), (iii), (iv), and (v), we obtain $$\begin{aligned} \log V\bigl(t,x(t)\bigr) =&\log V(t_{0},x_{0})+c_{2} (t-t_{0})+\frac{1}{2}c_{3} \bar{\delta }^{2}\bigl[c_{0}^{\sigma }+0+\cdots +c_{n_{2}}^{ \sigma }\bigr] \\ &{}+c_{4} \bar{\delta }^{2}\bigl[c_{0}^{h}+0+\cdots + \bigl(t-t_{n_{1}}^{h}\bigr)\bigr] \\ &{}-\frac{1}{2}(1-\varepsilon )c_{5} \underline{\delta }^{2} \bigl[c_{0}^{ \sigma }+0+\cdots + \bigl(t-t_{n_{1}}^{h}\bigr)\bigr]+ \frac{2}{\varepsilon }\log (n) \\ \leq &\log V(t_{0},x_{0})+c_{2} (t-t_{0})+\frac{2}{\varepsilon } \log (n) \\ &{}+\frac{1}{2} c_{3} \bar{\delta }^{2} \sum _{i=0}^{n_{2}} c_{i}^{ \sigma }+c_{4} \bar{\delta }^{2}\sum_{i=0}^{n_{1}} c_{i}^{h}- \frac{1}{2}(1-\varepsilon )c_{5} \underline{\delta }^{2}\sum _{i=0}^{n_{2}} c_{i}^{\sigma }. \end{aligned}$$ Using the same method as in Case 1, we conclude Case 3. For all \(\omega \in \Omega \) and \(n>n_{0}\), \(t\in [t_{n_{1}}^{h}+c_{n_{1}}^{h},t_{n_{1}+1}^{h}) \cap [t_{n_{2}}^{ \sigma },t_{n_{2}}^{\sigma }+c_{n_{2}}^{\sigma })\) for all \(\omega \in \Omega \) and \(n>n_{0}\). This case is similar to Case 1 except for the additional time interval \([t_{n_{1}}^{h}+c_{n_{1}}^{h},t_{n_{1}+1}^{h})\) of \(h(t,x(t))\). Since \(h(t,x(t))=0\), \(t \in [t_{n_{1}}^{h}+c_{n_{1}}^{h},t_{n_{1}+1}^{h})\), \(\log V(t,x(t))\) can be written as $$\begin{aligned} \log V\bigl(t,x(t)\bigr) =&\log V(t_{0},x_{0})+c_{2} (t-t_{0})+\frac{1}{2}c_{3} \bar{\delta }^{2}\bigl[c_{0}^{\sigma }+0+\cdots + \bigl(t-t_{n_{2}}^{ \sigma }\bigr)\bigr] \\ &{}+c_{4} \bar{\delta }^{2}\bigl[c_{0}^{h}+0+ \cdots +c_{n_{1}}^{h}\bigr]- \frac{1}{2}(1- \varepsilon )c_{5} \underline{\delta }^{2} \bigl[c_{0}^{ \sigma }+0+\cdots +\bigl(t-t_{n_{2}}^{\sigma } \bigr)\bigr] \\ &{}+ \frac{2}{\varepsilon }\log (n) \\ \leq &\log V(t_{0},x_{0})+c_{2} (t-t_{0})+\frac{2}{\varepsilon } \log (n) \\ &{}+\frac{1}{2} c_{3} \bar{\delta }^{2} \sum _{i=0}^{n_{2}} c_{i}^{ \sigma }+c_{4} \bar{\delta }^{2}\sum_{i=0}^{n_{1}} c_{i}^{h}- \frac{1}{2}(1-\varepsilon )c_{5} \underline{\delta }^{2}\sum _{i=0}^{n_{2}-1} c_{i}^{\sigma }. 
\end{aligned}$$ Together with conditions (i)–(v), it follows that $$ \limsup_{t\rightarrow \infty }\frac{1}{t}\log \bigl\Vert x(t,t_{0},x_{0}) \bigr\Vert \leq - \frac{c_{5}(1-\varepsilon ) \omega _{\sigma } \underline{\delta }^{2}-c_{3} \omega _{h}\bar{\delta }^{2}-2c_{4} \omega _{\sigma } \bar{\delta }^{2}-2c_{2}}{2p}\quad \text{q.s.} $$ As \(\varepsilon \rightarrow 0\), the following inequality holds: Case 4. For all \(\omega \in \Omega \) and \(n>n_{0}\), \(t\in [t_{n_{1}}^{h}+c_{n_{1}}^{h},t_{n_{1}+1}^{h}) \cup [t_{n_{2}}^{ \sigma }+c_{n_{2}}^{\sigma },t_{n_{2}+1}^{\sigma })\). This case is similar to Case 2 except for the time interval of \(h(t,x(t))\). This time \(\log V(t,x(t))\) can be divided into $$\begin{aligned} \log V\bigl(t,x(t)\bigr) =&\log V(t_{0},x_{0})+ \int _{t_{0}}^{t}F\bigl(s,x(s)\bigr)\,ds \\ &{}+ \int _{t_{0}}^{t_{0}^{\sigma }+c_{0}^{h}}H_{1}\bigl(s,x(s) \bigr)\,d\langle B \rangle (s)+ \int _{t_{0}^{h}+c_{0}^{\sigma }}^{t_{1}^{\sigma }}H_{1}\bigl(s,x(s) \bigr)\,d\langle B\rangle (s) \\ &{}+\cdots + \int _{t_{n_{2}}^{\sigma }}^{t_{n_{2}}^{\sigma }+c_{n_{2}}^{ \sigma }}H_{1}\bigl(s,x(s) \bigr)\,d\langle B\rangle (s) + \int _{t_{n_{2}}^{\sigma }+c_{n_{2}}^{ \sigma }}^{t}H_{1}\bigl(s,x(s) \bigr)\,d\langle B\rangle (s) \\ &{}+ \int _{t_{0}}^{t_{0}^{h}+c_{0}^{h}}H_{2}\bigl(s,x(s) \bigr)\,d\langle B\rangle (s)+ \int _{t_{0}^{h}+c_{0}^{h}}^{t_{1}^{h}}H_{2}\bigl(s,x(s) \bigr)\,d\langle B \rangle (s) \\ &{}+\cdots + \int _{t_{n_{1}}^{h}}^{t_{n_{1}}^{h}+c_{n_{1}}^{h}}H_{2}\bigl(s,x(s) \bigr)\,d\langle B\rangle (s)+ \int _{t_{n_{1}}^{h}+c_{n_{1}}^{h}}^{t}H_{2}\bigl(s,x(s) \bigr)\,d\langle B\rangle (s) \\ &{}-\frac{1}{2}(1-\varepsilon ) \biggl[ \int _{t_{0}}^{t_{0}^{\sigma }+c_{0}^{ \sigma }}R\bigl(s,x(s)\bigr)\,d\langle B \rangle (s)+ \int _{t_{0}^{\sigma }+c_{0}^{ \sigma }}^{t_{1}^{\sigma }}R\bigl(s,x(s)\bigr)\,d\langle B \rangle (s) \\ &{}+\cdots + \int _{t_{n_{2}}^{\sigma }}^{t_{n_{2}}^{\sigma }+c_{n_{2}}^{ \sigma }}R\bigl(s,x(s)\bigr)\,d\langle B \rangle (s) \\ &{}+ \int _{t_{n_{2}}^{\sigma }+c_{n_{2}}^{ \sigma }}^{t}R\bigl(s,x(s)\bigr)\,d\langle B \rangle (s) \biggr] + \frac{2}{\varepsilon }\log (n). \end{aligned}$$ The latter implies that $$\begin{aligned} \log V\bigl(t,x(t)\bigr) =&\log V(t_{0},x_{0})+c_{2} (t-t_{0}) \\ &{}+\frac{1}{2}c_{3} \bar{\delta }^{2} \bigl[c_{0}^{\sigma }+0+\cdots +c_{n_{2}}^{ \sigma }+0 \bigr] \\ &{}+c_{4} \bar{\delta }^{2}\bigl[c_{0}^{h}+0+ \cdots +c_{n_{1}}^{h}+0\bigr] \\ &{}-\frac{1}{2}(1-\varepsilon )c_{5} \underline{\delta }^{2} \bigl[c_{0}^{ \sigma }+0+\cdots +c_{n_{2}}^{\sigma }+0\bigr]+\frac{2}{\varepsilon }\log (n) \\ =&\log V(t_{0},x_{0})+c_{2} (t-t_{0})+\frac{2}{\varepsilon }\log (n) \\ &{}+\frac{1}{2} c_{3} \bar{\delta }^{2} \sum _{i=0}^{n_{2}} c_{i}^{ \sigma }+c_{4} \bar{\delta }^{2}\sum_{i=0}^{n_{1}} c_{i}^{h}- \frac{1}{2}(1-\varepsilon )c_{5} \underline{\delta }^{2}\sum _{i=0}^{n_{2}} c_{i}^{\sigma }. 
\end{aligned}$$ Thus we claim $$ \limsup_{t\rightarrow \infty }\frac{1}{t}\log \bigl\Vert x(t,t_{0},x_{0}) \bigr\Vert \leq - \frac{c_{5} \omega _{\sigma } \underline{\delta }^{2}-c_{3} \omega _{h}\bar{\delta }^{2}-2c_{4} \omega _{\sigma } \bar{\delta }^{2}-2c_{2}}{2p}\quad \text{q.s.} $$ From the above four cases, for all \(\omega \in \Omega \) and \(t \in [t_{n_{1}}^{h},t_{n_{1}+1}^{h}]\cap [t_{n_{2}}^{\sigma },t_{n_{2}+1}^{ \sigma }]\), the following inequality always holds: $$ \limsup_{t\rightarrow \infty }\frac{1}{t}\log \bigl\Vert x(t,t_{0},x_{0}) \bigr\Vert \leq - \frac{c_{5} \omega _{1} \underline{\delta }^{2}-c_{3} \omega _{1}\bar{\delta }^{2}-2c_{4} \omega _{2}\bar{\delta }^{2}-2c_{2}}{2p}\quad \text{q.s.} $$ The proof is complete. □ If \(V(t,x)=\|x\|^{2}\), conditions (i)–(v) in Theorem 3.1 become: (i) \(x^{T} f(t, x) \leq s_{1}\|x\|^{2}\); (ii) \(\|\sigma _{1}(t, x) \| \leq s_{2}\|x\|\); (iii) \(x^{T} h_{1}(t, x) \leq s_{3}\|x\|^{2}\); and (iv) \(\|x^{T}\sigma _{1}(t,x)\| \geq s_{4}\|x\|\). Then \(x(t; t_{0},x_{0})\) satisfies $$ \limsup_{t\rightarrow \infty }\frac{1}{t}\log \bigl\Vert x(t; t_{0},x_{0}) \bigr\Vert \leq -\bigl(s_{4}^{2} \omega _{1} \underline{\delta }^{2}-0.5 s_{2} ^{2} \omega _{1}\bar{\delta }^{2}-s_{3} \omega _{2} \bar{\delta }^{2}-s_{1}\bigr)\quad \text{q.s.} $$ In particular, if \(s_{4}^{2} \omega _{1} \underline{\delta }^{2}-0.5 s_{2} ^{2}\omega _{1} \bar{\delta }^{2}-s_{3} \omega _{2} \bar{\delta }^{2}-s_{1}>0\), the solution \(x(t;t_{0},x_{0})\) of system (2.2) is quasi-surely exponentially stable. If \(h_{1}(t,x)=0\), \(\sigma _{1}(t,x)=g_{1}(t,x)\), and \(\bar{\delta }=\underline{\delta }=1\), system (2.2) becomes an intermittently stochastically perturbed system driven by Brownian motion. More specifically, if \(t_{j+1}^{\sigma }-t_{j}^{\sigma }=T\) and \(c_{j}^{\sigma }=\delta \) for all \(j \in N\), system (2.2) becomes a periodic intermittent system. Equation (3.1) becomes $$ \limsup_{t\rightarrow \infty } \frac{1}{t}\log \bigl\Vert x(t; t_{0},x_{0}) \bigr\Vert \leq -\frac{(c_{5} -c_{3} ) \frac{\delta }{T}-2c_{2}}{2p}\quad \text{a.s.} $$ This agrees with Theorem 1 in Zhang et al. [11]. Our results can be regarded as a generalization of Zhang et al. [11]. According to Eq. (3.1), the Lyapunov exponent of \(x(t;t_{0},x_{0})\) depends on the perturbation time ratios \(\omega _{h}\), \(\omega _{\sigma }\) and the volatility uncertainties \(\underline{\delta }\), \(\bar{\delta }\). Thus the h-type perturbation's time ratio and the volatility uncertainty \(\underline{\delta }\) can speed up exponential convergence, if the control strategy is designed based on our theoretical results. Application to an epidemic system In this section, we study an application of our theoretical results in Sect. 3 to the SIS epidemic model. A classical deterministic SIS epidemic model partitions the host population into the susceptible compartment S and the infectious compartment I. Ordinary differential equations (ODEs) that describe the change of size in compartments S and I can be written as $$ \begin{aligned} &\frac{{d}S}{{d}t}=A-d S+\mu I - \beta SI, \\ &\frac{{d}I}{{d}t}=\beta SI-d I-\mu I. \end{aligned} $$ Since \(S,I \geq 0\) and \(S+I=\frac{A}{d}\), the above two ODEs can be rewritten as $$ \frac{{d}I}{{d}t}=\beta \biggl( \frac{A}{d}-I\biggr)I-d I-\mu I. $$ The dynamics of the SIS epidemic model is completely determined by the basic reproduction number $$ R_{0} =\frac{\beta A}{d(d+\mu )}.
$$ If \(R_{0} \leq 1\), the disease-free equilibrium \(P_{0} = (\frac{A}{d},0,0)\) is globally asymptotically stable and the disease always dies out; if \(R_{0} > 1\), then \(P_{0}\) is unstable and an endemic equilibrium exists, which means the disease will persist. Now, we aim to control the number of infectious individuals even if \(R_{0}>1\). Adding two aperiodic intermittent stochastic perturbations \(h SI \, {d}\langle B\rangle (t)\), \(\sigma I \,{d}B(t)\) to the SIS epidemic model, it becomes $$ dI(t)= \biggl[ \beta \biggl(\frac{A}{d}-I\biggr)I-d I-\mu I \biggr]\,dt+h\bigl(t,I(t)\bigr)\, d \langle B\rangle (t)+\sigma \bigl(t,I(t)\bigr)\,dB(t), $$ $$\begin{aligned}& h\bigl(t,x(t)\bigr)= \textstyle\begin{cases} h SI,& t \in [t_{i}^{h},t_{i}^{h}+c_{i}^{h}), \\ 0,& t \in [t_{i}^{h}+c_{i}^{h},t_{i+1}^{h}), \end{cases}\displaystyle \\& \sigma \bigl(t,x(t)\bigr)= \textstyle\begin{cases} \sigma I,& t \in [t_{j}^{\sigma },t_{j}^{\sigma }+c_{j}^{\sigma }), \\ 0,& t \in [t_{j}^{\sigma }+c_{j}^{\sigma },t_{j+1}^{\sigma }), \end{cases}\displaystyle \end{aligned}$$ with \(i ,j \in N\). Letting \(V(t,I)=I\) and verifying the conditions in Theorem 3.1, we obtain $$\begin{aligned}& V(t,I)=I\geq \Vert I \Vert ^{1}; \\& V_{I}(t,I)f(t,I)=\beta \biggl(\frac{A}{d}-I\biggr)I-d I-\mu I\leq \biggl( \frac{\beta A}{d} -d-\mu \biggr)I=\biggl( \frac{\beta A}{d} -d-\mu \biggr)V(t,I); \\& \sigma _{1}^{T}(t,I)V_{II}(t,I)\sigma _{1}(t,I)=0\leq 0; \\& V_{I}(t,I)h_{1}(t,I)=h SI \leq \frac{hA}{d}I \leq \frac{hA}{d}V(t,I); \\& \bigl\Vert V_{I}(t,I)\sigma _{1}(t,I) \bigr\Vert ^{2}=\sigma ^{2}I^{2} \geq \sigma ^{2} V^{2}(t,I). \end{aligned}$$ Comparing with conditions (ii)–(v), we obtain \(p=1\), \(c_{1}=1\), \(c_{2}=\frac{\beta A}{d} -d-\mu \), \(c_{3}=0\), \(c_{4}= \frac{hA}{d}\), \(c_{5}=\sigma ^{2}\). Thus the infectious part of the population \(I(t)\) satisfies $$ \limsup_{t\rightarrow \infty }\frac{1}{t}\log \bigl\Vert I(t) \bigr\Vert \leq \frac{\beta A}{d} -d-\mu + \frac{hA\omega _{2} \bar{\delta }^{2}}{d}- \frac{ \omega _{1} \sigma ^{2}\underline{\delta }^{2}}{2}\quad \text{q.s.} $$ If \(R_{0}>1\), which means \(\frac{\beta A}{d} -d-\mu >0\), the Lyapunov exponent of \(I(t)\) can still be made negative by adjusting the perturbation parameters \(\omega _{1}\), \(\sigma \), \(\underline{\delta }\). This implies that the disease can be stabilized by intermittent stochastic perturbation. Let us provide a numerical example for the stochastically perturbed SIS epidemic model (4.2) to substantiate the analytic findings. For system (4.2), setting \(A=100\), \(\beta =0.0002\), \(d=0.1\), \(\mu =0.05\) and \(h=0.1\), \(\sigma =0.5\), \(\bar{\delta }=2\), \(\underline{\delta }=1\), \(\omega _{2}=0.1\), we can calculate \(R_{0}=\frac{4}{3}>1\); the endemic equilibrium \(E^{*}\) is \((750,250)\), which means \(I(t)\) tends to 250 and the disease will persist. To stabilize the deterministic SIS epidemic model (4.1), we choose different perturbation intensities \(\omega _{1}\) to compare the stabilization effects. Figure 2 shows clearly that the bigger the perturbation intensity \(\omega _{1}\), the faster the stabilization. [Figure: a single path of the solution]
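For readers who wish to reproduce the arithmetic of this example, the following short Python sketch (not part of the original paper) evaluates the quantities used above: the basic reproduction number, the endemic level of \(I\), and the right-hand side of the quasi-sure Lyapunov exponent bound for \(I(t)\). It also simulates one sample path of the intermittently perturbed model with a crude Euler–Maruyama step, using a classical Brownian motion in place of the G-Brownian motion (so \(d\langle B\rangle (t)\) is simply \(dt\), i.e. the special case \(\bar{\delta }=\underline{\delta }=1\)); the periodic on/off switching pattern, the chosen value of \(\omega _{1}\), the step size and the horizon are illustrative assumptions, not values taken from the paper.

import numpy as np

# Parameters of the numerical example in the text
A, beta, d, mu = 100.0, 0.0002, 0.1, 0.05
h, sigma = 0.1, 0.5
delta_bar, delta_under = 2.0, 1.0
omega1, omega2 = 0.6, 0.1   # omega1 is varied in the paper's Figure 2; 0.6 is an assumed value

R0 = beta * A / (d * (d + mu))           # = 4/3, as stated in the text
I_star = (A / d) * (1.0 - 1.0 / R0)      # endemic level of I, = 250
bound = (beta * A / d - d - mu
         + h * A * omega2 * delta_bar**2 / d
         - omega1 * sigma**2 * delta_under**2 / 2.0)
print(R0, I_star, bound)                 # a negative bound certifies quasi-sure stabilization

# One Euler-Maruyama sample path; the on/off windows are taken periodic for simplicity
# (the theory allows aperiodic windows) and a classical Brownian motion stands in for B.
T, dt, period = 50.0, 1e-3, 1.0
rng = np.random.default_rng(0)
I = I_star
for k in range(int(T / dt)):
    t = k * dt
    on_sigma = (t % period) < omega1 * period   # sigma-type window, time ratio omega1
    on_h = (t % period) < omega2 * period       # h-type window, time ratio omega2
    S = A / d - I
    dI = (beta * S * I - (d + mu) * I) * dt
    if on_h:
        dI += h * S * I * dt                    # d<B>(t) reduces to dt in this sketch
    if on_sigma:
        dI += sigma * I * rng.normal(0.0, np.sqrt(dt))
    I = max(I + dI, 0.0)
print("I(T) along this path:", I)

The identification of \(\omega _{1}\) with the \(\sigma \)-type window and \(\omega _{2}\) with the h-type window in this sketch follows the exponent bound displayed above.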
Conclusion In this paper, stochastic stabilization of a nonlinear system via aperiodic intermittent stochastic perturbation driven by G-Brownian motion has been investigated. We have derived sufficient conditions for quasi-sure exponential stability of the perturbed system (2.2); the criterion involves the intermittent control strength. As an application, we have designed two special aperiodic intermittent stochastic perturbations for a deterministic SIS epidemic model, which would stabilize the epidemic system even though \(R_{0}>1\). Generally, we conclude that an aperiodic intermittent stochastic perturbation driven by G-Brownian motion can stabilize a nonlinear system. Some interesting topics deserve further investigation. It is also interesting to consider the case where the random perturbation is a real noise and the control time is random. We leave these questions for future research and look forward to solving them in the near future. The datasets used or analyzed during the current study are available from the corresponding author on reasonable request. References Khasminskii, R.: Stochastic Stability of Differential Equations. Sijthoff & Noordhoff, Rockville (1980) Arnold, L., Crauel, H., Wihstutz, V.: Stabilization of linear systems by noise. SIAM J. Control Optim. 21, 451–461 (1983) Mao, X.R.: Stochastic Differential Equations and Applications, 2nd edn. Woodhead Publishing, Cambridge (2008) Huang, L.R.: Stochastic stabilization and destabilization of nonlinear differential equations. Syst. Control Lett. 62(2), 163–169 (2013) Zhao, X.Y., Deng, F.Q.: A new type of stability theorem for stochastic systems with application to stochastic stabilization. IEEE Trans. Autom. Control 61(1), 240–245 (2016) Deng, F.Q., Luo, Q., Mao, X.R.: Stochastic stabilization of hybrid differential equations. Automatica 48, 2321–2328 (2012) Song, G.F., Lu, E.Y., Zheng, B.C., Mao, X.R.: Almost sure stabilization of hybrid systems by feedback control based on discrete-time observations of mode and state. Sci. China Inf. Sci. 61, 1–16 (2018) Suarez, O.J., Vega, C.J., Sanchez, E.N., et al.: Neural sliding-mode pinning control for output synchronization for uncertain general complex networks. Automatica 112, 108694 (2020) Cheng, P., Deng, F.Q., Yao, F.Q.: Almost sure exponential stability and stochastic stabilization of stochastic differential systems with impulsive effects. Nonlinear Anal. Hybrid Syst. 30, 106–117 (2018) Liu, Y.J., Lu, S., Li, D.: Adaptive controller design-based ABLF for a class of nonlinear time-varying state constraint systems. IEEE Trans. Syst. Man Cybern. Syst. 47(7), 1546–1553 (2017) Zhang, B., Deng, F.Q., Peng, S.G., Xie, S.L.: Stabilization and destabilization of nonlinear systems via intermittent stochastic noise with application to memristor-based system. J. Franklin Inst. 355, 3829–3852 (2018) Liu, X., Chen, T.: Synchronization of complex networks via aperiodically intermittent pinning control. IEEE Trans. Autom. Control 60(2), 3316–3321 (2015) Liu, L., Perc, M., Cao, J.D.: Aperiodically intermittent stochastic stabilization via discrete time or delay feedback control. Sci. China Inf. Sci. 62(10), 1–13 (2019) Peng, S.G.: Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation. Stoch. Process. Appl. 118(12), 2223–2253 (2008) Zhang, D.F., Chen, Z.: Exponential stability for stochastic differential equation driven by G-Brownian motion. Appl. Math. Lett. 25(11), 1906–1910 (2012) Fei, W.Y., Fei, C.: On exponential stability for stochastic differential equations disturbed by G-Brownian motion. Mathematics, 1–19 (2013) Deng, S.N., Fei, C., Fei, W.Y., Mao, X.R.: Stability equivalence between the stochastic differential delay equations driven by G-Brownian motion and the Euler–Maruyama method. Appl. Math. Lett.
96, 138–146 (2019) Yang, H.J., Ren, Y., Lu, W.: Stabilisation of stochastic differential equations driven by G-Brownian motion via aperiodically intermittent control. Int. J. Control 10, 179–189 (2018) Ren, Y., Yin, W.S., Sakthivel, R.: Stabilization of stochastic differential equations driven by G-Brownian motion with feedback control based on discrete-time state observation. Automatica 95, 146–151 (2018) Ren, Y., Yin, W.S.: Quasi sure exponential stabilization of nonlinear systems via intermittent Brownian motion. Discrete Contin. Dyn. Syst., Ser. B 110(10), 1–13 (2019) Li, X., Lin, X., Lin, Y.: Lyapunov-type conditions and stochastic differential equations driven by G-Brownian motion. J. Math. Anal. Appl. 439, 235–255 (2016) This work is supported by the Youth project of Guangzhou Education Bureau under Grant 1201630502 and Research Fund for Guangzhou University under Grant YG2020010. This work is supported by the Youth project of Guangzhou Education Bureau under Grant 1201630502, Research Fund for Guangzhou University under Grant YG2020010, the National Natural Science Foundation of China under Grant 61803094. School of Mechanical and Electrical Engineering, Guangzhou University, Guangzhou, Guangdong, 510006, China Xiaojing Zhong & Haibin Ouyang School of Automation Science and Engineering, South China University of Technology, Guangzhou, 510640, China Feiqi Deng School of Automation, Guangdong University of Technology, Guangzhou, 510006, China Xiaojing Zhong Haibin Ouyang The control itself is a stochastic perturbation driven by G-Brownian motion, which contains mean and volatility uncertainties, therefore, expands the general deterministic intermittent control and the stochastic intermittent control which is driven by classical Brownian motion. The control time is aperiodically intermittent, which improves flexibility to time nodes and length. The acquired criteria consist of the work and rest widths, we can control the steady rate autonomously by adjusting the work and rest widths. All authors read and approved the final manuscript. Correspondence to Haibin Ouyang. Zhong, X., Deng, F., Zhang, B. et al. Stabilization of nonlinear systems via aperiodic intermittent stochastic noise driven by G-Brownian motion with application to epidemic models. Adv Differ Equ 2020, 699 (2020). https://doi.org/10.1186/s13662-020-03120-y DOI: https://doi.org/10.1186/s13662-020-03120-y Stochastic differential equations Integro-differential equations Split-step theta method Mean square exponential stability
CommonCrawl
We compute the entropy of entanglement in the ground states of a general class of quantum spin-chain Hamiltonians --- those that are related to quadratic forms of Fermi operators --- between the first $N$ spins and the rest of the system in the limit of infinite total chain length. We show that the entropy can be expressed in terms of averages over the classical compact groups and establish an explicit correspondence between the symmetries of a given Hamiltonian and those characterizing the Haar measure of the associated group. These averages are either Toeplitz determinants or determinants of combinations of Toeplitz and Hankel matrices. Recent generalizations of the Fisher-Hartwig conjecture are used to compute the leading order asymptotics of the entropy as $N\rightarrow\infty$. This is shown to grow logarithmically with $N$. The constant of proportionality is determined explicitly, as is the next (constant) term in the asymptotic expansion. The logarithmic growth of the entropy was previously predicted on the basis of numerical computations and conformal-field-theoretic calculations. In these calculations the constant of proportionality was determined in terms of the central charge of the Virasoro algebra. Our results therefore lead to an explicit formula for this charge. We also show that the entropy is related to solutions of ordinary differential equations of Painlev\'e type. In some cases these solutions can be evaluated to all orders using recurrence relations.
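The logarithmic scaling described above is easy to check numerically in the simplest situation covered by this setting. The following Python sketch (added for illustration; it is not taken from the paper) computes the entanglement entropy of a block of the first $N$ sites for the critical XX chain at half filling, using the standard free-fermion recipe: the entropy is obtained from the eigenvalues of the ground-state fermion correlation matrix restricted to the block. The explicit form of that correlation matrix and the prefactor $1/3$ (corresponding to central charge $c=1$) are standard free-fermion facts assumed here, not statements quoted from the abstract.

import numpy as np

def xx_block_entropy(N):
    # Ground-state correlation matrix of the critical XX chain at half filling:
    # C[m, n] = sin(pi*(m-n)/2) / (pi*(m-n)) for m != n, and 1/2 on the diagonal.
    idx = np.arange(N)
    diff = idx[:, None] - idx[None, :]
    with np.errstate(divide="ignore", invalid="ignore"):
        C = np.sin(np.pi * diff / 2) / (np.pi * diff)
    np.fill_diagonal(C, 0.5)
    lam = np.clip(np.linalg.eigvalsh(C), 1e-12, 1 - 1e-12)   # eigenvalues lie in (0, 1)
    return float(-np.sum(lam * np.log(lam) + (1 - lam) * np.log(1 - lam)))

for N in (10, 100, 1000):
    S = xx_block_entropy(N)
    print(N, S, S - np.log(N) / 3)   # last column levels off, consistent with S ~ (1/3) log N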
CommonCrawl
\begin{document} \title{ METRIC REGULARITY -- A SURVEY} \begin{flushright} \begin{tabular}{r} {\it In science things should be made}\\ {\it as simple as possible.} \\ {\rm Albert Einstein} \\ \\ {\it All the great things are simple.}\\ Winston Churchill \end{tabular} \end{flushright} \begin{abstract} Metric regularity theory lies at the very heart of variational analysis, a relatively new discipline whose appearance was to a large extent determined by the needs of modern optimization theory in which such phenomena as non-differentiability and set-valued mappings naturally appear. The roots of the theory go back to such fundamental results of classical analysis as the implicit function theorem, the Sard theorem and some others. The paper offers a survey of the state-of-the-art of some principal parts of the theory along with a variety of its applications in analysis and optimization. \end{abstract} \vskip 4mm \noindent{\bf \large Contents} \vskip 3mm \noindent{\bf Introduction} \vskip 2mm \noindent{\bf Part 1. Theory} \vskip 2mm \noindent{\bf 1. Classical theory: five great theorems} 1.1 \ Banach-Schauder open mapping theorem 1.2 \ Regular points of smooth maps 1.3 \ Inverse and implicit function theorems 1.4 \ Sard theorem. Transversality. \vskip 2mm \noindent{\bf 2. Metric theory. Definitions and equivalences} 2.1 \ Local regularity 2.2 \ Non-local regularity \vskip 2mm \noindent{\bf 3. Metric theory. Regularity criteria} 3.1 \ General criteria 3.2 \ An application: density theorem 3.3 \ Infinitesimal criteria 3.4 \ Related concepts: metric subregularity, calmness, controllability, linear recession. \vskip 2mm \noindent{\bf 4. Metric theory. Perturbations and stability} 4.1 \ Stability under Lipschitz perturbation 4.2 \ Strong regularity and metric implicit function theorem \vskip 2mm \noindent{\bf 5. Banach space theory} 5.1 \ Techniques of variational analysis in Banach spaces \qquad 5.1.1 \ Homogeneous set-valued mappings \qquad 5.1.2 \ Tangent cones and contingent derivatives \qquad 5.1.3 \ Subdifferentials, normal cones and coderivatives 5.2 \ Separable reduction 5.3 \ Contingent derivatives and primal regularity estimates 5.4 \ Dual regularity estimates \qquad 5.4.1 \ Neighborhood estimates \qquad 5.4.2 \ Perfect regularity and linear perturbations \vskip 2mm \noindent{\bf 6. Finite dimensional theory} 6.1 \ Regularity 6.2 \ Subregularity and error bounds 6.3 \ Transversality \vskip 4mm \noindent{\bf Part 2. Applications} \vskip 2mm \noindent{\bf 7. Special classes of mappings} 7.1 \ Error bounds \qquad 7.1.1 \ Error bounds for convex functions \qquad 7.1.2 \ Some general results on global error bounds 7.2 \ Mappings with convex graphs \qquad 7.2.1 \ Convex processes \qquad 7.2.2 \ Theorem of Robinson-Ursescu \qquad 7.2.3 \ Mappings with convex graphs. Regularity rates 7.3 \ Single-valued Lipschitz maps 7.4 \ Polyhedral and semi-linear sets and mappings 7.5 \ Semialgebraic mappings, stratifications and the Sard theorem \qquad 7.5.1 \ Basic properties \qquad 7.5.2 \ Transversality \vskip 2mm \noindent{\bf 8. 
Some applications to analysis and optimization} 8.1 \ Subdifferential calculus 8.2 \ Necessary conditions in constrained optimization \qquad 8.2.1 \ Noncovering principle \qquad 8.2.2 \ Exact penalty \qquad 8.2.3 \ Optimality alternative \qquad 8.2.4 \ Optimal control of differential inclusions \qquad 8.2.5 \ Constraint qualifications 8.3 \ An abstract relaxed optimal control problem 8.4 \ Genericity in tame optimization 8.5 \ Method of alternating projection 8.6 \ Generalized equations 8.7 \ Variational inequalities over polyhedral sets 8.8 \ Differential inclusions (existence of solutions) \section*{Introduction} Metric regularity has emerged during the last 2-3 decades as one of the central concepts of a young discipline now often called {\it variational analysis}. The roots of this concept go back to a circle of fundamental regularity ideas of classical analysis embodied in such results as the implicit function theorem, the Banach open mapping theorem, the theorems of Lyusternik and Graves, on the one hand, and the Sard theorem and the Thom-Smale transversality theory, on the other. Smoothness is the key property of the objects to which the classical results are applied. Variational analysis, on the other hand, appeals to objects that may lack this property: functions and maps that are non-differentiable at points of interest, set-valued mappings etc. Such phenomena naturally appear in optimization theory and not only there\footnote{Grothendieck mentions the ``ubiquity of stratified structures in practically all domains of geometry'' in his 1984 {\it Esquisse d'un Programme}, see \cite{AG}}. In traditional nonlinear analysis, regularity of a mapping (e.g. from a normed space or a manifold to another) at a certain point means that its derivative at the point is onto (the target space or the tangent space of the target manifold). This property, translated through available analytic or topological means to corresponding local properties of the mapping, plays a crucial role in studying some basic problems of analysis such as existence and behavior of solutions of a nonlinear equation $F(x)=y$ (with $F$ and $y$ viewed as data and $x$ as unknown) under small perturbations of the data. Similar problems appear if, instead of an equation, we consider the inclusion \begin{equation}\label{pr1} y\in F(x) \end{equation} (with $F$ a set-valued mapping this time) which, in essence, is the main object of study in variational analysis. The challenge here is evident: there is no clear way to approximate the mapping by simple objects like linear operators in the classical case. The key step in the answer to the challenge was connected with the understanding of the metric nature of some basic phenomena that appear in the classical theory. This eventually led to the choice of the class of metric spaces as the main playground and subsequently to abandoning approximation as the primary tool of analysis in favor of a direct study of the phenomena as such. The ``metric theory'' offers a rich collection of results that, being fairly general and stated in purely metric language, are nonetheless easily adaptable to Banach and finite dimensional settings (still among the most important in applications) and to various classes of mappings with special structure.
Moreover, however surprising this may sound, the techniques coming from the metric theory sometimes appear more efficient, flexible and easy to use than the available Banach space techniques (associated with subdifferentials and coderivatives, especially in infinite dimensional Banach spaces). We shall not once see that proper use of metric criteria may lead to dramatic simplification of proofs and clarification of the ideas behind them. This occurs at all levels of generality, from results valid in arbitrary metric spaces to specific facts about even fairly simple classes of finite dimensional mappings. It should be added furthermore that the central role played by distance estimates has determined a quantitative character of the theory (contrary to the predominantly qualitative character of the classical theory). Altogether, this opens gates to a number of new applications, such as say metric fixed point theory, differential inclusions, all chapters of optimization theory, numerical methods. This paper has appeared as a result of two short courses I gave in the University of Newcastle and the University of Chile in 2013-2014. The goal was to give a brief account of some major principles of the theory of metric regularity along with the impression of how they work in various areas of analysis and optimization. The three principal themes that will be in the focus of attention are: (a) regularity criteria (containing quantitative estimates for rates of regularity) including formal comparisons of their relative power and precision; (b) stability problems relating to the effect of perturbations of the mapping on its regularity properties, on the one hand, and to solutions of equations, inclusions etc. on the other; (c) role of metric regularity in analysis and optimization. The existing regularity theory of variational analysis may look very technical. Many available proofs take a lot of space and use heavy techniques. But the ideas behind most basic results, especially in the metric theory, are rather simple and in many cases proper application of the ideas leads to noticeable (occasionally even dramatic) simplification and clarification of the proofs. This is a survey paper, so many results are quoted and discussed, often without proofs. As a rule, a proof is given if (a) the result is of a primary importance and the proof is sufficiently simple, (b) the result is new, (c) the access to the original publication containing the result is not very easy and especially (d) the proof is simpler (shorter, or looking more transparent) than available in the literature known to me. And of course there are topics (some important) not touched upon in the paper, especially those that can be found in monographic literature. I mean first of all the books by Dontchev and Rockafellar \cite{DR} and Klatte and Kummer \cite{KK02} in which metric regularity, in particular its finite dimensional chapter, is prominently presented. Among more specialized topics not touched upon in the survey, I would mention nonlinear regularity models, point subdifferential regularity criteria with associated compactness properties of subdifferentials and directional regularity. The survey consists of two parts. The first part called `Theory' contains an account of the basic ideas and principles of the metric regularity theory, first in traditional settings of the classical analysis and then for arbitrary set-valued mappings between various classes of spaces. 
In the second part `Applications' we show how the theory works for some specific classes of maps that typically appear in variational analysis and for a variety of fundamental existence, stability and optimization problems. In preparing this part of the survey the main efforts were focused on finding a productive balance between general principles and specific results and/or methods associated with the problem. This declaration may look like a sort of truism, but the point is that publications in which over-attachment to certain particular techniques of variational analysis (e.g. associated with generalized differentiation) leads to long and poorly digestible proofs of sufficiently simple and otherwise easily provable results are not an exceptional phenomenon. To conclude the introduction I wish to express my thanks to J. Borwein and A. Jofr\'e for inviting me to give the lectures that were the basis for this paper and to J. Borwein especially for his suggestion to write the survey. I also wish to thank D. Drusvyatskiy and A. Lewis for the years of cooperation and many fruitful discussions and to A. Kruger and D. Klatte for many helpful remarks. \vskip 1mm \noindent{\bf Dedication}. 2015 and late 2014 have witnessed remarkable jubilees of six of my good old friends. I dedicate this paper, with gratitude for the past and warm wishes for the future, to \vskip 2mm \begin{tabular}{ll} Prof. Vladimir Lin\qquad\qquad\qquad& Prof. Terry Rockafellar\\ Prof. Louis Nirenberg & Prof. Vladimir Tikhomirov\\Prof. Boris Polyak& Prof. Nikita Vvedenskaya \end{tabular}\ \vskip 5mm \noindent{\bf Notation}. $d(x,Q)$ -- distance from $x$ to $Q$; $d(Q,P)=\inf \{ \|x-u\|:\; x\in Q,\; u\in P\}$ -- distance between $Q$ and $P$; ${\rm ex}(Q,P)=\sup\{d(x,P):\; x\in Q\}$ -- excess of $Q$ over $P$; $h(Q,P)=\max\{{\rm ex}(Q,P),{\rm ex}(P,Q)\}$ -- Hausdorff distance between $Q$ and $P$; $B(x,r)$ -- closed ball of radius $r$ and center at $x$; $\overset{\circ}B (x,r)$ -- open ball of radius $r$ and center at $x$; $F|_Q$ -- the restriction of a mapping $F$ to the set $Q$; $F: X\rightrightarrows Y$ -- set-valued mapping; ${\rm Graph}~ F=\{(x,y): \; y\in F(x)\}$ -- graph of $F$; $I$ -- the identity mapping (subscript, if present, indicates the space, e.g. $I_X$); ${\rm epi}~ f=\{ (x,\alpha):\; \alpha\ge f(x)\}$ -- epigraph of $f$; ${\rm dom}~ f= \{x:\; f(x)<\infty\}$ -- domain of $f$; $i_Q(x)$ -- indicator of $Q$ (function equal to $0$ on $Q$ and $+\infty$ outside); $[f\le \alpha]=\{ x:\; f(x)\le \alpha\}$ etc.; $X\times Y$ -- Cartesian product of spaces; $X^*$ -- adjoint of $X$; $\langle x^*,x\rangle$ -- the value of $x^*$ on $x$ (canonical bilinear form on $X^*\times X$); $I\!\!R^n$ -- the $n$-dimensional Euclidean space; $B$ -- the closed unit ball in a Banach space (sometimes indicated by a subscript, e.g. $B_X$ is the unit ball in $X$); $S_X$ -- the unit sphere in $X$; ${\rm Ker}~ A$ -- kernel of the (linear) operator $A$; $L^{\perp}=\{x^*\in X^*:\; \langle x^*,x\rangle=0,\;\forall\; x\in L\}$ -- annihilator of a subspace $L\subset X$; $K^{\circ}=\{ x^*\in X^*:\; \langle x^*,x\rangle\le 0,\;\forall\; x\in K\}$ -- the polar of a cone $K\subset X$; ${\rm Im}~ A$ -- image of the operator $A$; ${\mathcal S}(X)$ -- collection of closed separable subspaces of $X$; ${\mathcal L}(X,Y)$ -- the space of linear bounded operators $X\to Y$ with the {\it operator norm}: $$ \| A\|=\sup_{\| x\|\le 1}\| Ax\|. 
$$ $L\oplus M$ -- direct sum of subspaces; $T_xM$, $N_xM$ -- tangent and normal space to a manifold $M$ at $x\in M$; $T(Q,x)$ -- contingent cone to a set $Q$ at $x\in Q$; $N(Q,x)$ -- normal cone to $Q$ at $x\in Q$, often with a subscript (e.g. $N_F$ is a Fr\'echet normal cone etc.) \vskip 1mm \noindent We use the standard conventions \ $d(x,\emptyset)=\infty;\; \inf\emptyset=\infty;\; \sup\emptyset=-\infty$ with one exception: when we deal with non-negative quantities we set $\sup \emptyset=0$. \vskip 1cm \centerline{\bf \large Part 1. Theory} \section{Classical theory: five great theorems.} In this section all spaces are Banach. \subsection{Banach-Schauder open mapping theorem} \begin{theorem}[\cite{SB,JS30}]\label{bansha} Let $A:\ X\to Y$ be a linear bounded operator onto $Y$, that is $A(X)=Y$. Then $0\in{\rm int}~ A(B)$. \end{theorem} The theorem means that there is a $K>0$ such that for any $y\in Y$ there is an $x\in X$ such that $A(x)=y$ and $\| x\|\le K\| y\|$ (take as $K$ the reciprocal of the radius of a ball in $Y$ contained in the image of the unit ball in $X$ under $A$). \begin{definition}[Banach constant] {\rm Let $A: X\to Y$ be a bounded linear operator. The quantity $$ C(A) = \sup\{ r\ge 0:\; rB_Y\subset A(B_X)\} = \inf\{\| y\|:\; y\not\in A(B_X)\} $$ will be called the {\it Banach constant} of $A$}. \end{definition} The following simple proposition offers two more expressions for the Banach constant. Given a linear operator $A: X\to Y$, we set $$ \| A^{-1}\|=\sup_{\| y\|\le 1}d(0,A^{-1}(y))=\sup_{\| y\|=1}\inf\{\| x\|:\; Ax=y\}. $$ Of course, if $A$ is a linear homeomorphism, this coincides with the usual norm of the inverse operator. \begin{proposition}[calculation of $C(A)$]\label{calca} For a bounded linear operator $A: X\to Y$ $$ C(A) = \inf_{\| y^*\|=1}\| A^* y^*\| = \| A^{-1}\|^{-1}. $$ \end{proposition} \subsection{Regular points of smooth maps. Theorems of Lyusternik and Graves.} Let $F: X\to Y$ be Fr\'echet differentiable at $\overline x\in X$. It is said that $F$ is {\it regular} at $\overline x$ if its derivative $F'(\overline x)$ is a linear operator {\it onto} $Y$. Let $M\subset X$ be a smooth manifold. The {\it tangent space} $T_xM$ to $M$ at $x\in M$ is the collection of $h\in X$ such that $d(x+th,M)= o(t)$ when $t\to+0$. \begin{theorem}[Lyusternik \cite{LAL}] Suppose that $F$ is continuously differentiable and regular at $\overline x$. Then the tangent space to the level set $M=\{ x: \; F(x)=F(\overline x)\}$ at $\overline x$ coincides with ${\rm Ker}~ F'(\overline x)$. \end{theorem} \begin{theorem}[Graves \cite{LMG50}]\label{graves} Let $F$ be a continuous mapping from a neighborhood of $\overline x\in X$ into $Y$. Suppose that there are a linear bounded operator $A:\ X\to Y$ and positive numbers $\delta>0$, $\gamma>0$, $\varepsilon>0$ such that $C(A)>\delta+\gamma$ and $$ \| F(x')-F(x)-A(x'-x)\|< \delta \| x'-x\|, $$ whenever $x$ and $x'$ belong to the open $\varepsilon$-ball around $\overline x$. Then $$ B(F(\overline x),\gamma t)\subset F(B(\overline x,t)) $$ for all $t\in (0,\varepsilon)$. \end{theorem} Here is a slight modification (quantities explicitly added) of the original proof by Graves. \proof We may harmlessly assume that $F(\overline x)=0$. Take $K>0$ such that $KC(A) > 1>K(\delta+\gamma)$, and let $\| y\|<\gamma t$ for some $t<\varepsilon$. Set $x_0=\overline x$, $y_0=y$ and define recursively $x_n$, $y_n$ as follows: $$ y_{n-1}=A(x_n-x_{n-1}),\; \| x_n-x_{n-1}\|\le K\| y_{n-1}\|;\quad y_n=A(x_n-x_{n-1}) -(F(x_n)-F(x_{n-1})). 
$$ It is an easy matter to verify that $$ \| x_n-x_{n-1}\|\le (K\delta)^{n-1}K\| y\|,\quad \| y_n\|\le (K\delta)^{n}\| y\| $$ and $y_{n-1}-y_n= F(x_n)-F(x_{n-1})$, so that $(x_n)$ converges to some $x$ such that $ F(x)=y$ and $$ \| x-\overline x\|\le \frac{K}{1-K\delta}\| y\|\le \gamma^{-1}\|y\|<t $$ as claimed.\endproof The theorem of Lyusternik was proved in 1934 and the theorem of Graves in 1950. Graves was apparently unaware of Lyusternik's result and Lyusternik, in turn, of the open mapping theorem by Banach-Schauder. Nonetheless the methods they used in their proofs were very similar. For that reason the following statement, which is somewhat weaker than the theorem of Graves and somewhat stronger than the theorem of Lyusternik, is usually called the Lyusternik-Graves theorem. \begin{theorem}[Lyusternik-Graves theorem] Assume that $F: X\to Y$ is continuously differentiable and regular at $\overline x$. Then for any positive $r< C(F'(\overline x))$, there is an $\varepsilon>0$ such that $$ B(F(x),r t)\subset F(B(x,t)), $$ whenever $\| x-\overline x\|<\varepsilon,\; 0\le t<\varepsilon$. \end{theorem} It should be also emphasized that no differentiability assumption is made in the theorem of Graves. In this respect Graves was much ahead of his time. Observe that the mapping $F$ in the theorem of Graves can be viewed as a perturbation of $A$ by a $\delta$-Lipschitz mapping. With this interpretation the theorem of Graves can be also viewed as a direct predecessor of Milyutin's perturbation theorem (Theorem \ref{milt1} in the fourth section), which is one of the central results in the regularity theory of variational analysis. \subsection{Inverse and implicit function theorem} \begin{theorem}[Inverse function theorem]\label{lugr} Suppose that $F$ is continuously differentiable at $\overline x$ and the derivative $F'(\overline x)$ is an invertible operator onto $Y$. Then there is a mapping $G$ into $X$ defined in a neighborhood of $\overline y=F(\overline x)$, strictly differentiable at $\overline y$ and such that $$ G'(\overline y)= \big(F'(\overline x)\big)^{-1}\quad {\rm and}\quad F\circ G= I_Y $$ in the neighborhood. \end{theorem} The shortest among standard proofs of the theorem is based on the contraction mapping principle (see e.g. the second proof of the theorem in \cite{DR}). But an equally short proof follows from the theorem of Lyusternik-Graves. \proof Set $A=F'(\overline x)$. Then $F(x')-F(x)- A(x'-x)= r(x',x)\| x'-x\|$, where $\| r(x',x)\|\to 0$ when $x,x'\to \overline x$. As $A$ is invertible, there is a $K>0$ such that $\| Ah\|\ge K\| h\|$. Hence $\| F(x')-F(x)\|\ge (K-\| r(x,x')\|)\| x'-x\|>0$ if $x,\ x'$ are close to $\overline x$. This means that $F$ is one-to-one in a neighborhood of $\overline x$. But by the Lyusternik-Graves theorem, the image under $F$ of any neighborhood of $\overline x$ contains an open neighborhood of $\overline y$. Hence $G=F^{-1}$ is defined in a neighborhood of $F(\overline x)$. So take $y$ and $y'$ close to $\overline y=F(\overline x)$ and let $x',\ x$ be such that $F(x')=y',\ F(x)=y$. Then, as we have seen, $\| y-y'\|\ge K\| x-x'\|$. We have $$ A^{-1}\big(F(x')-F(x)-A(x'-x)\big)=A^{-1}(y'-y) - \big(G(y')-G(y)\big), $$ so that $$ \begin{array}{lcl} \|G(y')-G(y) - A^{-1}(y'-y)\|&\le&\| A^{-1}\|\,\|F(x')-F(x)- A(x'-x)\|\\ &=& \| A^{-1}\|\,\| r(x',x)\|\,\| x'-x\| \le q(y,y')\| y'-y\|, \end{array} $$ where $q(y,y')=K^{-1}\| A^{-1}\|\,\| r(G(y'),G(y))\|$ obviously goes to zero when $y,y'\to \overline y$. 
\endproof \begin{theorem}[implicit function theorem]\label{implic} Let $X,\ Y,\ Z$ be Banach spaces, and let $F$ be a mapping into $Z$ which is defined in a neighborhood of $(\overline x,\overline y)\in X\times Y$ and strictly differentiable at $(\overline x,\overline y)$. Suppose further that the partial derivative $F_y(\overline x,\overline y)$ is an invertible operator. Then there are neighborhoods $U\subset X$ \ of $\overline x$ and $W\subset Z$ of \ $\overline z=F(\overline x,\overline y)$ and a mapping $S:\; U\times W\to Y$ such that $(x,z)\mapsto (x,S(x,z))$ is a homeomorphism of $U\times W$ onto a neighborhood of $(\overline x,\overline y)$ in $X\times Y$ and $$ F(x,S(x,z))=z,\quad \forall \ x\in U,\; \forall \ z\in W. $$ The mapping $S$ is strictly differentiable at $(\bar x,\bar z)$ with \begin{equation}\label{1.3.1} S_z(\bar x,\bar z)=\big(F_y(\overline x,\overline y)\big)^{-1},\quad S_x(\bar x,\bar z)=-\big(F_y(\overline x,\overline y)\big)^{-1}F_x(\overline x,\overline y). \end{equation} \end{theorem} The simplest proof of the theorem is obtained by application of the inverse mapping theorem to the following map $X\times Y\to X\times Z$ (see e.g. \cite{DR}): $$ \Phi (x,y) = \left(\begin{array}{c}x \\ F(x,y)\end{array}\right). $$ \subsection{Sard theorem. Transversality.} \begin{definition}[critical and regular value]\label{defcrsm} {\rm Let $X$ and $Y$ be Banach spaces, and let $F$ be a mapping into $Y$ defined and continuously differentiable on an open set $U\subset X$. A vector $y\in Y$ is called a {\it critical value} of $F$ if there is an $x\in U$ such that $F(x)=y$ and $x$ is a singular point of $F$. Any point in the range space which is not a critical value is called a {\it regular value}, even if it does not belong to ${\rm Im}~ F$. Thus $y$ is a regular value if either $y\neq F(x)$ for any $x$ of the domain of $F$ or ${\rm Im}~ F'(x)=Y$ for every $x$ such that $F(x)=y$.} \end{definition} \begin{theorem}[Sard \cite{Sard42}]\label{sard} Let $\Omega$ be an open set in $I\!\!R^n$ and $F$ a $C^k$-mapping from $\Omega$ into $I\!\!R^m$. Then the Lebesgue measure of the set of critical values of $F$ is equal to zero, provided $k\ge n-m+1$. \end{theorem} \noindent For a proof of a ``full'' Sard theorem see \cite{AR}; a much shorter proof for $C^{\infty}$ functions can be found in \cite{LN}. \begin{definition}[transversality]\label{deftran} {\rm Let $F:X\to Y$ be a $C^1$-mapping, and let $M\subset Y$ be a $C^1$-submanifold. Let finally $x$ be in the domain of $F$. We say that $F$ is {\it transversal to $M$ at} $x$ if either $y=F(x)\not\in M$ or $y\in M$ and ${\rm Im}~ F'(x)+T_yM=Y$. It is said that $F$ is {\it transversal to} $M$:\ $F\pitchfork M$, if it is transversal to $M$ at every $x$ of the domain of $F$.} \end{definition} We can also speak about transversality of two manifolds $M_1$ and $M_2$ in $X$: $M_1\pitchfork M_2$ at $x\in M_1\cap M_2$ if $T_xM_1+T_xM_2=X$. For our future discussions, it is useful to have in mind that the latter property can be equivalently expressed in dual terms: $N_x M_1\cap N_xM_2=\{ 0\}$, where $N_xM\subset X^*$ is the {\it normal space} to $M$ at $x$, that is the annihilator of $T_xM$. A connection with regularity is immediate from the definition: if $(L,\varphi)$ is a local parametrization for $M$ at $y$ and $y=F(x)$, then transversality of $F$ to $M$ at $x$ is equivalent to regularity at $(x,0)$ of the mapping $\Phi: X\times L\to Y$ given by $\Phi(u,v)= F(u)-\varphi (v)$.
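The following elementary two-dimensional example (added here for illustration; it is not part of the original text) may help to visualize the definition. Let $X=I\!\!R^2$, $M_1=\{(x_1,x_2):\; x_2=x_1^2\}$ and $M_2=\{(x_1,x_2):\; x_2=0\}$. The two curves meet only at the origin, where $T_0M_1=T_0M_2=I\!\!R\times\{0\}$, so $T_0M_1+T_0M_2\neq I\!\!R^2$ and the intersection is not transversal; indeed, an arbitrarily small downward shift of the line destroys the intersection. If instead $M_2=\{(x_1,x_2):\; x_1=0\}$, then $T_0M_1+T_0M_2=I\!\!R^2$, or in dual terms $N_0M_1\cap N_0M_2=I\!\!R(0,1)\cap I\!\!R(1,0)=\{ 0\}$, and the intersection survives all sufficiently small $C^1$ perturbations of either curve.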
The connection of transversality and regularity is actually much deeper. Let $P$ be also a Banach space and let $F: X\times P\to Y$. We can view $F$ as a family of mappings from $X$ into $Y$ parameterized by elements of $P$. Let us denote ``individual'' mappings $x\to F(x,p)$ by $F(\cdot,p)$. Let further $M\subset Y$ be a submanifold, and let $\pi: X\times P\to P$ be the standard Cartesian projection $(x,p)\to p$. \begin{proposition}\label{prethom}Suppose $F$ is transversal to $M$ and $Q=F^{-1}(M)$ is a manifold. Let finally $\pi|_Q$ stands for the restriction of $\pi$ to $Q$. Then $F(\cdot,p)$ is transversal to $M$, provided $p$ is a regular value of $\pi|_Q$. \end{proposition} Combining the proposition with the Sard theorem, we get the following (simple version of) transversality theorem of Thom \begin{theorem}[see e.g. \cite{GP}]\label{thomsm} Let $X$, $Y$ and $P$ be finite dimensional Banach spaces Let $M\subset Y$ be a $C^r$-manifold, and let $F: X\times P\to Y$ be a $C^k$-mapping $(k\le r)$. Assume that $F\pitchfork M$ and $k> \dim X-{\rm codim} M$. Then $F(\cdot,p)\pitchfork M$ for each $p\in P$ outside of a subset of $P$ with $\dim P$-Lebesgue measure zero. \end{theorem} \section{Metric theory. Definitions and equivalences.} Here $X$ and $Y$ are metric space. We use the same notation for the metrics in both hoping this would not lead to any difficulties. \subsection{Local regularity} We start with the simplest and the most popular case of local regularity near a certain point of the graph. So let an $F: X\rightrightarrows Y$ be given as well as a $(\bar x,\bar y)\in{\rm Graph}~ F$. \begin{definition}[local regularity properties]\label{defloc}{\rm We say that $F$ is \vskip 1mm $\bullet$ {\it open} or {\it covering at a linear rate near} $(\bar x,\bar y)$ if there are $r>0$, $\varepsilon >0$ such that $$ B(y,rt)\cap B(\overline y,\varepsilon)\subset F(B(x,t)), \quad \forall\; (x,y)\in{\rm Graph}~ F,\; d(x,\overline x)<\varepsilon, \; t\ge 0. $$ The upper bound ${\rm sur} F(\overline x|\overline y)$ of such $r$ is the {\it modulus} or {\it rate of surjection} of $F$ near $(\bar x,\bar y)$. If no such $r$, $\varepsilon$ exist, we set ${\rm sur} F(\overline x|\overline y)=0$; \vskip 1mm $\bullet$ {\it metrically regular} near $(\bar x,\bar y)\in{\rm Graph}~ F$ if there are $K>0$, $\varepsilon >0$ such that $$ d(x,F^{-1}(y))\le Kd(y,F(x)),\quad {\rm if}\; d(x,\overline x)<\varepsilon,\; d(y,\overline y)<\varepsilon. $$ The lower bound ${\rm reg} F(\overline x|\overline y)$ of such $K$ is the {\it modulus} or {\it rate of metric regularity} of $F$ near $(\bar x,\bar y)$. If no such $K$, $\varepsilon$ exist, we set ${\rm reg} F(\overline x|\overline y)=\infty$. \vskip 1mm $\bullet$ {\it pseudo-Lipschitz} or has the {\it Aubin property} near $(\bar x,\bar y)$ if there are $K>0$ and $\varepsilon>0$ such that $$ d(y,F(x))\le Kd(x,u), \quad {\rm if}\; d(x,\overline x)<\varepsilon,\; d(y,\overline y)<\varepsilon,\; y\in F(u). $$ The lower bound ${\rm lip} F(\overline x|\overline y)$ is the {\it Lipschitz modulus} or {\it rate} of $F$ near $(\bar x,\bar y)$. If no such $K$, $\varepsilon$ exist, we set ${\rm lip} F(\overline x|\overline y)=\infty$. } \end{definition} Note a difference between the covering property and the conclusions of theorems of Lyusternik and Graves: the theorems deal only with the given argument $\overline x$ while in the definition we speak about all $x\in{\rm dom}~ F$ close to $\overline x$. 
This difference that was once a subject of heated discussions is in fact illusory as under the assumptions of the theorems of Lyusternik and Graves the covering property in the sense of the just introduced definitions is automatically satisfied. The key and truly remarkable fact for the theory is that the three parts of the definition actually speak about the same phenomenon. Namely the following holds true unconditionally for any set-valued mapping between two metric spaces. \begin{proposition}[local equivalence]\label{loceq} $F$ is open at a linear rate near $(\bar x,\bar y)\in{\rm Graph}~ F$ if and only if it is metrically regular near $(\bar x,\bar y)$ and if and only if $F^{-1}$ has the Aubin property near $(\overline y,\overline x)$. Moreover, under the convention that $0\cdot\infty=1$, $$ {\rm sur} F(\overline x|\overline y)\cdot{\rm reg} F(\overline x|\overline y)= 1;\quad {\rm reg} F(\overline x|\overline y)={\rm lip} F^{-1}(\overline y|\overline x). $$ \end{proposition} \begin{remark} {\rm In view of the proposition it makes sense to use the word {\it regular} to characterize the three properties. This terminology would also emphasize the ties with the classical regularity concept. We observe further that while the rates of regularity are connected with specific distances in $X$ and $Y$, the very fact that $F$ is regular near certain point is independent of the choice of specific metrics. Thus, although the definitions explicitly use metrics} the regularity is a topological property. \end{remark} The proof of the proposition is fairly simple (we shall get it as a consequence of a more general equivalence theorem later in this section). But the way to it was surprisingly long (see brief bibliographic comments at the end of the section). There are other equivalent formulations of the properties. For instance, the definition of linear openness/ covering can be modified by adding the constraint $0\le t<\varepsilon$ (see \cite{AI11a}); a well known modification of the definition of metric regularity includes the condition that $d(y,F(x))<\varepsilon$. The only difference is that the $\varepsilon$'s in the original and modified definitions may be different. \begin{definition}[graph regularity \cite{LT94}] {\rm $F$ is said to be {\it graph-regular at (or near) $(\bar x,\bar y)\in{\rm Graph}~ F$} if there are $K>0,\ \varepsilon >0$ such that the inequality \begin{equation}\label{2.2.1} d(x,F^{-1}(y))\le Kd((x,y),{\rm Graph}~ F), \end{equation} holds, provided $d(x,\overline x)<\varepsilon,\; d(y,\overline y)<\varepsilon$}. \end{definition} \begin{proposition}[metric regularity vs graph regularity \cite{LT94}]\label{graphreg} Let $F: X\rightrightarrows Y$, and $(\bar x,\bar y)\in{\rm Graph}~ F$. Then $F$ is metrically regular at $(\bar x,\bar y)$ if and only if it is graph-regular at $(\bar x,\bar y)$. \end{proposition} Note that, unlike the equivalence theorem, the last proposition is purely local: the straightforward non-local extension of this result (e.g. along the lines of the subsection below) is wrong. \subsection{Non-local regularity.} As we have already mentioned, most of current researches focus on local regularity.\ (although the first abstract definition of the covering property given in \cite{DMO} was absolutely non-local). To a large extent this is because of the close connection of modern variational analysis studies with optimization theory which is basically interested in local results: optimality conditions, stability of solutions under small perturbations, etc. 
Another less visible reason is that non-local regularity is a more delicate concept: in the non-local case we cannot freely change the regularity domain that is an integral part of the definition. Meanwhile non-local regularity is, a powerful instrument for proving e.g. various existence theorems (see e.g. subsection 8.7). Let $U\subset X$ and $V\subset Y$ (we usually assume $U$ and $V$ open), let $F: X\rightrightarrows Y$, and let $\gamma(\cdot)$ and $\delta(\cdot)$ be extended-real-valued functions on $X$ and $Y$ assuming positive values (possibly infinite) respectively on $U$ and $V$. \begin{definition}[non-local regularity properties \cite{AI11a}]\label{nonlocdef} {\rm We say that $F$ is $\bullet$ \ {\it $\gamma$-open (or $\gamma$-covering) at a linear rate} on $U\times V$ if there is an $r>0$ such that $$ B(F(x),rt)\bigcap V\subset F(B(x,t)), $$ if $x\in U$ and $t<\gamma(x)$. Denote by ${\rm sur}_{\gamma}F(U|V)$ the upper bound of such $r$. If no such $r$ exists, set ${\rm sur}_{\gamma}F(U|V)=0$. We shall call ${\rm sur}_{\gamma}F(U|V)$ the {\it modulus} (or {\it rate}) {\it of $\gamma$-openness} of $F$ on $U\times V$; $\bullet$ \ {\it $\gamma$-metrically regular on $U\times V$} if there is a $K>0$ such that $$ d(x,F^{-1}(y))\le Kd(y,F(x)), $$ provided $x\in U,\; y\in V$ and $Kd(y,F(x))<\gamma(x)$. Denote by ${\rm reg}_{\gamma}F(U|V)$ the lower bound of such $K$. If no such $K$ exists, set ${\rm reg}_{\gamma}F=\infty$. We shall call ${\rm reg}_{\gamma}F(U|V)$ the {\it modulus} (or {\it rate}) {\it of $\gamma$-metric regularity} of $F$ on $U\times V$; $\bullet$ \ {\it $\delta$-pseudo-Lipschitz} on $U\times V$ if there is a $K>0$ such that $$ d(y,F(x))\le Kd(x,u) $$ if $x\in U,\; y\in V,\; K d(x,u)< \delta (y)$ and $y\in F(u)$. Denote by ${\rm lip}_{\delta}F(U|V)$ the lower bound of such $K$. If no such $K$ exists, set ${\rm lip}_{\delta}F =\infty$ . We shall call ${\rm lip}_{\delta}F(U|V)$ the {\it $\delta$-Lipschitz modulus} of $F$ on $U\times V$.} \end{definition} If $U=X$ and $V=Y$, let us agree to write $ {\rm sur}_{\gamma} F,\ {\rm reg}_{\gamma}F,\ {\rm lip}_{\delta} F$ instead of ${\rm sur}_{\gamma}F(X|Y)$, etc. The role of the functions $\gamma$ and $\delta$ is clear from the definitions. They determine how far we shall reach from any given point in verification of the defined properties. It is therefore natural to call them {\it regularity horizon} functions. Such functions are inessential for local regularity (see e.g. Exercise 2.8 below). But for fixed $U$ and $V$ regularity horizon function is an essential element of the definition. Regularity properties corresponding to different $\gamma$ may not be equivalent (see Example 2.2 in \cite{AI14} and also Exercise 2.8 below). \begin{theorem}[equivalence theorem]\label{equiv3} The following three properties are equivalent for any pair of metric spaces $X,\ Y$, any $F: X\rightrightarrows Y$, any $U\subset X$ and $V\subset Y$ and any (extended-real-valued) function $\gamma(x)$ which is positive on $U$: \vskip 1mm (a) \ $F$ is $\gamma$-open at a linear rate on $U\times V$; \vskip 1mm (b) \ $F$ is $\gamma$-metrically regular on $U\times V$; \vskip 1mm (c) \ $F^{-1}$ is $\gamma$-pseudo-Lipschitz on $V\times U$. \vskip 1mm \noindent Moreover (under the convention that $0\cdot\infty=1$) $$ {\rm sur}_{\gamma}F(U|V)\cdot{\rm reg}_{\gamma}F(U|V)=1,\quad {\rm reg}_{\gamma}F(U|V)={\rm lip}_{\gamma}F^{-1}(V|U). $$ \end{theorem} \proof The implication (b) $\Rightarrow$ (c) is trivial. 
Hence ${\rm lip}_{\gamma}F^{-1}(V|U)\le {\rm reg}_{\gamma}F(U|V)$. To prove that (c) $\Rightarrow$ (a), take a $K>{\rm lip}_{\gamma}F^{-1}$ and an $r< K^{-1}$ , let $t<\gamma(x)$, and let $x\in U$, \ $y\in V$, \ $v\in F(x)$ and $y\in B(v,tr)$. Then $d(y,v)<r\gamma(x)$ and by (c) $d(x,F^{-1}(y))\le K d(y,v)< r^{-1}d(y,v)\le t$. It follows that there is a $u$ such that $y\in F(u)$ and $d(x,u)<t$. Hence $y\in F(B(x,t))$. It follows that $r\le{\rm sur}_{\gamma}F$, or equivalently $1\le K{\rm sur}_{\gamma}F$. But $r$ can be chosen arbitrarily close to $K^{-1}$ and and $K$ can be chosen arbitrarily close to ${\rm lip}_{\gamma}F^{-1}$. So we conclude that ${\rm sur}_{\gamma}F\cdot{\rm lip}_{\gamma}F^{-1}\ge 1.$ Let finally (a) hold with some $r>0$, let $x\in U,\ y\in V$, and let $d(y,F(x))<\gamma(x)$. Choose a $v\in F(x)$ such that $d(y,v)<r\gamma(x)$ and set $t=d(y,v)/r$. By (a) there is a $u\in F^{-1}(y)$ such that $d(x,u)\le t$. Thus $d(x,F^{-1}(y))\le t=d(y,v)/r$. But $d(y,v)$ can be chosen arbitrarily close to $d(y,F(x))$ and we get $d(x,F^{-1}(y))\le r^{-1}d(y,F(x))$, that is $r\cdot{\rm reg}_{\gamma}F\le 1$. On the other hand $r$ can be chosen arbitrarily close to ${\rm sur}_{\gamma}F$ and we can conclude that ${\rm sur}_{\gamma}F\cdot{\rm reg}_{\gamma}F\le 1$ so that $$ 1\ge {\rm sur}_{\gamma}F(U|V)\cdot{\rm reg}_{\gamma}F(U|V)\ge {\rm sur}_{\gamma}F(U|V)\cdot{\rm lip}_{\gamma}F(V|U)\ge 1 $$ which completes the proof of the theorem. \endproof The most important example of the horizon function is \ $m(x)=d(x,X\backslash U)$. The meaning is that we need not look at points beyond $U$. We shall call $F$ {\it Milyutin regular} on $U\times V$ if it is $m$-regular. (This is actually the type of regularity implicit in the definition given in \cite{DMO}.) In what follows we shall deal only with Milyutin regularity when speaking about non-local matters. \begin{exercise} {\rm Prove that} $F$ is regular near $(\bar x,\bar y)\in{\rm Graph}~ F$ if and only if it is Milyutin regular on $\overset{\circ}B(\overline x,\varepsilon)\times \overset{\circ}B (\overline y,\varepsilon)$ for all sufficiently small $\varepsilon$. \end{exercise} We conclude the section with a useful result (a slight modification of the corresponding result in \cite{AI00}) showing that, as far as metric regularity is concerned, any set-valued mapping can be equivalently and in a canonical way replaces by a single-valued mapping continuous on its domain. \begin{proposition}[single-valued reduction]\label{sinval} Let $X\times Y$ be endowed with the $\xi$-metric. Let $F$ be Milyutin regular on $U\times V$ with ${\rm sur}_mF(U|V)\ge r>0$. Consider the mapping ${\cal P}_F: {\rm Graph}~ F\to Y $ which is the restriction to ${\rm Graph}~ F$ of the Cartesian projection $(x,y)\to y$. Then ${\cal P}_F$ is Milyutin regular on $(U\times Y)\times V$ and ${\rm sur} _m {\cal P}_F(U\times Y|V) = {\rm sur}_m F(U|V)$ if $X\times Y$ is considered e.g. with the $\xi$-metric. \end{proposition} A few bibliographic comments. To begin with, it is worth mentioning that in the classical theory no interest to metric estimates can be traced. The covering property close to the covering part of Milyutin regularity was introduced in \cite{DMO} and attributed to Milyutin. An estimate of metric regularity type first time appeared in Lyusternik's paper \cite{LAL} but for $x$ restricted to the kernel of the derivative. In Ioffe-Tikhomirov \cite{IT} metric regularity was proved under the assumptions of the Graves theorem. 
Robinson was probably the first to consider set-valued mappings. In \cite{SMR76} he proved metric regularity of the mapping $F(x)= f(x)+K$ (even of the restriction of this mapping to a convex closed subset of $X$), under the assumptions that $f:X\to Y$ is continuously differentiable and $K\subset Y$ is a closed convex cone, under certain qualification condition extending Lyusternik's ${\rm Im}~ F'(x) = Y$. The definition of $\gamma$-regularity was given in \cite{AI11a}. Equivalence of covering and metric regularity was explicitly mentioned (without proof) in the paper of Dmitruk-Milyutin-Osmolovski \cite{DMO} that marked the beginning of systematic study of the regularity phenomena, in particular in metric spaces, and Ioffe in \cite{AI81} stated a certain equivalence result (Proposition 11.12 -- see \cite{AI98a} for its proof) which, as was much later understood, contains even more precise information about the connection of the covering and metric regularity properties. And the pseudo-Lipschitz property was introduced by Aubin in \cite{JPA84}. This was the sequence of events prior to the proof of the equivalence of the three properties by Borwein-Zhuang \cite{BZ88} and Penot \cite{JPP89}. It has to be mentioned that in both papers more general "nonlinear" properties were considered. In this connection we also mention the paper by Frankowska \cite{HF90} with a short proof of nonlinear openness and some pseudo-H\"older property. \section{Metric theory. Regularity criteria.} This section is central. Here we prove necessary and sufficient conditions for regularity. The key results is Theorems \ref{gencrit}, \ref{secmil} and \ref{secmilmod} containing general regularity criteria. The criteria (especially the first of them) will serve us as a basis for obtaining various qualitative and quantitative characterizations of regularity in this and subsequent sections. The criteria are very simple to prove and, at the same time, provide us with an instrument of analysis which is both powerful and easy to use. We shall see this already in this section and many times in what follows. In the second subsection we consider infinitesimal criteria for local regularity based on the concept of {\it slope}, the central in the local theory. Given a set-valued mapping $F: X\rightrightarrows Y$, we associate with it the following functions that will be systematically used in connection with the criteria and their applications: $$ \varphi_y(x,v)=\left\{\begin{array}{lc} d(y,v),&{\rm if}\; v\in F(x);\\ +\infty,& {\rm otherwise}\end{array}\right.;\quad \psi_y(x) = d(y,F(x)); \quad \overline{\psi}_y(x) = \liminf_{u\to x}\psi (u). $$ Note that $\varphi_y$ is Lipschitz continuous on ${\rm Graph}~ F$, hence it is lower semicontinuous whenever ${\rm Graph}~ F$ is a closed set. \subsection{General criteria.} Given a $\xi>0$, we define the $\xi$-metric on $X\times Y$ by $$ d_{\xi}((x,y),(x',y'))=\max\{d(x,x'),\xi d(y,y')\}. $$ \begin{theorem}[criterion for Milyutin regularity]\label{gencrit} Let $U\subset X$ and $V\subset Y$ be open sets, and let $F: X\rightrightarrows Y$ be a set-valued mapping whose graph is complete in the product metric. Let further $r>0$ and there be a $\xi>0$ such that for any $x\in U$, $y\in V$, $v\in F(x)$ with $0<d(y,v)<r m(x)$, there is a pair $(u,w)\in{\rm Graph}~ F$ different from $(x,v)$ and such that \begin{equation}\label{3.1} d(y,w)\le d(y,v)-rd_{\xi}((x,v),(u,w)). \end{equation} Then $F$ is Milyutin regular on $U\times V$ with ${\rm sur}_mF(U|V)\ge r$. 
Conversely, if $F$ is Milyutin regular on $U\times V$, then for any positive $r<{\rm sur}_{\gamma}F(U|V)$, any $\xi\in(0,r^{-1})$, any $x\in U$, $v\in F(x)$ and $y\in V$ satisfying $0<d(y,v)<r\gamma(x)$, there is a pair $(u,w)\in{\rm Graph}~ F$ different from $(x,v)$ such that (\ref{3.1}) holds. \end{theorem} The theorem offers a very simple geometric interpretation of the regularity phenomenon: it means that $F$ is regular if for any $(x,v)\in{\rm Graph}~ F$ and any $y\neq v$ there is a point in the graph whose $Y$-component is closer to $y$ (than $v$) and the distance from the new point to the original point $(x,v)$ is proportional to the gain in the distance to $y$. \proof We have to verify that, given $(\overline x,\overline v)\in {\rm Graph}~ F$ with $\overline x\in U$, $y\in V$ and $0<d(y,\overline v)\le rt,\; t<m(\overline x)$, there is a $u\in B(x,t)$ such that $y\in F(u)$. We have $\varphi_y(\overline x,\overline v)\le rt$. By Ekeland's variational principle (see e.g. \cite{BZ}) there is a pair $(\hat x,\hat v)\in{\rm Graph}~ F$ such that $d_{\xi}((\hat x,\hat v),(\overline x,\overline v))\le t$ and \begin{equation}\label{3.2} \varphi_y(x,v)+ rd_{\xi}((x,v),(\hat x,\hat v))> \varphi_y(\hat x,\hat v) \end{equation} if $(x,v)\neq (\hat x,\hat v)$. We claim that $\varphi_y(\hat x,\hat v)=0$, that is $y=\hat v\in F(\hat x)$. Indeed, $\hat x\in U$, so by the assumption if $y\neq \hat v$, there is a pair $(u,w)\neq (\hat x,\hat v)$ and such that (\ref{3.1}) holds with $(\hat x,\hat v)$ as $(x,v)$ which however contradicts (\ref{3.2}). This proves the first statement. Assume now that $F$ is Milyutin regular on $U\times V$ with the surjection modulus not smaller than $r$. Take a positive $\xi<r^{-1}$ and $x\in U$, $y\in V$, $v\in F(x)$ with $d(y,v)<r\gamma(x)$. Take a small $\varepsilon\in (0,r)$ and choose a $t\in (0,m(x))$ such that $(r-\varepsilon)t\le d(y,v)< rt$. By regularity there is a $u$ such that $d(u,x)< t$ and $y\in F(u)$. Note that $t>\xi d(y,v)$ by the choice of $\xi$. So setting $w=y$ we have $t>\xi d(v,w)$ and $$ d(y,w)=0\le d(y,v)-(r-\varepsilon)t\le d(y,v)- (r-\varepsilon)d_{\xi}((x,v),(u,w)). $$ Since $\varepsilon$ can be chosen arbitrarily small, the result follows. \endproof \begin{theorem}[second criterion for Milyutin regularity]\label{secmil} Let $X$ be a complete metric space, $U\subset X$ and $V\subset Y$ open sets and $F: X\rightrightarrows Y$ a set-valued mapping with closed graph. Then $F$ is Milyutin regular on $U\times V$ with ${\rm sur}_mF(U|V)\ge r$ if and only if for any $x\in U$ and any $y\in V$ with $0<\overline{\psi}_y(x)<rm(x)$ there is a $u\neq x$ such that \begin{equation}\label{3.3} \overline{\psi}_y(u)\le \overline{\psi}_y(x)-r d(x,u). \end{equation} \end{theorem} \proof The proof of sufficiency is similar to the proof of the first part of the previous theorem. To prove that (\ref{3.3}) is necessary for Milyutin regularity take $x\in U,\; y\in V$ such that $0<d(y,F(x))< rm(x)$. Take $\rho<r$ such that still $d(y,F(x))< \rho m(x)$, and let $\rho<\rho'<r$. Let $x_n\to x$ be such that $d(y,F(x_n))\to \overline{\psi}_y(x)$. We may assume that $d(y,F(x_n))< r m(x)$ for all $n$. Choose positive $\delta_n\to 0$ such that $d(y,F(x_n))\le (1+\delta_n) \overline{\psi}_y(x)$, and let $t_n$ be defined by $\rho' t_n=(1+\delta_n)\overline{\psi}_y(x)$. Then $y\in \overset{\circ}B (F(x_n),\rho't_n)$, $t_n< m(x_n)$ (at least for large $n$) and due to the regularity assumption on $F$ for any $n$ we can find a $u_n$ such that $d(u_n,x_n)<t_n$ and $y\in F(u_n)$. 
Note that the $u_n$ are bounded away from $x$, for otherwise (as ${\rm Graph}~ F$ is closed) we would inevitably conclude that $y\in F(x)$, which cannot happen as $\overline{\psi}_y(x)>0$. This means that $\lambda_n=d(u_n,x_n)/d(u_n,x)$ converges to one. Thus
$$
\begin{array}{lcl}
\overline{\psi}_y(u_n)=0&=& \overline{\psi}_y(x)-\overline{\psi}_y(x)=\overline{\psi}_y(x)-\dfrac{\rho't_n}{1+\delta_n}\\
&\le& \overline{\psi}_y(x)-\dfrac{\rho'}{1+\delta_n}d(u_n,x_n)\\
&=& \overline{\psi}_y(x)-\dfrac{\lambda_n\rho'}{1+\delta_n}d(u_n,x) \le \overline{\psi}_y(x) -\rho d(u_n,x),
\end{array}
$$
the last inequality being eventually true as $\lambda_n\rho'>\rho(1+\delta_n)$ for large $n$. \endproof
The theorem is especially convenient when $\psi_y$ is lower semicontinuous for every $y\in V$. Otherwise, the need for preliminary calculation of $\overline{\psi}_y$, the lower closure of $\psi_y$, may cause difficulties. It is possible however to modify the condition of the theorem and get a statement that requires verification of a (\ref{3.3})-like inequality for $\psi$ rather than $\overline{\psi}$, although at the expense of some additional uniformity assumption.
\begin{theorem}[modified second criterion for Milyutin regularity]\label{secmilmod} Let $X$, $Y$, $F$, $U$ and $V$ be as in Theorem \ref{secmil}. A necessary and sufficient condition for $F$ to be Milyutin regular on $U\times V$ with ${\rm sur}_m F(U|V)\ge r$ is that there is a $\lambda\in (0,1)$ such that for any $x\in U$ and $y\in V$ with $0<\psi_y(x)< rm(x)$ there is a $u\neq x$ such that
\begin{equation}\label{3.3m}
\psi_y(u)\le \psi_y(x)-r d(x,u),\qquad \psi_y(u)\le\lambda \psi_y(x).
\end{equation}
\end{theorem}
\proof The key for understanding the theorem is the following implication
\begin{equation}\label{impcom}
\overline{\psi}_y(x)=0\ \Rightarrow\ y\in F(x)
\end{equation}
valid, of course, under the conditions of the theorem, for $x\in U,\ y\in V$. Indeed, $\overline{\psi}_y(x)=0$ means that there is a sequence $(x_n)$ converging to $x$ such that $\psi_y(x_n)\to 0$. This in turn implies the existence of $v_n\in F(x_n)$ converging to $y$. As the graph of $F$ is closed, it follows that $(x,y)\in {\rm Graph}~ F$ as claimed.
Now we can verify that under the assumptions of the theorem, the condition of Theorem \ref{secmil} holds. So let $x\in U,\ y\in V$ and $0<\alpha=\overline{\psi}_y(x)<rm(x)$. Take $x_n\to x$ such that $\psi_y(x_n)=\alpha_n\to \alpha$ and for each $n$ a $u_n$ such that $\psi_y(u_n)\le\lambda \alpha_n$ and $\psi_y(u_n)\le\psi_y(x_n)-rd(x_n,u_n)$. An easy calculation shows that
$$
\psi_y(u_n)\le\overline{\psi}_y(x)- rd(x,u_n) + \varepsilon_n,
$$
where $\varepsilon_n\to 0$. As the $d(x,u_n)$ are bounded away from zero by a positive constant, we have $\varepsilon_n=\delta_nd(x,u_n)$, where $\delta_n\to 0$. Combining this with the above inequality, we conclude that for any $r'<r$ we have $u_n\neq x$ and the inequality
$$
\overline{\psi}_y(u_n)\le \overline{\psi}_y(x)- r'd(x,u_n)
$$
holds for sufficiently large $n$. This allows us to apply Theorem \ref{secmil} and conclude (by virtue of (\ref{impcom})) that ${\rm sur}_mF(U|V)\ge r'$, and as $r'<r$ was arbitrary, that ${\rm sur}_mF(U|V)\ge r$. \endproof
Note that the proof of necessity in the last two theorems does not differ from the proof of Theorem \ref{gencrit}. Corresponding criteria for local regularity are immediate.
\begin{theorem}[criterion for local regularity]\label{locrit} Let $F: X\rightrightarrows Y$ be a set-valued mapping with closed graph, and let $(\bar x,\bar y)\in{\rm Graph}~ F$. Then $F$ is regular near $(\bar x,\bar y)$ if and only if there are $\varepsilon> 0$, $\xi>0$ and $r>0$ such that for any $x, \ v$ and $y$ satisfying $d(x,\overline x)<\varepsilon,\; d(y,\overline y)<\varepsilon,\; v\in F(x)$ and $0<d(y,v)<\varepsilon$ either of the following two properties is valid:
(a) ${\rm Graph}~ F$ is locally complete and there is a pair $(u,w)\in {\rm Graph}~ F$, $(u,w)\neq (x,v)$ such that (\ref{3.1}) holds.
(b) $X$ is a complete metric space, the graph of $F$ is closed and either (\ref{3.3}) or (\ref{3.3m}) holds true.
\noindent Moreover, in either case ${\rm sur} F(\overline x|\overline y)\ge r$.
\end{theorem}
Theorem \ref{gencrit} is a particular case of the criterion for $\gamma$-regularity proved in \cite{AI11a}. Theorem \ref{locrit} is a modification of the result established in \cite{AI00}. Theorem \ref{secmil} is new but it was largely stimulated by a recent result of Ngai-Tron-Thera \cite{NTT12} (see Theorem \ref{slsc} later in this section) and by a much earlier observation by Cominetti \cite{RC90} that $\overline{\psi}_y(x)=0$ implies that $y\in F(x)$. Surprisingly, it has been recently discovered that sufficiency in the statement of part (a) of the local criterion (Theorem \ref{locrit}) is present as a remark in a much earlier paper by Fabian and Preiss \cite{FP87}.
The completeness assumption in the first theorem differs from the corresponding assumption of the other two theorems. So the natural question is whether and how they are connected. It is an easy matter to see, in view of Proposition \ref{sinval}, that Theorem \ref{gencrit} follows from Theorem \ref{secmil}. On the other hand, Theorem \ref{gencrit} is easier to use as it does not need a priori calculation of any limit or verification of the existence of $\lambda$ as in the third theorem. However, if the functions $d(y,F(\cdot))$ are lower semicontinuous, the second criterion may be more convenient. It should also be observed that the theorems can be equivalent in some cases (as follows from Proposition 1.5 in \cite{AI00}).
\subsection{An application: density theorem.}
Here is the first example demonstrating how handy and powerful the criteria are.
\begin{theorem}[density theorem \cite{DMO,AI11a}]\label{dens} Let $U\subset X$ and $V\subset Y$ be open sets, let $F: X\rightrightarrows Y$ be a set-valued mapping with complete graph. We assume that whenever $x\in U$, $v\in F(x)$ and $t< m(x)$, the set $F(B(x,t))$ is an $\ell t$-net in $B(v,rt)\bigcap V$, where $0\le \ell<r$. Then $F$ is Milyutin regular on $U\times V$ and ${\rm sur}_{m}F(U|V)\ge r-\ell$. In particular, if $F(B(x,t))$ is dense in $B(F(x),rt)\bigcap V$ for $x\in U$ and $t< m(x)$, then ${\rm sur}_{m}F(U|V)\ge r$.
\end{theorem}
\proof Take $x\in U$ and suppose $y\in V$ is such that $d(y,F(x))< rm(x)$. Take a $v\in F(x)$ such that $d(y,v)<rm(x)$ and set $t=d(y,v)/r$. Then $t<m(x)$ and by the assumption we can choose $(u,w)\in{\rm Graph}~ F$ such that $d(x,u)\le t$ and $d(y,w)\le \ell t= (\ell/r)d(y,v)$. Then
$$
d(v,w)\le d(y,v)+d(y,w)\le \big(1+\frac{\ell}{r}\big)d(y,v)\le 2d(y,v).
$$
Take a $\xi>0$ such that $\xi r \le 1/2$. Then $\xi d(v,w)\le 2\xi r t\le t$ and therefore
$$
d(y,w)\le \ell t=rt-(r-\ell)t = d(y,v)-(r-\ell)t \le d(y,v)-(r-\ell)d_{\xi}((x,v),(u,w)).
$$
Apply Theorem \ref{gencrit}.\endproof
\begin{exercise}\label{denscl} {\rm Prove the theorem under the assumptions of Theorem \ref{secmil} rather than Theorem \ref{gencrit}. }
\end{exercise}
\begin{exercise} {\rm Prove the Banach-Schauder open mapping theorem using the density theorem (and the Baire category theorem).}
\end{exercise}
The specification of Theorem \ref{dens} for local regularity at $(\bar x,\bar y)$ is
\begin{corollary}[density theorem - local version]\label{lodense} Suppose there are $r>0$, $\ell\in[0,r)$ and $\varepsilon>0$ such that $F(B(x,t))$ is an $\ell t$-net in $B(v,rt)$ whenever $d(x,\overline x)<\varepsilon,\; d(v,\overline y)<\varepsilon,\; v\in F(x)$ and $t<\varepsilon$. Then ${\rm sur} F(\overline x|\overline y)\ge r-\ell$. Thus if $B(v,rt)\subset {\rm cl} F(B(x,t))$ for all $x,\ v$ and $t$ satisfying the conditions specified above, then $B(v,rt)\subset F(B(x,t))$ for the same set of variables.
\end{corollary}
The density phenomenon was extensively discussed, especially at the early stage of developments. Results in the spirit of Corollary \ref{lodense} were first considered by Ptak \cite{VP74}, Tziskaridze \cite{KST75} and Dolecki \cite{SD78,SD78a} in the mid-1970s. The very idea (and to a large extent the techniques used) could be traced back to Banach's proof of the closed graph/open mapping theorem. Some of the subsequent studies (e.g. \cite{BZ88,CU05}) were primarily concentrated on results of this type. We refer to \cite{ACP98} for detailed discussions and many references. Dmitruk-Milyutin-Osmolovskii in \cite{DMO} made a substantial step forward when they replaced (in the global context) the density requirement by the assumption that $F(B(x,t))$ is an $\ell t$-net in $B(F(x),rt)$. This opened the way to proving the Milyutin perturbation theorem (see the next section). A similar advance in the framework of the infinitesimal approach (for mappings between Banach spaces) was made by Aubin \cite{JPA81}.
\subsection{Infinitesimal criteria.}
The main tool of the infinitesimal regularity theory in metric spaces is provided by the concept of (strong) slope -- which is just the maximal speed of descent of the function from a given point -- introduced in 1980 by DeGiorgi-Marino-Tosques \cite{DMT} and since then widely used in various chapters of metric analysis.
\begin{definition}[slope]\label{dslope} {\rm Let $f$ be an extended-real-valued function on $X$ which is finite at $x$. The quantity
$$
|\nabla f|(x) =\limsup_{u\to x\atop{u\neq x}}\frac{(f(x)-f(u))^+}{d(x,u)}
$$
is called the {\it (strong) slope} of $f$ at $x$. We also agree to set $|\nabla f|(x)=\infty$ if $f(x)=\infty$. The function is called {\it calm} at $x$ if $|\nabla f|(x)<\infty$.}
\end{definition}
We shall consider only local regularity in this subsection (although it is possible to give slope-based characterizations of Milyutin regularity as well). It is easy to observe that $|\nabla f|(x)>r$ means that arbitrarily close to $x$ there are $u\neq x$ such that $f(x)> f(u) + rd(x,u)$. This allows us to reformulate the sufficient part of the regularity criteria of Theorem \ref{locrit} in infinitesimal terms. To this end, set as before
$$
\varphi_y(x,v)=d(y,v)+i_{{\rm Graph}~ F}(x,v),\quad\psi_y(x) = d(y,F(x)),\quad \overline{\psi}_y(x)= \liminf_{u\to x}\psi_y(u),
$$
and let $\nabla_{\xi}$ stand for the slope of functions on $X\times Y$ with respect to the $d_{\xi}$-metric: $d_{\xi}((x,v),(x',v'))=\max\{d(x,x'),\xi d(v,v')\}$.
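Before going further, it may be worth illustrating the notion of slope by two elementary computations (they are not used in the sequel and serve only for orientation). For $f(x)=|x|$ on $X=I\!\!R$ we have
$$
|\nabla f|(0)=\limsup_{u\to 0\atop{u\neq 0}}\frac{(|0|-|u|)^+}{|u|}=0,\qquad |\nabla f|(x)=1\ \ {\rm for}\ x\neq 0,
$$
so the slope vanishes at a point of minimum and may be discontinuous in $x$. On the other hand, if $f$ is Fr\'echet differentiable at $x$, then $|\nabla f|(x)=\| f'(x)\|$.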
Things are more complicated with the necessity part: to prove it, an additional assumption on the target space is needed. Namely, let us say that a metric space $X$ is {\it locally coherent} if for any $x$
$$
\lim_{u,w\underset{u\neq w}\to x}|\nabla d(u,\cdot)|(w)=1.
$$
It can be shown that a convex set and a smooth manifold in a Banach space are locally coherent in the induced metric \cite{AI07a} and that any length metric space (a space whose metric is defined by minimal lengths of curves connecting points) is locally coherent (as follows from \cite{AC08}).
\begin{theorem}[local regularity criterion 1 \cite{AI07a}]\label{critgen} Let $X$ and $Y$ be metric spaces, let $F:\ X\rightrightarrows Y$ be a set-valued mapping, and let $(\bar x,\bar y)\in {\rm Graph}~ F$. We assume that ${\rm Graph}~ F$ is locally complete at $(\bar x,\bar y)$. Suppose further that there are $\varepsilon> 0$ and $r>0$ such that for some $\xi>0$
\begin{equation}\label{3.21}
|\nabla_{\xi}\varphi_y|(x,v)>r
\end{equation}
if
\begin{equation}\label{3.22}
v\in F(x),\quad d(x,\overline x)<\varepsilon,\quad d(y,\overline y)<\varepsilon,\quad d(v,\overline y)<\varepsilon,\quad v\neq y.
\end{equation}
Then $F$ is regular near $(\bar x,\bar y)$ with ${\rm sur} F(\overline x|\overline y)\ge r.$
Conversely, let $Y$ be locally coherent at $\overline y$. Assume that ${\rm sur} F(\overline x|\overline y)> r>0$. Take a $\xi<r^{-1}$. Then for any $\delta>0$ there is an $\varepsilon>0$ such that $|\nabla_{\xi} \varphi_y|(x,v)\ge(1-\delta)r$ whenever $(x,y,v)$ satisfy (\ref{3.22}). Thus, in this case
\begin{equation}\label{3.23}
{\rm sur} F(\overline x|\overline y)=\liminf_{{(x,v)\ \underset{{\rm Graph}F}\to\ (\overline x,\overline y)}\atop{y\to\overline y,\ y\neq v}}|\nabla_{\xi} \varphi_y|(x,v).
\end{equation}
\end{theorem}
For mappings into metrically convex spaces (for any two points there is a shortest path connecting the points) the final statement of Theorem \ref{critgen} can be slightly improved.
\begin{corollary}\label{critgp} Suppose under the conditions of Theorem \ref{critgen} that $Y$ is metrically convex. Then for any neighborhood $V$ of $\overline y$
\begin{equation}\label{3.26}
{\rm sur} F(\overline x|\overline y)= \liminf_{(x,v)\underset{\rm Graph F}\to (\overline x,\overline y)}~\inf_{y\in V\backslash\{v\}}|\nabla_{\xi} \varphi_y|(x,v)
\end{equation}
\end{corollary}
\begin{theorem}[local regularity criterion 2]\label{slsc} Suppose that $X$ is complete and the graph of $F$ is closed. Assume further that there are neighborhoods $U\subset X$ of $\overline x$ and $V\subset Y$ of $\overline y$, $r>0$ and $\varepsilon>0$ such that $|\nabla\overline{\psi}_y|(x)>r$ for all $(x, y)\in U\times V$ with $\varepsilon>\overline{\psi}_y(x)>0$. Then ${\rm sur} F(\overline x|\overline y)\ge r$.
Conversely, if in addition $Y$ is a length space and ${\rm sur} F(\overline x|\overline y)>r>0$, then there is a neighborhood of $(\bar x,\bar y)$ and an $\varepsilon>0$ such that $|\nabla\overline{\psi}_y|(x)\ge r$ for all $(x,y)$ in the neighborhood such that $y\not\in F(x)$ and $0<\overline{\psi}_y(x)<\varepsilon r$. Thus in this case
$$
{\rm sur} F(\overline x|\overline y)=\liminf_{{(x,y)\to (\bar x,\bar y)}\atop{0\neq d(y,F(x))\to 0}}|\nabla\overline{\psi}_y|(x).
$$
In particular, if $\psi_y=d(y,F(\cdot))$ is lower semicontinuous at every $x$ in a neighborhood of $\overline x$ and for every $y\not\in F(x)$ close to $\overline y$, then
$$
{\rm sur} F(\overline x|\overline y)=\liminf_{{(x,y)\to (\bar x,\bar y)}\atop{0\neq d(y,F(x))\to 0 }}|\nabla\psi_y|(x).
$$
\end{theorem}
The starting point for developing the slope-based regularity theory was the paper by Az\'e-Corvellec-Lucchetti \cite{ACL} (its first version was circulated in 1998) who obtained a global error bound in terms of ``variational pairs'' that include the slope on a metric space as a particular case. Theorem \ref{critgen}, and specifically the fact that the slope estimate is precise, was proved in \cite{AI00} under a somewhat stronger condition (equivalent to $Y$ being a length space). We refer to \cite{DA06} for a systematic exposition of the slope-based approach to local regularity. Theorem \ref{slsc} is a slightly modified version of the mentioned result of Ngai-Tron-Thera \cite{NTT12} (proved originally for $Y$ being a Banach space).
To explain how the additional assumption on $Y$ is used to get necessity e.g. in Theorem \ref{critgen}, let us consider, following the original argument in \cite{AI00}, $(x,y,v)$ sufficiently close to $\overline x$ and $\overline y$, respectively, and such that $y\neq v\in F(x)$. For any $n$ take $\delta_n= o(n^{-1})$ and a $v_n$ such that $d(v_n,v)\le (n^{-1}+\delta_n)d(y,v)$ and $d(v_n,y)\le (1-n^{-1}+\delta_n)d(y,v)$. If $Y$ is a length space, such $v_n$ can be found. As $F$ covers near $(\bar x,\bar y)$ with modulus greater than $r$, there is a $u_n$ such that $v_n\in F(u_n)$ and $d(u_n,x)\le r^{-1}d(v_n,v)\to 0$ when $n\to\infty$. We have $|d(y,v) - (d(y,v_n) + d(v,v_n))|= o(d(v_n,v))$. Therefore (as $r\xi<1$)
$$
|\nabla_{\xi}\varphi_y|(x,v) \ge \lim_{n\to\infty} \frac{\varphi_y(x,v) - \varphi_y(u_n,v_n)}{\max\{d(u_n,x),\xi d(v_n,v)\}} \ge \lim_{n\to\infty} \frac{d(v_n,v)}{r^{-1}d(v_n,v)}= r.
$$
A similar argument, modified since the definition of $\overline{\psi}_y$ includes a limit operation, can also be used for the proof of necessity in Theorem \ref{slsc}.
It should be observed that the class of locally coherent spaces is strictly bigger than the class of length spaces. For instance, a smooth manifold in a Banach space with the induced metric is a locally coherent space but not a length space (unless it is a linear manifold).
\subsection{Related concepts: metric subregularity, calmness, controllability, linear recession}
In the definitions of the local versions of the three main regularity properties we scan entire neighborhoods of the reference point of the graph of the mapping. Fixing one or both components of the point leads to new weaker concepts that differ from regularity in many respects. Subregularity and calmness have attracted much attention in recent years. We refer to \cite{DR} for a detailed study of the concepts, mainly for mappings between finite dimensional spaces, and begin with parallel concepts relating to linear openness which are rather new in the context of variational analysis. We skip the (really elementary) proofs of almost all results in this subsection.
\begin{definition}[controllability]\label{contr} {\rm A set-valued mapping $F:\ X\rightrightarrows Y$ is said to be {\it (locally) controllable at} $(\bar x,\bar y)$ if there are $\varepsilon>0,\ r >0$ such that
\begin{equation}\label{3.1d}
B(\overline y,r t)\subset F(B(\overline x,t)),\quad {\rm if}\; 0\le t< \varepsilon.
\end{equation}
The upper bound of such $r$ is the {\it rate} or {\it modulus of controllability} of $F$ {\it at} $(\bar x,\bar y)$. We shall denote it ${\rm contr} F(\overline x|\overline y)$ (and ${\rm contr} F(\overline x)$ if $F$ is single-valued).}
\end{definition}
\begin{proposition}[Regularity vs. controllability]\label{regvscont} Let $X$ and $Y$ be metric spaces, let $F: X\rightrightarrows Y$ have locally complete graph, and let $(\bar x,\bar y)\in{\rm Graph}~ F$. Then
\begin{equation}\label{3.d}
{\rm sur} F(\overline x|\overline y)=\lim_{\varepsilon\to 0}\inf\{{\rm contr}F(x|y):\; (x,y)\in{\rm Graph}~ F,\; \max\{d(x,\overline x),d(y,\overline y)\}<\varepsilon\}.
\end{equation}
\end{proposition}
\begin{definition}[linear recession]\label{recess} {\rm Let us say that $F$ {\it recedes from $\overline y$ at $(\bar x,\bar y)$ at a linear rate} if there are $\varepsilon>0$ and $K\ge 0$ such that
\begin{equation}\label{3.2d}
d(\overline y, F(x))\le K d(x,\overline x),\quad {\rm if}\; d(x,\overline x)<\varepsilon.
\end{equation}
We shall call the lower bound of such $K$ the {\it speed of recession} of $F$ from $\overline y$ at $(\bar x,\bar y)$ and denote it ${\rm ress} F(\overline x|\overline y)$.}
\end{definition}
The other possible way to ``pointify'' the Aubin property is to fix $\overline x$ and allow $(x,y)$ to change within ${\rm Graph}~ F$. Then, instead of (\ref{3.2d}) we get the inequality
\begin{equation}\label{3.4d}
d(y, F(\overline x))\le K d(x,\overline x).
\end{equation}
\begin{definition}[calmness] {\rm It is said that $F: X\rightrightarrows Y$ is {\it calm} at $(\bar x,\bar y)$ if there are $\varepsilon>0$, $K\ge 0$ such that (\ref{3.4d}) holds if $d(x,\overline x)<\varepsilon, \ d(y,\overline y)<\varepsilon$ and $y\in F(x)$. The lower bound of all such $K$ will be called the {\it modulus of calmness} of $F$ at $(\bar x,\bar y)$. We shall denote it by ${\rm calm} F(\overline x|\overline y)$ (${\rm calm} F(\overline x)$ if $F$ is single-valued).}
\end{definition}
Again we can easily see that {\it uniform calmness}, that is calmness at every $(x,y)$ of the intersection of ${\rm Graph}~ F$ with a neighborhood of $(\bar x,\bar y)$ with the same $\varepsilon$ and $K$ for all such $(x,y)$, is equivalent to the Aubin property of $F$ near $(\bar x,\bar y)$.
\begin{definition}[subregularity]\label{subdef}{\rm Let $F: X\rightrightarrows Y$ and $\overline y\in F(\overline x)$. It is said that $F$ is {\it (metrically) subregular} at $(\bar x,\bar y)$ if there are $K>0$ and $\varepsilon>0$ such that
\begin{equation}\label{3.6d}
d(x,F^{-1}(\overline y))\le Kd(\overline y,F(x)) \quad {\rm if}\; d(x,\overline x)<\varepsilon.
\end{equation}
The lower bound of such $K$ is called the {\it rate} or {\it modulus of subregularity} of $F$ at $(\bar x,\bar y)$. It will be denoted ${\rm subreg} F(\overline x|\overline y)$. We say that $F$ is {\it strongly subregular} at $(\bar x,\bar y)$ if it is subregular at the point and $\overline y\not\in F(x)$ for $x\neq \overline x$ in a neighborhood of $\overline x$. }
\end{definition}
\begin{proposition}\label{contsub} The equalities
$$
{\rm subreg} F(\overline x|\overline y)= {\rm calm} F^{-1}(\overline y|\overline x),\quad {\rm contr} F(\overline x|\overline y)\cdot {\rm ress} F^{-1}(\overline y|\overline x)=1
$$
always hold. If, moreover, $F$ is strongly subregular at $(\bar x,\bar y)$, then
$$
{\rm contr} F(\overline x|\overline y)\cdot {\rm subreg} F(\overline x|\overline y)\ge 1.
$$
\end{proposition}
\begin{theorem}[slope criterion for calmness]\label{slcalm} Let $X$ and $Y$ be arbitrary metric spaces, let $F: X\rightrightarrows Y$ be a set-valued mapping with closed graph and let $(\bar x,\bar y)\in{\rm Graph}~ F$. Then
$$
{\rm calm} F(\overline x|\overline y)\ge\limsup_{y\to \overline y}|\nabla \psi_y|(\overline x),
$$
where, as earlier, $\psi_y(x)= d(y,F(x))$.
\end{theorem}
\proof Let $K>{\rm calm} F(\overline x|\overline y)$. Then there is an $\varepsilon >0$ such that (\ref{3.4d}) holds, provided $d(x,\overline x)<\varepsilon$, $d(y,\overline y)<\varepsilon$ and $y\in F(x)$. To prove the theorem, it is sufficient to show that $|\nabla\psi_y|(\overline x)\le K$ for all $y$ sufficiently close to $\overline y$. To this end, it is sufficient to verify that there is a $\delta>0$ such that the inequality
$$
d(y,F(\overline x))-d(y,F(x))\le Kd(x,\overline x)
$$
holds for all $x,y$ satisfying $d(x,\overline x)<\delta,\; d(y,\overline y)<\delta$. If $y\in F(x)$, then this inequality reduces to (\ref{3.4d}). Take a positive $\delta<\varepsilon/2$, and let $x$ and $y$ be such that $d(x,\overline x)<\delta,\; d(y,\overline y)<\delta$. If $d(y,F(x))\ge \delta$, then the inequality obviously holds. If $d(y,F(x))<\delta$, we can choose a $v\in F(x)$ such that $d(y,v)<\delta$. Then $d(v,\overline y)<\varepsilon$ and therefore $d(v,F(\overline x))\le Kd(x,\overline x)$. Thus
$$
\begin{array}{lcl}
d(y,F(\overline x))-d(y,F(x))&\le& d(y,v)+d(v,F(\overline x))-d(y,F(x))\\
\\
&\le& Kd(x,\overline x) + d(y,v)-d(y,F(x))
\end{array}
$$
and the desired inequality follows as $d(y,v)$ can be chosen arbitrarily close to $d(y,F(x))$.\endproof
\begin{theorem}[slope criterion for subregularity]\label{calmcrit} Assume that $X$ is a complete metric space. Let $F:\ X\rightrightarrows Y$ be a closed set-valued mapping and $(\bar x,\bar y)\in {\rm Graph}~ F$. Assume that the function $\psi_{\overline y}(x)=d(\overline y,F(x))$ is lower semicontinuous and there are $\varepsilon>0$ and $ r>0$ such that
$$
|\nabla\psi_{\overline y}|(x)=|\nabla d(\overline y,F(\cdot))|(x) \ge r,
$$
if $d(x,\overline x)<\varepsilon$ and $0<d(\overline y, F(x))<\varepsilon$. Then $F$ is subregular at $(\bar x,\bar y)$ with modulus of subregularity (and hence the modulus of calmness of $F^{-1}$ at $(\overline y|\overline x)$) not greater than $r^{-1}$.
\end{theorem}
\section{Metric theory. Perturbations and stability.}
In this section we concentrate on two fundamental questions: (a) what happens to the regularity (and subregularity) properties of $F$ if the mapping is slightly perturbed? (b) how does the set of solutions of the inclusion $y\in F(x,p)$ (where $F$ depends on a parameter $p$) depend on $(y,p)$?
\noindent The answer to the second question leads us to fairly general implicit function theorems. The key point in both cases is that we have to require a certain amount of Lipschitzness of perturbations to get the desired results.
\subsection{Stability under Lipschitz perturbation}
\begin{theorem}[stability under Lipschitz perturbation]\label{stablip} Let $X$, $Y$ be metric spaces, let $U\subset X$ and $V\subset Y$ be open sets. Consider a set-valued mapping $\Psi: X\times X\rightrightarrows Y$ with closed graph, assuming that either $X$ or the graph of $\Psi$ is complete. Let $F(x)=\Psi(x,x)$.
Suppose that
(a) for any $u\in U$ the mapping $\Psi(\cdot,u)$ is Milyutin regular on $(U,V)$ with modulus of surjection greater than $r$, that is for any $x\in U$, any $v\in \Psi(x,u)$ and any $y\in \overset{\circ}B(v,rt)\cap V$ with $t< d(x,X\backslash U)$ there is an $x'$ such that $d(x,x')\le r^{-1}d(y,v)$ and $ y\in \Psi(x',u);$
(b) for any $x\in U$ the mapping $\Psi(x,\cdot)$ is pseudo-Lipschitz on $(U,V)$ with modulus $\ell<r$, that is for any $u,w \in U$
$$
{\rm ex}(\Psi(x,u)\cap V,\Psi(x,w))< \ell d(u,w).
$$
\noindent Then $F(x)=\Psi(x,x)$ is Milyutin regular on $(U,V)$ with ${\rm sur}_{m}F(U|V)\ge r-\ell$.
\end{theorem}
\proof We shall consider only the case of complete ${\rm Graph}~\Psi$. According to the general regularity criterion of Theorem \ref{gencrit} all we have to show is that there is a $\xi>0$ such that, given $(x,v)\in {\rm Graph}~ F$ and $y$ such that $x\in U$, $y\in V$ and $0<d(y,v)< rm(x)$, there is another point $(x',v')\neq (x,v)$ in the graph of $F$ such that
\centerline{$d(y,v')\le d(y,v)-(r-\ell)\max\{d(x,x'),\xi d(v,v')\}$.}
\vskip 1mm
\noindent We have by (a): $B(v,rt)\cap V\subset \Psi(B(x,t),x)$ if $t<m(x)$. As $d(y,v)< rm(x)$, it follows that there is an $x'$ such that $y\in \Psi(x',x)$ and $d(x,x')\le r^{-1}d(y,v)$. Clearly, $x'\in U$. Therefore by (b) $d(y,\Psi(x',x'))< \ell d(x,x')$. This means that there is a $v'\in F(x')$ such that
$$
d(y,v')\le \ell d(x,x')\le \frac{\ell}{r} d(y,v).
$$
Take $\xi<(r+\ell)^{-1}$. Then
$$
\xi d(v,v')\le (r+\ell)^{-1}(d(v,y)+d(y,v'))\le (r+\ell)^{-1}\Big(1+\frac{\ell}{r}\Big)d(y,v) = \frac{1}{r}d(y,v).
$$
Thus $\max\{d(x,x'),\xi d(v,v')\}\le r^{-1}d(y,v)$ and we have
$$
d(y,v')\le (\ell/r)d(y,v)= d(y,v)- \frac{r-\ell}{r}d(y,v)\le d(y,v)-(r-\ell)\max\{d(x,x'),\xi d(v,v')\},
$$
as needed.\endproof
\begin{corollary}[Milyutin's perturbation theorem \cite{DMO}]\label{milt1} Let $X$ be a metric space, let $Y$ be a normed space and let $F: X\rightrightarrows Y$ and $G: X\rightrightarrows Y$. We assume that either the graphs of $F$ and $G$ are complete or $X$ is a complete space. Let further $U\subset X$ be an open set such that $F$ is Milyutin regular on $U$ with ${\rm sur} F(U)\ge r$ and $G$ is (Hausdorff) Lipschitz with ${\rm lip} G(U)\le\ell<r$. If either $F$ or $G$ is single-valued and continuous on $U$, then $F+G$ is Milyutin regular on $U$ and ${\rm sur} (F+G)(U)\ge r-\ell$.
\end{corollary}
\proof Apply the theorem to $\Psi(x,u)= F(x)+G(u)$.\endproof
To state a local version of the theorem, we need the following
\begin{definition}[uniform regularity]\label{unreg}{\rm Let $P$ be a topological space, let $F: P\times X\rightrightarrows Y$, let $\bar p\in P$, and let $(\bar x,\bar y)\in{\rm Graph}~ F(\bar p,\cdot)$. We shall say that $F$ is regular near $(\bar x,\bar y)$ {\it uniformly} in $p\in P$ near $\bar p$ if for any $r<{\rm sur} F(\bar p,\cdot)(\overline x|\overline y)$ there are $\varepsilon>0$ and a neighborhood $W\subset P$ of $\bar p$ such that for any $p\in W$ and any $x$ with $d(x,\overline x)<\varepsilon$
$$
B(F(p,x),rt)\cap B(\overline y,\varepsilon)\subset F(p,B(x,t)), \quad {\rm if}\; 0\le t<\varepsilon.
$$
}
\end{definition}
\begin{theorem}[stability under Lipschitz perturbations: local version]\label{stabliploc} Let $X$, $Y$, $\Psi: X\times X\rightrightarrows Y$ and $F(x)= \Psi(x,x)$ be as in Theorem \ref{stablip}, and let $(\bar x,\bar y)\in{\rm Graph}~ F$.
We assume that
(a) $\Psi(\cdot, u)$ is regular near $(\bar x,\bar y)$ uniformly in $u$ near $\overline x$;
(b) $\Psi(x,\cdot)$ is pseudo-Lipschitz near $(\bar x,\bar y)$ uniformly in $x$ near $\overline x$.
\noindent If ${\rm lip} \Psi(\overline x,\cdot)(\overline x|\overline y)<\ell<r<{\rm sur} \Psi(\cdot,\overline x)(\overline x|\overline y)$, then $F$ is regular near $(\bar x,\bar y)$ with modulus of surjection greater than $r-\ell$.
\end{theorem}
The last theorem in turn immediately implies Milyutin's theorem and its local versions, which correspond to $\Psi(x,y)= F(x)+g(y)$ with $g$ single-valued and Lipschitz. The following corollary of the theorems is straightforward.
\begin{theorem}[Milyutin's perturbation theorem - local version]\label{milt} Let $X$ be a metric space, let $Y$ be a normed space, and let $F: X\rightrightarrows Y$ and $G: X\rightrightarrows Y$. Given $\overline x\in{\rm dom}~ F\cap{\rm dom}~ G$, $\overline y\in F(\overline x),\; \overline z\in G(\overline x)$, we assume that $F$ is regular near $(\bar x,\bar y)$ with ${\rm sur} F(\overline x|\overline y)\ge r$ and $G$ has the Aubin property near $(\overline x,\overline z)$ with ${\rm lip} G(\overline x|\overline z)\le\ell$. If either $F$ or $G$ is single-valued and continuous on its domain and the graph of the other is complete in the product metric, then
$$
{\rm sur} (F+G)(\overline x|\overline y+\overline z)\ge r-\ell.
$$
\end{theorem}
\proof Set $\Psi(x,y)=F(x)+ G(y)$. It is an easy matter to check that the conditions of Theorem \ref{stabliploc} are valid. \endproof
As an immediate consequence of the last theorem we mention a stronger version of the Lyusternik-Graves theorem stating that its condition is not only sufficient but also necessary for regularity.
\begin{corollary}[Lyusternik-Graves from Milyutin]\label{neclg} Let $X$ and $Y$ be Banach spaces, and let $F:X\to Y$ be strictly differentiable at $\overline x$. Then ${\rm sur} F(\overline x)=C(F'(\overline x))$.
\end{corollary}
\proof Indeed, let $X,Y$ be Banach spaces, and let $F:X\to Y$ be strictly differentiable at $\overline x$. Set $g(x) = F(x) - F'(\overline x)(x-\overline x)$. As $F$ is strictly differentiable at $\overline x$, the Lipschitz constant of $g$ at $\overline x$ is zero, which by Milyutin's theorem means that the moduli of surjection of $F$ at $\overline x$ and of $F'(\overline x)$ coincide; it remains to recall that the modulus of surjection of a bounded linear operator equals its Banach constant. \endproof
We observe next that in Theorem \ref{milt} one of the mappings is assumed single-valued. This assumption is essential. With both mappings set-valued the result may fail, as the following example shows.
\begin{example}[cf. \cite{DR}]\label{contr1}{\rm Let $X=Y=I\!\!R$, $G(x)=\{x^2,-1\},\; F(x)=\{-2x,1\}$. It is easy to see that $F$ is regular near $(0,0)$ and $G$ is Lipschitz in the Hausdorff metric. On the other hand,
$$
\Phi(x)=(F+G)(x)=\{x^2-2x,x^2+1,-2x-1,0\}
$$
is not even regular near $(0,0)$. Indeed, $(\xi,0)\in{\rm Graph}~ \Phi$ for any $\xi$. However, if $\xi\neq 0$, then the $\Phi$-image of a sufficiently small neighborhood of $\xi$ does not contain points of a small neighborhood of zero other than zero itself. }
\end{example}
Perturbation analysis of regularity properties was initiated by Dmitruk-Milyutin-Osmolovski in \cite{DMO} with a proof of a global version of Corollary \ref{milt1} (attributed in \cite{DMO} to Milyutin) with both the mapping and the perturbation single-valued. The first perturbation result for set-valued mappings was probably proved by Ursescu \cite{CU96} (see also \cite{AI00}).
Observe that the global theorems are valid for Lipschitz set-valued perturbations as well. Until very recently the main attention was devoted to additive perturbations into a linear range space, especially in connection with implicit function theorems for generalized equations -- see e.g. \cite{AB08,DR}. Interest in non-additive Lipschitz set-valued perturbations of set-valued mappings appeared just a few years ago, partly in connection with fixed point and coincidence theorems \cite{AAGDO,DF12,AI11a,AI14}.
The Graves theorem can be viewed as a perturbation theorem for a {\it linear} regular operator. For that reason in some publications (e.g. \cite{DF11,DR}) this theorem is called the ``extended Lyusternik-Graves theorem''. I believe the name ``Milyutin theorem'' is adequate. It is quite obvious that Graves did not have in mind the perturbation issue and was interested only in the quality of approximation needed to get the result. (Tikhomirov and I had a similar idea when proving the metric regularity counterpart of the Graves theorem in \cite{IT} without any knowledge of Graves' paper.) And the fact that the Lipschitz property of the perturbation is the key to the estimate was explicitly emphasized in \cite{DMO}. Note also that even Corollary \ref{neclg} cannot be obtained from the Graves theorem.
Milyutin's theorem can also be viewed as a regularity result for a composition $\Phi(x,F(x))$, where $\Phi(x,y)= G(x)+y$. Theorems \ref{stablip} and \ref{stabliploc} can be applied to prove regularity of more general compositions, with arbitrary $\Phi$, just by taking $\Psi(x,u)=\Phi(x,F(u))$. However, a certain caution is needed to guarantee that such a $\Psi$ satisfies the required assumptions (as, say, in \cite{AI11a} where $\Phi(x,\cdot)$ is assumed to be an isometry or in \cite{DS12} where a certain ``composition stability'' is a priori assumed). Corollary \ref{neclg} was first stated in \cite{ALD96} with a direct proof, not using Milyutin's theorem.
\subsection{Strong regularity and metric implicit function theorem.}
Generally speaking, the essence of the inverse function theorem is already captured by the main Equivalence Theorem \ref{equiv3}. But in view of the very special role of the inverse and implicit function theorems in the classical theory, it seems appropriate to make the connection with the classical results more transparent.
So let $F: X\times P\rightrightarrows Y$. We shall view $P$ as a parameter space. Let $S(y,p)=\{ x\in X:\; y\in F(x,p)\}$ stand for the solution mapping of the inclusion $y\in F(x,p)$. In all theorems to follow we consider $Y\times P$ with an $\ell^1$-type distance
$$
d_{\alpha}^1((y,p),(y',p'))= d(y,y')+ \alpha d(p,p'),
$$
where $\alpha$ will be further determined by the Lipschitz moduli of the mappings involved.
\begin{theorem}[general proposition on implicit functions]\label{genimp} We assume that $\overline y\in F(\overline x,\bar p)$ and $F$ satisfies the following conditions: there are constants $K>0$, $\alpha >0$ and a sufficiently small $\varepsilon> 0$ such that the following relations hold:
\vskip 1mm
(a) $F(\cdot,p)$ is regular near $(\bar x,\bar y)$ uniformly in $p$ near $\bar p$ with the rate of metric regularity not greater than $K$;
\vskip 1mm
(b) $F(x,\cdot)$ is pseudo-Lipschitz near $(\bar p,\overline y)$ uniformly in $x$ near $\overline x$ with the Lipschitz modulus not greater than $\alpha$.
\vskip 1mm
\noindent Then $S$ has the Aubin property near $((\overline y,\bar p),\overline x)$ with the Lipschitz modulus with respect to the metric $d_{\alpha}^1$ in $Y\times P$ not greater than ${\rm reg} F(\cdot,\bar p)(\overline x|\overline y)$. In particular, if we are interested in solutions of the inclusion $\overline y\in F(x,p)$ (with fixed $\overline y$), then under the assumptions of the theorem the solution mapping $p\mapsto S_{\overline y}(p)$ has the Aubin property near $(\bar p,\overline x)$ with Lipschitz modulus not exceeding $K\alpha$.
\end{theorem}
\proof As $F(\overline x,\bar p)\neq\emptyset$, the uniform pseudo-Lipschitz property implies that $S(y,p)\neq\emptyset$ for $(y,p)$ close to $(\overline y,\bar p)$. If now $y\in F(x,p)$, then
$$
\begin{array}{lcl}
d(x,S(y',p'))&\le& K d(y',F(x,p'))\le K\big(d(y,y') + d(y,F(x,p'))\big)\\
&\le& K\big(d(y,y') +\alpha d(p,p')\big)= K d_{\alpha}^1((y,p),(y',p'))\\
&=&K\alpha(d(p,p')+\alpha^{-1}d(y,y')),
\end{array}
$$
and the proof is completed.\endproof
\begin{definition}\label{sregdef} {\rm Let $F: X\rightrightarrows Y$, and let $\overline y\in F(\overline x)$. We say that $F$ is {\it strongly (metrically) regular} near $(\bar x,\bar y)\in{\rm Graph}~ F$ if for some $\varepsilon>0,\ \delta >0$ and $K\in[0,\infty)$
\begin{equation}\label{2.4.1}
B(\overline y,\delta)\subset F(B(\overline x,\varepsilon)) \quad \&\quad d(x,u)\le Kd(y,F(x))
\end{equation}
\noindent whenever $x\in B(\overline x,\varepsilon)$, $u\in B(\overline x,\varepsilon)$ and $y\in F(u)\bigcap B(\overline y,\delta)$.
We shall also say, following \cite{DR}, that $F$ {\it has a single-valued localization near} $(\bar x,\bar y)$ if there are $\varepsilon> 0,\ \delta>0$ such that the restriction of the mapping $x\mapsto F(x)\cap B(\overline y,\delta)$ to $B(\overline x,\varepsilon)$ is single-valued. If, in addition, the restriction is Lipschitz continuous, we say that $F$ has a {\it Lipschitz localization} near $(\bar x,\bar y)$.}
\end{definition}
It is obvious from the definition that strong regularity implies regularity: the second relation in (\ref{2.4.1}) is clearly stronger than metric regularity.
\begin{proposition}[characterization of strong regularity]\label{sregelem} Let $F: X\rightrightarrows Y$ and $(\bar x,\bar y)\in{\rm Graph}~ F$. Then the following properties are equivalent:
\vskip 1mm
(a) $F$ is strongly regular near $(\bar x,\bar y)$;
\vskip 1mm
(b) there are $\varepsilon>0$ and $\delta>0$ such that $B(\overline y,\delta)\subset F(B(\overline x,\varepsilon))$ and
\begin{equation}\label{2.4.2}
F(x)\bigcap F(u)\bigcap B(\overline y,\delta)=\emptyset,
\end{equation}
\noindent whenever $u\neq x$ and both $x$ and $u$ belong to $B(\overline x,\varepsilon)$;
\vskip 1mm
(c) $F$ is regular near $(\bar x,\bar y)$ and $F^{-1}$ has a single-valued localization near $(\overline y,\overline x)$;
\vskip 1mm
(d) $F^{-1}$ has a Lipschitz localization $G(y)$ near $(\overline y,\overline x)$. In particular, $y\in F(G(y))$ for all $y$ in a neighborhood of $\overline y$.
\vskip 1mm
\noindent Moreover, if $F$ is strongly regular near $(\bar x,\bar y)$, then the lower bound of $K$ for which the second part of (\ref{2.4.1}) holds and the Lipschitz modulus of the Lipschitz localization $G$ of $F^{-1}$ at $\overline y$ coincide with ${\rm reg} F(\overline x|\overline y)$.
\end{proposition}
\begin{theorem}[persistence of strong regularity under Lipschitz perturbation]\label{strlip} We consider a set-valued mapping $\Phi:X\rightrightarrows Y$ with complete graph, and a (single-valued) mapping $G: X\times Y\to Z$. Let $\overline y\in\Phi(\overline x)$ and $\overline z=G(\overline x,\overline y)$. We assume that
(a) $\Phi$ is strongly regular near $(\bar x,\bar y)$ with ${\rm sur} \Phi(\overline x|\overline y)>r$;
(b) $G(x,\cdot)$ is an isometry from $Y$ onto $Z$ for any $x$ in a neighborhood of $\overline x$;
(c) $G(\cdot,y)$ is Lipschitz with constant $\ell<r$ in a neighborhood of $\overline x$, the same for all $y$ in a neighborhood of $\overline y$.
\noindent Set $F(x)=G(x,\Phi(x))$. Then $F$ is strongly regular near $(\overline x,\overline z)$.
In particular, if $Y$ is a normed space, $\Phi$ is strongly regular near $(\bar x,\bar y)\in{\rm Graph}~ \Phi$ and $G(x,y)= g(x)+ y$ with ${\rm lip} g(\overline x)<{\rm sur} \Phi(\overline x|\overline y)$, then $F(x) = \Phi(x)+g(x)$ is strongly regular near $(\overline x,\overline y+g(\overline x))$.
\end{theorem}
\begin{remark}\label{remstrong}{\rm It is to be observed in connection with the last theorem that strong regularity is not preserved under set-valued perturbations like those in Theorem \ref{stablip}. Here is a simple example:
$$
\Psi (x,u)= x+ u^2[-1,1] \; (x,u\in I\!\!R),\quad \overline x = 0.
$$
Clearly $\Psi(\cdot,0)$ is strongly regular, but $F(x)= x+x^2[-1,1]$ is of course regular but not strongly regular. It follows that strong regularity is somewhat less robust compared with standard regularity.}
\end{remark}
\begin{theorem}[implicit function theorem - metric version]\label{mimpl} Assume in addition to the assumptions of Theorem \ref{genimp} that
\begin{equation}\label{2.4.3}
F(x,p)\cap F(x',p)\cap\overset{\circ}B (\overline y,\varepsilon)=\emptyset\quad\forall \; x,x'\in\overset{\circ}B(\overline x,\varepsilon),\; x\neq x',\; p\in\overset{\circ}B(\bar p,\varepsilon).
\end{equation}
Then the solution map $S$ has a Lipschitz localization $G$ near $((\bar p,\overline y),\overline x)$ with ${\rm lip} G(\bar p,\overline y)\le K$ (with respect to the $d_{\alpha}^1$-metric in $Y\times P$). In particular, $y\in F(G(p,y),p)$ for all $(p,y)$ in a neighborhood of $(\bar p,\overline y)$.
\end{theorem}
The conclusion is already very similar to the conclusion of the classical implicit function theorem. Indeed, it contains precisely the same information about the solution, namely its uniqueness in a neighborhood and its Lipschitz continuity (replacing differentiability), with the Equivalence Theorem \ref{equiv3} providing, along with the concluding part of Proposition \ref{sregelem}, an estimate for the Lipschitz constant of the solution map (replacing the formula for the partial derivative in the classical theorem). Moreover, the proof below is based on the same main idea as the proof of the classical theorem, say the second proof in \cite{DR}.
\proof Consider the set-valued mapping $\Phi$ from $X\times P$ into $P\times Y$ defined by
$$
\Phi(x,p)=\{p\}\times F(x,p).
$$
Then $(\bar p,\overline y)\in\Phi(\overline x,\bar p)$. We claim that $\Phi$ is strongly regular near $((\overline x,\bar p),(\bar p,\overline y))$. Indeed, we have for $p,\ y$ sufficiently close to $\bar p,\overline y$
\begin{equation}\label{2.4.4}
\Phi^{-1}(p,y)=\{ p\}\times S(p,y).
\end{equation}
By Theorem \ref{genimp} $S$ has the Aubin property at $((\bar p,\overline y),\overline x)$.
This obviously implies that $\Phi^{-1}$ has the Aubin property at $((\bar p,\overline y),(\overline x,\bar p))$. The latter means that $\Phi$ is regular at $((\overline x,\bar p),(\bar p,\overline y))$. On the other hand, $(p,y)\in\Phi(x,p)\cap\Phi(x',p')$ means that $p=p'$ and $y\in F(x,p)\cap F(x',p)$, so that by (\ref{2.4.3}) this may happen only if $x=x'$. This proves the claim. By Proposition \ref{sregelem} there is a Lipschitz localization of $\Phi^{-1}$ defined in a neighborhood of $(\bar p,\overline y)$. By (\ref{2.4.4}) this localization has the form $(p,G(p,y))$, where $G(p,y)\in S(p,y)$. Thus $G$ is a Lipschitz localization of $S$ and by Theorem \ref{genimp} its Lipschitz constant is not greater than $K$. \endproof
\begin{theorem}[metric infinitesimal implicit function theorem]\label{impinf} Let $\overline y\in F(\overline x,\bar p)$, and assume that there are $\xi>0,\ r>0,\ \ell>0, \ \varepsilon >0$ such that for all $x,y,p,v$ satisfying
$$
d(x,\overline x)<\varepsilon,\; d(y,\overline y)<\varepsilon,\; d(p,\bar p)<\varepsilon,
$$
either ${\rm Graph}~ F$ is complete and
(a$_1$) \ $|\nabla_{\xi}\varphi_y(\cdot,p)|(x,v)>r$ \quad {\rm if} $v\in F(x,p)$ and $ d(y,v)>0$
\noindent or $X$ is a complete space and
(a$_2$) \ $|\nabla\overline{\psi}_y(\cdot,p)|(x)> r$ \quad {\rm if} $\; \overline{\psi}_y(x,p)>0$
\vskip 1mm
\noindent holds along with
\vskip 1mm
(b) \ $\psi_y(x,p) \le\ell d(p,p') $, if $y\in F(x,p')$ for some $p'\in\overset{\circ}B (\bar p,\varepsilon)$.
\vskip 1mm
\noindent Then $S$ has the Aubin property near $(\overline y,\bar p)$ with ${\rm lip} S((\overline y,\bar p)|\overline x) \le r^{-1}$ if $Y\times P$ is considered with the distance $d_{\ell}^1((y,p),(y',p'))= \ell d(p,p')+ d(y,y')$.
\end{theorem}
The proof of the theorem consists in verifying the assumptions of Theorem \ref{genimp} for all $(x,y,p)$ in a neighborhood of $(\overline x,\bar p,\overline y)$ and $p'$ close to $\bar p$. The next theorem is an infinitesimal counterpart of Theorem \ref{mimpl}.
\begin{theorem}\label{simpinf} In addition to the conditions of Theorem \ref{impinf} we assume that
\vskip 1mm
(c) \ $|\nabla\psi_y(\cdot,p)|(x) >0$ \quad if $y\in F(x',p)$ for some $x'\neq x$.
\vskip 1mm
\noindent Then $S$ has a Lipschitz localization $G$ in a neighborhood of $(\bar p,\overline y)$ with $G(\bar p,\overline y)=\overline x$ and the Lipschitz constant (with respect to the $d_{\ell}^1$-metric in $P\times Y$) not exceeding $r^{-1}$.
\end{theorem}
\proof Indeed, it follows from (c) that $y\in F(x',p)$ for some $x'\neq x$ implies $y\not\in F(x,p)$, that is, $F(x,p)\cap F(x',p)\cap\overset{\circ}B(\overline y,\varepsilon)=\emptyset$ for distinct $x,\ x'$ close to $\overline x$ and $p$ close to $\bar p$, and the reference to Theorems \ref{impinf} and \ref{mimpl} completes the proof.\endproof
There have been numerous publications extending, one way or another, the implicit function theorem to settings of variational analysis, see e.g. \cite{AB08,DR,BDG01,AI00,LZ99,NT04,NTT12}. Most of them deal with Banach spaces and/or specific classes of mappings, e.g. associated with generalized equations. It should also be said that some results named ``implicit function theorem'' are rather parametric regularity or subregularity theorems giving uniform (w.r.t. the parameter) estimates for regularity rates of a mapping depending on a parameter. The concept of strong regularity was introduced by Robinson in \cite{SMR80}. A number of characterizations of strong regularity can be found in \cite{DR}.
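In general, of course, regularity is strictly weaker than strong regularity; the following elementary one-dimensional illustration is given here only for orientation. For $F(x)=\{x,-x\}$ on $X=Y=I\!\!R$ we have $F^{-1}(y)=\{y,-y\}$ and
$$
d(x,F^{-1}(y))=\min\{|x-y|,|x+y|\}= d(y,F(x)),
$$
so $F$ is (metrically) regular near $(0,0)$ with ${\rm reg} F(0|0)=1$, whereas $F^{-1}$ has no single-valued localization near $(0,0)$ and hence $F$ is not strongly regular there.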
It is appropriate to mention (especially because we do not discuss these questions in the paper) that there are certain important classes of mappings for which regularity and strong regularity are equivalent. Such are monotone operators, in particular subdifferentials of convex functions, and Kojima mappings associated with constrained optimization \cite{DR,KK02}.
\section{Banach space theory.}
Needless to say, the vast majority of applications of the theory of metric regularity relate to problems naturally stated in Banach spaces. Variational analysis and metric regularity theory in Banach spaces are distinguished by (a) the existence of approximation mechanisms, both primal and dual, using homogeneous mappings (graphical derivatives and coderivatives) in the case of set-valued mappings or directional subderivatives and subdifferentials for functions; (b) the possibility of separable reduction for metric regularity, which allows us to reduce much of the analysis to mappings between separable spaces; (c) the existence of a class of linear perturbations, most natural and interesting in many cases.
\subsection{Techniques of variational analysis in Banach spaces.}
\subsubsection{Homogeneous set-valued mappings.}
\begin{definition}\label{normsvm} {\rm A set-valued mapping ${\mathcal H}: X\rightrightarrows Y$ is {\it homogeneous} if its graph is a pointed cone. The latter means that $0\in {\mathcal H}(0)$. The mapping
$$
{\mathcal H}^*(y^*)=\{ x^*:\; \langle x^*,x\rangle-\langle y^*,y\rangle \le 0,\;\forall\; (x,y)\in{\rm Graph}~ {\mathcal H}\}
$$
is called {\it adjoint} or {\it dual} to ${\mathcal H}$ (or the {\it dual convex process} as it is often called for the reasons to be explained in the next chapter). It is an easy matter to see that
\vskip 1mm
\centerline{${\rm Graph}~ {\mathcal H}^*=\{(y^*,x^*):\; (x^*,-y^*)\in ({\rm Graph}~ {\mathcal H})^{\circ}\}.$}
\vskip 1mm
With every homogeneous mapping ${\mathcal H}$ we associate the {\it upper norm}
\vskip 1mm
\centerline{$\| {\mathcal H}\|_+= \sup\{ \| y\|:\; y\in {\mathcal H}(x),\; x\in{\rm dom}~{\mathcal H},\; \| x\|\le 1\},$}
and the {\it lower norm}
\centerline{$\| {\mathcal H}\|_-=\sup_{x\in B\cap{\rm dom}~{\mathcal H}}\inf \{\| y\|:\; y\in {\mathcal H}(x)\}=\sup_{x\in B\cap{\rm dom}~{\mathcal H}} d(0,{\mathcal H}(x))$.}
\vskip 2mm
\noindent For single-valued mappings with ${\rm dom}~{\mathcal H}= X$ both quantities coincide and we may speak about the {\it norm} of ${\mathcal H}$. The mapping ${\mathcal H}$ is {\it bounded} if $\| {\mathcal H}\|_+<\infty$. This obviously means that there is an $r>0$ such that ${\mathcal H}(x)\subset r\| x\|B_Y$ for all $x$.
Very often however, in the context of regularity estimates, it is more convenient to deal with different quantities defined by way of the norms as follows:
$$
C({\mathcal H}) = \| {\mathcal H}^{-1}\|_-^{-1}\quad {\rm and}\quad C^*({\mathcal H})= \| {\mathcal H}^{-1}\|_+^{-1}.
$$
The quantities are respectively called the {\it Banach constant} and the {\it dual Banach constant} of ${\mathcal H}$. To justify the terminology, note that for linear operators they coincide with the Banach constants introduced for the latter in the first section.}
\end{definition}
The proposition below, containing an important geometric interpretation of the concepts, shows that the Banach constants are actually very natural objects.
\begin{proposition}[cf.
Proposition \ref{calca}]\label{dualb} For any homogeneous ${\mathcal H}: X\rightrightarrows Y$
$$
\begin{array}{l}
C({\mathcal H})={\rm contr}{\mathcal H}(0|0)=\sup\{ r\ge 0:\; rB_Y\subset {\mathcal H}(B_X)\};\\
\\
C^*({\mathcal H})= ({\rm subreg}{\mathcal H}(0|0))^{-1}= \inf \{ \| y\|:\; y\in {\mathcal H}(x),\; \| x\|= 1\} =\displaystyle\inf_{\| x\|=1} d(0,{\mathcal H}(x)).
\end{array}
$$
\end{proposition}
\proof The equality ${\rm contr}{\mathcal H}(0|0)=\sup\{ r\ge 0:\; rB_Y\subset {\mathcal H}(B_X)\}$ follows from homogeneity of ${\mathcal H}$. On the other hand, saying that $rB_Y\subset {\mathcal H}(B_X)$ is the same as saying that for any $y$ with $\| y\|=r$ there is an $x$ with $\| x\|\le 1$ such that $x\in{\mathcal H}^{-1}(y)$, which means that $\| {\mathcal H}^{-1}\|_-\le r^{-1}$ and therefore $C({\mathcal H})\ge {\rm contr}{\mathcal H}(0|0) $. Likewise, $\|{\mathcal H}^{-1}\|_- < r^{-1}$ means that for any $y$ with $\| y\|=1$ there is an $x$ with $\| x\|\le r^{-1}$ such that $y\in{\mathcal H}(x)$, from which we get that $rB_Y\subset {\mathcal H}(B_X)$ and the first equality follows.
To prove the second equality, consider first the case $C^*({\mathcal H})<\infty$. Then
$$
\begin{array}{lcl}
C^*({\mathcal H})&=&\displaystyle\inf_{\| y\|=1}\inf\{\| x\|^{-1}: \; x\in {\mathcal H}^{-1}(y)\}\\
&=&\inf\{\| y\|:\; y\in {\mathcal H}(x), \| x\| = 1\}.
\end{array}
$$
If $C^*({\mathcal H})=\infty$, and therefore $\| {\mathcal H}^{-1}\|_+=0$, then for any $y$ the set ${\mathcal H}^{-1}(y)$ is either empty (recall our convention: $\inf \emptyset =\infty, \; \sup\emptyset =0$) or contains only the zero vector. Hence the domain of ${\mathcal H}$ is a singleton containing the origin. It follows that $\inf\{ \| y\|:\; y\in {\mathcal H}(x),\; \| x\|=1\}=\inf\emptyset=\infty$. This proves the equality $C^*({\mathcal H})=\inf\{ \| y\|:\; y\in {\mathcal H}(x),\; \| x\|=1\}$.
Consider now the case $C^*({\mathcal H})>0$. Then $\| {\mathcal H}^{-1}\|_+<\infty$ and consequently, ${\mathcal H}^{-1}(0)=\{ 0\}$. It follows that $d(x,{\mathcal H}^{-1}(0))= \| x\|$. Setting $K=(C^*({\mathcal H}))^{-1}$, we get for any $x$ with $\| x\|=1$:
$$
Kd(0,{\mathcal H}(x))\ge 1= \| x\|= d(x,{\mathcal H}^{-1}(0))
$$
and on the other hand for any $K'<K$ we can find an $x$ with $\| x\|=1$ such that $K'd(0,{\mathcal H}(x))<1$. It follows that $K= {\rm subreg} {\mathcal H}(0|0)$. The case $C^*({\mathcal H})=0$ is treated as above. \endproof
\begin{corollary}\label{constin} For any homogeneous mappings ${\mathcal H}: X\rightrightarrows Y$ and ${\mathcal E}: Y\rightrightarrows Z$
$$
C({\mathcal E}\circ{\mathcal H})\ge C({\mathcal E})\cdot C({\mathcal H}).
$$
\end{corollary}
\proof Take $\rho< C({\mathcal H})$. Then $\rho B_Y\subset {\mathcal H}(B_X)$ and therefore
$$
\begin{array}{lcl}
C({\mathcal E}\circ {\mathcal H})&=& \sup \{ r\ge 0: \; r B_Z\subset ({\mathcal E}\circ{\mathcal H})(B_X)\}\\
&\ge& \sup \{ r\ge 0: \; r B_Z\subset {\mathcal E}(\rho B_Y)\} =\rho C({\mathcal E})
\end{array}
$$
and the result follows. \endproof
We shall see that the tangential (primal) regularity estimates are stated in terms of Banach constants of contingent derivatives of the mapping while the subdifferential estimates need dual Banach constants of coderivatives. The following theorem is the first indicator that (surprisingly!) the dual estimates can be better.
\begin{theorem}[basic inequality for Banach constants]\label{bancon} For any homogeneous set-valued mapping ${\mathcal H}: X\rightrightarrows Y$
$$
C^*({\mathcal H}^*)\ge C({\mathcal H})\ge C^*({\mathcal H}).
$$
\end{theorem}
\noindent Note that for linear operators we have equality -- see Proposition \ref{calca}. In the next section we shall see that the equality also holds for convex processes and some other set-valued mappings.
\proof The right inequality is immediate from the definition. If $C({\mathcal H})=\infty$, that is $\| {\mathcal H}^{-1}\|_-=0$, then for any $y\in Y$ there is a sequence $(x_n)\subset X$ norm converging to zero and such that $y\in {\mathcal H}(x_n)$. It is easy to see that in this case
\begin{equation}\label{5.1}
{\mathcal H}^*(y^*)=\left\{\begin{array}{lcl}\emptyset,&{\rm if}& y^*\neq 0;\\ X^*,&{\rm if}& y^*=0, \end{array}\right.
\end{equation}
that is $({\mathcal H}^*)^{-1}\equiv \{0\}$, $\| ({\mathcal H}^*)^{-1}\|_+=0$ and hence $C^*({\mathcal H}^*) =\infty$.
Let now $\infty> C({\mathcal H})= r >0$. Set $\lambda = r^{-1}$. Then $\| {\mathcal H}^{-1}\|_-=\lambda$ so that for any $y$ with $\|y\|=1$ and any $\varepsilon>0$ there is an $x$ such that $\| x\|\le \lambda+\varepsilon$ and $y\in {\mathcal H}(x)$. Let now $x^*\in{\mathcal H}^*(y^*)$, that is $\langle x^*,x\rangle -\langle y^*,y\rangle\le 0$ if $y\in {\mathcal H}(x)$. Take $y\in S_Y$ such that $\langle y^*,y\rangle\le (-1+\varepsilon)\| y^*\|$ and choose an $x\in{\mathcal H}^{-1}(y)$ with $\| x\|\le \lambda +\varepsilon$. Then
$$
-(\lambda+\varepsilon)\| x^*\|\le \langle x^*,x\rangle\le \langle y^*,y\rangle \le (-1+\varepsilon)\|y^*\|
$$
that is, $(\lambda +\varepsilon)\| x^*\|\ge (1-\varepsilon)\| y^*\|$. As $\varepsilon$ can be chosen arbitrarily close to zero this implies that $\| ({\mathcal H}^*)^{-1}\|_+\le r^{-1}$ and therefore $C^*({\mathcal H}^*)\ge r= C({\mathcal H})$. \endproof
The following property plays an essential role in future discussions.
\begin{definition}[non-singularity]\label{sing}{\rm We say that ${\mathcal H}$ is {\it non-singular} if $C^*({\mathcal H})>0$. Otherwise we shall call ${\mathcal H}$ {\it singular}.}
\end{definition}
We conclude the subsection by showing that regularity of a homogeneous mapping near $(0,0)$ implies its global regularity.
\begin{proposition}\label{globa} Let $X$ and $Y$ be two Banach spaces, and let $F: X\rightrightarrows Y$ be a homogeneous set-valued mapping. If $F$ is regular near $(0,0)$, then it is globally regular with the same rates.
\end{proposition}
\proof By the assumption, there are $K>0$ and $\varepsilon >0$ such that $d(x,F^{-1}(y))\le Kd(y,F(x))$ if $\max\{\| x\|,\| y\| \}<\varepsilon$. Let now $(x,y)\in X\times Y$ be an arbitrary pair, set $m= \max\{\| x\|,\| y\| \}>0$ (the case $m=0$ being trivial), and let $0<\mu<\varepsilon/m$. Then
$$
\mu d(x,F^{-1}(y))=d(\mu x,F^{-1}(\mu y))\le Kd(\mu y, F(\mu x))=\mu Kd(y,F(x)),
$$
whence $d(x,F^{-1}(y))\le Kd(y,F(x))$.\endproof
The norms for homogeneous multifunctions were first introduced by Rockafellar \cite{RTR67} and Robinson \cite{SMR72} in the context of convex processes (lower norm), and then by Ioffe \cite{AI81} (upper norm for arbitrary homogeneous maps) and Borwein \cite{JMB83} (upper norm and duality for convex processes -- see also \cite{JMB86a,BoL,DR}). The dual Banach constant $C^*$ was also introduced in \cite{AI81}. The meaning of the primal constant has undergone some evolution since it first appeared in \cite{AI81}. The $C({\mathcal H})$ introduced here is reciprocal to that in \cite{AI87}, mainly because the connection of Banach constants with the norms of homogeneous mappings makes the present definition more natural.
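An elementary one-dimensional example may help to digest the definitions (it is given only as an illustration and is not used later). Let $X=Y=I\!\!R$ and ${\mathcal H}(x)=\{y:\ y\ge x\}$, so that ${\rm Graph}~{\mathcal H}$ is a closed convex cone containing the origin. Then ${\mathcal H}(B_X)=[-1,\infty)$, whence $C({\mathcal H})=1$, while $\inf\{|y|:\ y\in{\mathcal H}(x),\ |x|=1\}=0$ (take $x=-1$), that is $C^*({\mathcal H})=0$ and ${\mathcal H}$ is singular. A direct computation gives ${\mathcal H}^*(y^*)=\{y^*\}$ if $y^*\ge 0$ and ${\mathcal H}^*(y^*)=\emptyset$ otherwise, so that $C^*({\mathcal H}^*)=1$. Thus in this example the left inequality of Theorem \ref{bancon} holds as an equality while the right one is strict.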
\subsubsection{Tangent cones and contingent derivatives}
Given a set $Q\subset X$ and an $\overline x\in Q$, the {\it tangent (or contingent) cone} $T(Q,\overline x)$ is the collection of $h\in X$ with the following property: there are sequences $t_k\searrow 0$ and $h_k\to h$ such that $\overline x+t_kh_k\in Q$ for all $k$. If $F: X\rightrightarrows Y$ and $(\bar x,\bar y)\in{\rm Graph}~ F$, then the {\it contingent} or {\it graphical derivative} of $F$ at $(\bar x,\bar y)$ is the set-valued mapping
$$
X\ni h\mapsto DF(\bar x,\bar y)(h)=\{v\in Y:\; (h,v)\in T({\rm Graph}~ F,(\bar x,\bar y))\}.
$$
Let now $f$ be a function on $X$ finite at $\overline x$. The function
$$
h\mapsto f^-(\overline x;h)=\liminf_{(t,h')\to (0^+,h)}t^{-1}(f(\overline x+th')-f(\overline x))
$$
is called the {\it Dini-Hadamard lower directional derivative} of $f$ at $\overline x$. This function is either lsc and equal to zero at the origin or identically equal to $-\infty$. The latter of course cannot happen if $f$ is Lipschitz near $\overline x$. The connection between the two concepts is very simple: $h\in T(Q,\overline x)$ if and only if $d^-(\cdot,Q)(\overline x;h)=0$, and $T({\rm epi}~ f,(\overline x,f(\overline x)))={\rm epi}~ f^-(\overline x;\cdot)$, that is, $(h,\alpha)\in T({\rm epi}~ f,(\overline x,f(\overline x)))$ if and only if $\alpha\ge f^-(\overline x;h)$.
The contingent tangent cone and contingent derivative were introduced by Aubin in \cite{JPA81} (see \cite{AF} for detailed comments concerning the genesis of the concept).
\subsubsection{Subdifferentials, normal cones and coderivatives.}
{\it From now on, unless the opposite is explicitly said, all spaces are assumed separable.} Thanks to the separable reduction theorem to be proved in the next subsection, such a restriction is justifiable in the context of regularity theory. On the other hand, it provides for a substantial economy of efforts, especially in the non-reflexive (or to be precise, non-Asplund) case.
The subdifferential is among the most fundamental concepts in local variational analysis. Essential for infinite dimensional variational analysis are five types of subdifferentials: the Fr\'echet subdifferential, the Dini-Hadamard subdifferential (the two are sometimes called ``elementary subdifferentials''), the limiting Fr\'echet subdifferential, the $G$-subdifferential and the generalized gradient. In Hilbert spaces there is one more convenient construction, the ``proximal subdifferential''. We shall introduce it in \S~7. So let $f$ be a function on $X$ which is finite at $x$. The sets
\vskip 2mm
\centerline{ $\partial_H f(x) =\{ x^*\in X^*:\; \langle x^*,h\rangle\le f^-(x;h),\; \forall h\in X\}$}
\noindent and
\centerline{$\partial_Ff(x)=\{ x^*\in X^*:\; \langle x^*,h\rangle\le f(x+h)-f(x) + o(\| h\|) \}$}
\vskip 2mm
\noindent are called respectively the {\it Dini-Hadamard} and {\it Fr\'echet subdifferential} of $f$ at $x$. The corresponding {\it limiting} subdifferentials at $x$ (we denote them for the time being $\partial_{LH}$ and $\partial_{LF}$) are defined as the collection of $x^*$ such that there is a sequence $(x_n,x_n^*)$ with $x_n^*\in\partial_Hf(x_n)$ (respectively $x_n^*\in\partial_Ff(x_n)$), $x_n$ norm converging to $x$ and $x_n^*$ weak$^*$-converging to $x^*$. The essential point in the definition of the limiting subdifferentials is that only {\it sequential} weak$^*$-limits of elements of elementary subdifferentials are considered.
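A standard one-dimensional example, mentioned only for illustration, shows how different the elementary and the limiting constructions may be. For $f(x)=-|x|$ on $X=I\!\!R$ we have $\partial_Hf(0)=\partial_Ff(0)=\emptyset$ (no linear function minorizes $-|x|$ near zero up to $o(|x|)$), whereas $\partial_Ff(x)=\partial_Hf(x)=\{-{\rm sign}\, x\}$ for $x\neq 0$, so that the limiting subdifferentials at zero equal $\{-1,1\}$; the generalized gradient of Clarke, to be defined below, is $\partial_Cf(0)=[-1,1]$.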
The limiting Dini-Hadamard subdifferential is basically an intermediate product in the definition of the $G$-subdifferential. Given a set $Q\subset X$, the {\it $G$-normal cone} to $Q$ at $x\in Q$ is $$ N_G(Q,x) = \bigcup_{\lambda\ge 0}\lambda\partial_{LH}d(\cdot,Q)(x). $$ The {\it G-subdifferential} of $f$ at $x$ is defined as follows $$ \partial_Gf(x)=\{x^*:\; (x^*,-1)\in N_G({\rm epi}~ f,(x,f(x))) \}. $$ The cone $N_C(Q,x)={\rm cl}({\rm conv}~ N_G(Q,x))$ is {\it Clarke's normal cone} to $Q$ at $x$ and the set $$ \partial_Cf(x) = \{x^*:\; (x^*,-1)\in N_C({\rm epi}~ f,(x,f(x)))\} $$ is the {\it subdifferential} or {\it generalized gradient of Clarke}. \begin{proposition}[some basic properties of subdifferentials]\label{bprop} The following statements hold true: (a) for any lsc function $\partial_Hf(x)\neq\emptyset$ on a dense subset of ${\rm dom}~ f$; (b) the same is true for $\partial_F$ if there is a Fr\'echet differentiable (off the origin) norm in $X$ (that is if $X$ is an Asplund space); (c) if $f$ is Lipschitz near $x$, then $\partial_Gf(x)\neq\emptyset$ and the set-valued mapping $x\mapsto \partial_G f(x)$ is compact-valued (see (f) below) and upper semicontinuous; (d) if $f$ is continuously (or strictly) differentiable at $x$, then $\partial f(x)=\{f'(x)\}$ for any of the mentioned subdifferentials; (e) if $f$ is convex, then all mentioned subdifferentials coincide with the subdifferential in the sense of convex analysis: $\partial f(x)=\{x^*:\;f(x+h)-f(x)\ge\langle x^*,h\rangle,\; \forall\; h \}$; (f) if $f$ is Lipschitz near $x$ with Lipschitz constant $K$, then $\| x^*\|\le K$ for any $x^*\in \partial f(x)$ and any of the mentioned subdifferentials; (g) if $f$ is Lipschitz near $x$, then $\partial_{LH}f(x)=\partial_Gf(x)$ and $\partial_Cf(x) = {\rm cl}({\rm conv}~\partial_Gf(x))$; (h) if $f$ is lsc and $X$ is an Asplund space, then $\partial_{LF}f(x)=\partial_Gf(x)$ for any $x$; (i) if $f(x,y)=\varphi(x)+\psi(y)$, then $\partial f(x,y)= \partial \varphi(x)\times\partial\psi(y)$, where $\partial$ is any of $\partial_F,\ \partial_H, \partial_G$ (but not $\partial_C$). \end{proposition} \begin{remark}{\rm It should be observed in connection with the proposition that $\bullet$ \ $\partial_{LH}$ has little interest for non-Lipschitz functions: it may be too big to contain any useful information about the function. $\bullet$ \ If $X$ is not Asplund, $\partial_{LF}f(x)$ may be identically empty even for a very simple Lipschitz function (e.g. $-\| x\|$ in $C[0,1]$). In the terminology of subdifferential calculus this means that $\partial_F$ {\it cannot be trusted} on non-Asplund spaces. } \end{remark} We do not need here a formal definition for the concept of a subdifferential trusted on a space or a class of spaces (see e.g. \cite{AI12a}). Loosely speaking this means that a version of the fuzzy variational principle is valid for the subdifferentials of lsc functions on the space. Just note that the Fr\'echet subdifferential is trusted on Asplund spaces and only on them, the Dini-Hadamard subdifferential is trusted on G\^ateaux smooth spaces and the G-subdifferential and the generalized gradient are trusted on all Banach spaces. There is one more important property of subdifferentials that has not been mentioned in the proposition. This property is called {\it tightness} and it characterizes a reasonable quality of lower approximation provided by the subdifferential (see \cite{AI12a}). It turns out that the Dini-Hadamard, Fr\'echet and $G$-subdifferentials are tight but Clarke's generalized gradient is not.
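For the function $f(x)=-|x|$ on $I\!\!R$ considered above this difference is already visible: $\partial_Gf(0)=\{-1,1\}$ faithfully reflects the local behavior of $f$ (descent at unit rate in both directions), whereas $\partial_Cf(0)={\rm cl}({\rm conv}~\partial_Gf(0))=[-1,1]$ contains $0$, although $0$ is a strict local maximizer of $f$.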
This explains the relatively small role played by the generalized gradient in the regularity theory. On the other hand, the generalized gradient is typically much easier to compute and work with. Moreover, convexity of the generalized gradient makes it the only subdifferential that can be used in the critical point theory associated with the concept of ``weak slope'', not considered here. We do not need here the general theory of subdifferentials. We just mention, in connection with property (h) in Proposition \ref{bprop}, that in separable spaces the $G$-subdifferential is the unique subdifferential having a certain collection of properties (including tightness, (c), (e), (f) and ``exact calculus'' as defined in the proposition below). It is to be again emphasized that we assume all spaces separable. \begin{proposition}[basic calculus rules]\label{basrul} Let $f(x) = f_1(x)+f_2(x)$, where both functions are lsc and one of them is Lipschitz near $\overline x$. Then the following statements are true: 1. {\rm Fuzzy variational principle:} If $f$ attains a local minimum at $\overline x$, then there are sequences $(x_{in})$ and $(x_{in}^*)$, $i=1,2$ such that $x_{in}\to \overline x$, $x_{in}^*\in\partial_H f_{i}(x_{in})$ and $\| x_{1n}^*+ x_{2n}^*\|\to 0$; 2. {\rm Fuzzy sum rule}: if $X$ is Asplund and $x^*\in\partial_Ff(\overline x)$, then there are sequences $(x_{in})$ and $(x_{in}^*)$, $i=1,2$ such that $x_{in}\to \overline x$, $x_{in}^*\in\partial_H f_{i}(x_{in})$ and $\| x_{1n}^*+ x_{2n}^*-x^*\|\to 0$. 3. {\rm Exact sum rule}: $\partial_Gf(\overline x)\subset \partial_G f_1(\overline x)+\partial_G f_2(\overline x)$. \end{proposition} Let $Q\subset X$ and $x\in Q$. Given a subdifferential $\partial$, the set $$ N(Q,x)=\partial i_Q(x), $$ always a cone, is called the {\it normal cone} to $Q$ at $x$ {\it associated with $\partial$}. It is an easy matter to see that in case of $\partial_G$ this definition coincides with the one given earlier. For normal cones associated with $\partial_H$ and $\partial_F$ we use notation $N_H$ and $N_F$. Let $F: X\rightrightarrows Y$ and $\overline y\in F(\overline x)$. Given a subdifferential $\partial$ and the normal cone associated with $\partial$, the set-valued mapping $$ y^*\mapsto D^*F(\bar x,\bar y)(y^*)=\{x^*:\; (x^*,-y^*)\in N({\rm Graph}~ F,(\bar x,\bar y)) \} $$ is called the {\it coderivative} of $F$ at $(\bar x,\bar y)$ {\it associated with $\partial$}. We use notation $D_H^*,\ D_F^*$ and $D_G^*$ for the coderivatives associated with the mentioned subdifferentials. There are a number of monographs and survey articles in which subdifferentials are studied at various levels of generality: \cite{RW} (finite dimensional theory), \cite{BZ,BM,JPP,WS} (Asplund spaces), \cite{AI12a,JPP} (general Banach spaces), \cite{FHC83,CLSW} (generalized gradients). Concerning the sources of the main concepts: Clarke's subdifferential was the first to appear -- it was introduced in Clarke's 1973 thesis \cite{FHC73} and first appeared in print in \cite{FHC75}; it is not clear where the Fr\'echet subdifferential first appeared, probably in \cite{BGN74}; the Dini-Hadamard subdifferential was introduced by Penot in \cite{JPP74}; the sequential limiting Fr\'echet subdifferential for functions on Fr\'echet smooth spaces was introduced by Kruger in the mimeographed paper \cite{AK81} in 1981 (not in \cite{KM80} as stated in e.g.
\cite{MS95,BM} and many other publications -- the definition given in \cite{KM80} is purely topological and does not involve sequential weak$^*$-limits) and in printed form appeared in \cite{AK85} (see \cite{AI12a} for details). The $G$-subdifferential was first defined in \cite{AI81c} but its definition was later modified in \cite{AI89a}. \subsection{Separable reduction.} In this subsection $X$ and $Y$ are general Banach spaces, not necessarily separable. Recall that by ${\mathcal S}(X)$ we denote the collection of separable subspaces of $X$. \begin{proposition}\label{inher} Assume that ${\rm sur} F(\overline x|\overline y)> r$. Then for any $L_0\in {\mathcal S}(X)$ and $M\in {\mathcal S}(Y)$ there is an $L\in{\mathcal S}(X)$ containing $L_0$ such that for sufficiently small $t\ge 0$ $$ y+rt(B_Y\cap M)\subset {\rm cl} \big(F(x+t(1+\delta)(B_X\cap L))\big), $$ if $\delta >0$ and the pair $(x,y)\in({\rm Graph}~ F)\cap(L\times M)$ is sufficiently close to $(\bar x,\bar y)$. \end{proposition} \proof Take an $\varepsilon>0$ to guarantee that the inclusion below holds for $x\in B(\overline x,\varepsilon)$ and $t\in(0,\varepsilon)$. \begin{equation}\label{6.2} F(x)\cap B(\overline y,\varepsilon)+ trB_Y\subset F(B(x,t)). \end{equation} We shall prove that there is a nondecreasing sequence $(L_n)$ of separable subspaces of $X$ such that: \begin{equation}\label{6.1} y+rt(B_Y\cap M)\subset {\rm cl}\big( F(x+t(1+\delta)(B_X\cap L_{n+1}))\big), \end{equation} for all $\delta >0$ and all $(x,y)\in({\rm Graph}~ F)\cap(L_n\times M)$ sufficiently close to $(\bar x,\bar y)$. Then to complete the proof, it is sufficient to set $L={\rm cl}(\cup L_n)$. Assume that we have already constructed $L_n$ for some $n$. Let $(x_i,y_i)$ be a dense countable subset of the intersection of $( {\rm Graph}~ F)\cap(L_n\times M)$ with the neighborhood of $(\bar x,\bar y)$ in which (\ref{6.2}) is guaranteed, let $(v_j)$ be a dense countable subset of $B_Y\cap M$, and let $(t_k)$ be a dense countable subset of $(0,\varepsilon)$. For any $i,j,k=1,2,\ldots$ we find from (\ref{6.2}) an $h_{ijk}\in B_X$ such that $y_i+rt_kv_j\in F(x_i+t_kh_{ijk})$, and let $\hat L_n$ be the subspace of $X$ spanned by the union of $L_n$ and the collection of all $h_{ijk}$. If now $(x,y)\in({\rm Graph}~ F)\cap (L_n\times M)$, $t\in (0,\varepsilon)$, $v\in B_Y$ and $(x_{i_m},y_{i_m})$, $t_{k_m}$, $v_{j_m}$ converge respectively to $(x,y)$, $t$ and $v$, then as $x_{i_m}+t_{k_m}(B_X\cap \hat L_n)\subset x+ t(1+\delta)(B_X\cap \hat L_n)$ for sufficiently large $m$, we conclude that (\ref{6.1}) holds with $\hat L_n$ instead of $L_{n+1}$. \endproof \begin{theorem}[separable reduction of regularity \cite{AI13b}]\label{sepredreg} Let $X$ and $Y$ be Banach spaces. A set-valued mapping $F: X\rightrightarrows Y$ with closed graph is regular at $(\bar x,\bar y)\in{\rm Graph}~ F$ if and only if for any separable subspace $M\subset Y$ and any separable subspace $L_0\subset X$ with $(\bar x,\bar y)\in L_0\times M$ there exists a bigger separable subspace $ L\in {\mathcal S}(X)$ such that the mapping $F_{L\times M}: L\rightrightarrows M$ whose graph is the intersection of ${\rm Graph}~ F$ with $L\times M$ is regular at $(\bar x,\bar y)$. Moreover, if ${\rm sur} F(\overline x|\overline y)> r$, we can choose $L\in{\mathcal S}(X)$ and $M\in{\mathcal S}(Y)$ containing respectively $\overline x$ and $\overline y$ to make sure that also ${\rm sur} F_{L\times M}(\overline x|\overline y)\ge r$.
Conversely, if there is an $r>0$ such that for any separable $M_0\subset Y$ and $L_0\subset X$ there are bigger separable subspaces $M\supset M_0$ and $L\supset L_0$ such that ${\rm sur} F_{L\times M}(\overline x|\overline y)\ge r$, then $F$ is regular at $(\bar x,\bar y)$ with ${\rm sur} F(\overline x|\overline y)\ge r$. \end{theorem} \proof So assume that $F$ is regular at $(\bar x,\bar y)$ with ${\rm sur} F(\overline x|\overline y)> r$. Then, given $L_0$ and $M$, we can find a closed separable subspace $L\subset X$ containing $L_0$ such that (\ref{6.1}) holds for any $\delta>0$, any $(x,y)\in({\rm Graph}~ F)\cap (L\times M)$ sufficiently close to $(\bar x,\bar y)$ and any sufficiently small $t>0$. By the Density theorem we can drop the closure operation, so that $F_{L\times M}$ is indeed regular near $(\bar x,\bar y)$ with ${\rm sur} F_{L\times M}(\overline x|\overline y)\ge (1+\delta)^{-1}r$. As $\delta$ can be arbitrarily small we get the desired estimate for the modulus of surjection of $F_{L\times M}$. On the other hand, if $F$ were not regular at $(\bar x,\bar y)$, then we could find a sequence $(x_n,y_n)\in {\rm Graph}~ F$ converging to $(\bar x,\bar y)$ such that $y_n+ (t_n/n)v_n\not\in F(B(x_n,t_n))$ for some $t_n<1/n$ and $v_n\in B_Y$ (respectively $y_n+ t_n(r-\delta)v_n\not\in F(B(x_n,t_n))$ for some $\delta>0$). Clearly this carries over to any closed separable subspaces $L\subset X$ and $M\subset Y$ containing respectively all $x_n$, and all $y_n$ and $v_n$, so that no such $F_{L\times M}$ can be regular at $(\bar x,\bar y)$ (with the modulus of surjection $\ge r$), contrary to the assumption.\endproof \subsection{Contingent derivatives and primal regularity estimates} The following simple proposition establishes a connection between the slope of $f$ and its lower directional derivative. \begin{proposition} \label{sldinad} For any function $f$ and any $x$ at which $f$ is finite $$ |\nabla f|(x)\ge-\inf_{\| h\|=1}f^-(x;h). $$ \end{proposition} \proof Take an $h$ with $\| h\|=1$. We have $$ |\nabla f|(x)=\limsup_{u\to x}\frac{(f(x)-f(u))^+}{\| u-x\|}\ge \limsup_{(t,u)\to (0^+,h)}\frac{f(x)-f(x+tu)}{t}=-f^-(x;h) $$ as claimed.\endproof The following result is now immediate from the proposition and Theorem \ref{secmil}. \begin{theorem}[tangential regularity estimate 1]\label{tancrit1} Let $(\bar x,\bar y)\in{\rm Graph}~ F$. Assume that there are neighborhoods $U$ of $\overline x$ and $V$ of $\overline y$ such that for any $y\in V$ the function $\psi_y$ is lower semicontinuous on $U$ and $\inf_{\| h\|=1}\psi_y^-(x;h)\le -r$ for all $x\in U$ with $y\not\in F(x)$. Then \begin{equation}\label{tan} {\rm sur} F(\overline x|\overline y)\ge r. \end{equation} \end{theorem} \noindent (Of course a similar estimate can be obtained from Theorem \ref{critgen}.) \begin{theorem}[tangential regularity estimate 2]\label{tancrit3} Suppose there are a neighborhood $U$ of $(\bar x,\bar y)$ and two numbers $c>0$ and $\lambda\in[0,1)$ such that for any $(x,y)\in U\cap{\rm Graph}~ F$ \begin{equation}\label{5.2} {\rm ex}(S_Y,DF(x,y)(cB_X))\le \lambda, \end{equation} then \begin{equation}\label{5.3} {\rm sur} F(\overline x|\overline y)\ge\frac{1-\lambda}{c}. \end{equation} \end{theorem} \proof Take an $(x,v)\in U\cap{\rm Graph}~ F$ with $v\neq y$ and set $z=\| y-v\|^{-1}(y-v)$. By the assumption for any $\lambda'>\lambda$ there is a pair $(\tilde h,\tilde w)$ with $\tilde w\in DF(x,v)(\tilde h)$ such that $\|\tilde h\|= c$ and $\| z-\tilde w\|\le \lambda'$.
As $(\tilde h,\tilde w)$ belongs to the contingent cone to the ${\rm Graph}~ F$ at $(x,v)$, we can find (for sufficiently small $t>0$) vectors $h(t)$ and $w(t)$ norm converging to $\tilde h$ and $\tilde w$ respectively and such that $v+tw(t)\in F(x+th(t))$. We have \begin{equation}\label{5.6} \begin{array}{lcl} \|y-(v+tw(t))\|&=& \| y-v-t\tilde w\| +o(t)\\ &\le&\| y-v-tz\| +t\| z-\tilde w\|+o(t)\\ &\le& \|y-v\|\big(1- \displaystyle\frac{t}{\| y-v\|}\big) +t\lambda' +o(t), \end{array} \end{equation} so that $$ \varphi_y^-((x,v);(\tilde h,\tilde w)) \le\liminf_{t\to+0}\frac{\|y-(v+tw(t))\|-\|y-v\|}{t}\le -(1-\lambda'). $$ Take a $\xi >0$ such that $\xi(1+\lambda)<c$ and consider the $\xi$-norm in $X\times Y$. Then $\|(\tilde h,\tilde w)\|_{\xi}\le\max\{c,\xi(1+\lambda')\}=c$ (if $\lambda'$ is sufficiently close to $\lambda$) and, by positive homogeneity of $\varphi_y^-((x,v);\cdot)$, we get $$ \inf\{\varphi_y^-((x,v);( h,w)): \; \|(h,w)\|_{\xi}\le 1\} \le\frac{1}{c}\varphi_y^-((x,v);(\tilde h,\tilde w)) \le -\frac{1-\lambda'}{c}. $$ It remains to refer to Proposition \ref{sldinad} and Theorem \ref{critgen}. \endproof \begin{theorem}[tangential regularity estimate 3]\label{tancrit2} Let $X$ and $Y$ be Banach spaces, and let $F: X\rightrightarrows Y$ be a set-valued mapping with locally closed graph. Let finally $\overline y\in F(\overline x)$. Then \begin{equation}\label{5.4} {\rm sur} F(\overline x|\overline y)\ge\lim_{\varepsilon\to 0}\inf \{ C(DF(x,y)):\; (x,y)\in({\rm Graph}~ F)\bigcap B((\bar x,\bar y),\varepsilon)\}, \end{equation} or equivalently, $$ \begin{array}{lcl} {\rm reg} F(\overline x|\overline y)&\le&\displaystyle\lim_{\varepsilon\to 0}\sup\{ \|(DF(x,y))^{-1}\|_-: y\in F(x), \ \| x-\overline x\|<\varepsilon,\ \| y-\overline y\|<\varepsilon\}\\ &=&\displaystyle\lim_{\varepsilon\to 0}\sup \big\{\displaystyle\sup_{\| v\|=1}\inf\{\| h\|:\; v\in DF(x,y)(h)\}:\\ & &\qquad\qquad\qquad \qquad\qquad\qquad(x,y)\in({\rm Graph}~ F)\bigcap B((\bar x,\bar y),\varepsilon)\big\}. \end{array} $$ \end{theorem} \proof We first note that $DF(x,v)(B_X)$ is a star-shaped set as it contains zero and $z\in DF(x,v)(h)$ implies that $\lambda z\in DF(x,v)(\lambda h)$ for $\lambda >0$. On the other hand, by Proposition \ref{dualb} $C(DF(x,v))> r>0$ means that $r B_Y\subset DF(x,v)(B_X)$. It follows that $B_Y\subset DF(x,v)(r^{-1}B_X)$. If this is true for all $(x,v)\in{\rm Graph}~ F$ close to $(\bar x,\bar y)$, this in turn means that the condition of Theorem \ref{tancrit3} is satisfied with $c= r^{-1}$ and $\lambda =0$, whence the theorem. \endproof \begin{remark}{\rm In fact the last two theorems are equivalent. Indeed, let the conditions of Theorem \ref{tancrit3} be satisfied. Then $(1-\lambda)B_Y\subset DF(x,v)(cB_X)$ for all $(x,v)\in{\rm Graph}~ F$ close to $(\bar x,\bar y)$ and setting $r=c^{-1}(1-\lambda)$ we get $rB_Y\subset DF(x,v)(B_X)$ for the same $(x,v)$.} \end{remark} It follows from the proofs that the estimate provided by Theorem \ref{tancrit1} is never worse than the estimates given by the other two theorems. But it can actually be strictly better (unless both spaces are finite dimensional). Informally, this is easy to understand: the quality of approximation provided by the contingent derivative for a map into an infinite dimensional space may be much lower than for a real-valued function. The following example illustrates the phenomenon. \begin{example}\label{ex1}{\rm Let $X=Y$ be a separable Hilbert space, and let $(e_1,e_2,\ldots)$ be an orthonormal basis in $X$.
Consider the following mapping from $[0,1]$ into $X$: $$ \eta(t)=\left\{ \begin{array}{cl} 0,&{\rm if}\; t\in\{0,1\}\\ 2^{-(n+2)}e_n,&{\rm if}\; t= 2^{-n},\ n=1,2,\ldots, \end{array}\right. $$ and $\eta (\cdot)$ is linear on every segment $[2^{-(n+1)},2^{-n}],\; n=0,1,\ldots$. Define a mapping from the unit ball of $\ell_2$ into $\ell_2$ by $$ F(x) = x-\eta(\| x\|). $$ It is an easy matter to see that $x\mapsto \eta(\| x\|)$ is $(\sqrt 5/4)$-Lipschitz, hence by Milyutin's perturbation theorem $F$ is open near the origin with the rate of surjection at least $1-(\sqrt 5/4)$. Let us look at what we get applying the estimates of the preceding theorems to this mapping. If $\| h\|= 1$ and $t\in (2^{-(n+1)},2^{-n}]$, then $F(th)= th - (t/2)(e_n-e_{n+1}) - 2^{-(n+2)}(2e_{n+1}-e_n)$, and it is easy to see that for no sequence $(t_k)$ converging to zero does $t_k^{-1}F(t_kh)$ converge. Hence the tangent cone to the graph of $F$ at zero consists of a single point $(0,0)$ and the contingent derivative estimates give ${\rm sur} F(0)\ge 0$ - a trivial conclusion. Now take an $x$ with $\| x\|<1$ and a $y\neq F(x)$. We have $$ \begin{array}{lcl} \|F(x+th)-y\|&=&\|x+th-\eta(\|x+th\|)-y\|\\ &\le& \|x+th-\eta(\| x\|)-y\|+\|\eta(\|x+th\|)-\eta(\| x\|)\|\\ &\le & \|F(x)+th-y\| +(\sqrt 5/4)t\| h\|. \end{array} $$ Taking $h= (y-F(x))/\| y-F(x)\|$, we get $$ \varphi_y^-(x;h)\le \lim_{t\searrow 0}t^{-1}\Big(\Big(1-\frac{t}{\|F(x)-y\|}\Big)\|F(x)-y\|-\|F(x)-y\|\Big) +\frac{\sqrt 5}{4} =-\frac{4-\sqrt 5}{4} $$ which gives ${\rm sur} F(x)\ge 1- (\sqrt 5/4)$ for all $x$ with $\| x\|<1$. } \end{example} A tangential regularity estimate, similar to but somewhat weaker than that in Theorem \ref{tancrit3}, was first obtained by Aubin in \cite{JPA84} (see also \cite{AF}) under the same assumptions. The very estimate (\ref{5.3}) was obtained in \cite{AI87}. Theorem \ref{tancrit2} was proved by Dontchev-Quincampoix-Zlateva in \cite{DQZ06}. Theorem \ref{tancrit1} seems to have been stated for the first time in \cite{CFI}. Example \ref{ex1} has also been borrowed from that paper. \subsection{Dual regularity estimates.} This is the part of the local regularity theory that attracted the main attention in the 80s and 90s. The role of coderivatives was at the center of these studies. Further developments, however, that followed the discovery of the role of the slope, opened the way to potentially stronger (and often easier to apply) results involving subdifferentials of the functions $\varphi_y$ and $\psi_y$. \subsubsection{Neighborhood estimates} There is a simple connection between slopes and norms of elements of subdifferentials. \begin{proposition}[slopes and subdifferentials]\label{slsub} Let $f$ be lsc, and let an open set $U$ have nonempty intersection with ${\rm dom}~ f$. Then for any subdifferential $\partial$ $$ \inf_{x\in U}d(0,\partial f(x))\le \inf_{x\in U}|\nabla f|(x). $$ On the other hand, $\| x^*\|\ge |\nabla f|(x)$ if $x^*\in\partial_Ff(x)$. \end{proposition} Combining this with Theorems \ref{critgen} and \ref{slsc}, we get \begin{theorem}[subdifferential regularity estimate 1]\label{subest} Let $X$ and $Y$ be Banach spaces, let $F: X\rightrightarrows Y$ have a locally closed graph, and let $\partial$ be a subdifferential trusted on a class of Banach spaces containing both $X$ and $Y$.
Then for any $(\bar x,\bar y)\in{\rm Graph}~ F$ and any $\xi>0$ \begin{equation}\label{est1} {\rm sur} F(\overline x|\overline y)\ge\liminf_{{(x,v)\ \underset{{\rm Graph}F}\to\ (\overline x,\overline y)}\atop{y\to\overline y,\ y\neq v}}\inf\{ \| x^*\|+\xi^{-1}\| v^*\|:\; (x^*,v^*)\in \partial\varphi_y(x,v)\} \end{equation} and \begin{equation}\label{est2} {\rm sur} F(\overline x|\overline y)\ge\liminf_{{(x,y)\to (\bar x,\bar y)}\atop{y\not\in F(x)}} d(0,\partial \overline{\psi}_y(x)). \end{equation} \end{theorem} \begin{theorem}[subdifferential regularity estimate 2]\label{subcrit2} Let $(\bar x,\bar y)\in{\rm Graph}~ F$. Assume that there are neighborhoods $U$ of $\overline x$ and $V$ of $\overline y$ such that for any $y\in V$ the function $\psi_y$ is lower semicontinuous and $\| x^*\|\ge r$ if $x^*\in\partial_H\psi_y(x)$ for all $x\in U$ with $y\not\in F(x)$. Then \begin{equation}\label{tan} {\rm sur} F(\overline x|\overline y)\ge r. \end{equation} \end{theorem} The obvious inequality $\| x^*\|\ge -f^-(x;h)$ if $x^*\in\partial_Hf(x)$ and $\| h\|=1$ shows that {\it the estimate provided by the last theorem cannot be worse than the estimate of Theorem \ref{tancrit1}.} Our next purpose is to derive coderivative estimates for regularity rates. \begin{theorem}[coderivative regularity estimate 1]\label{subcrit} Let $F: X\rightrightarrows Y$ be a set-valued mapping with locally closed graph containing $(\bar x,\bar y)$. Then $$ \begin{array}{lcl} {\rm sur} F(\overline x|\overline y)&\ge&\displaystyle\displaystyle\lim_{\varepsilon\to 0} \inf\{ C^*(D_H^*F(x,y)): y\in F(x), \ \| x-\overline x\|<\varepsilon,\ \| y-\overline y\|<\varepsilon\}\\ &=&\displaystyle\displaystyle\lim_{\varepsilon\to 0} \inf\{\| x^*\|:\; x^*\in D_H^*F(x,y)(y^*),\; \| y^*\|=1,\\ & &\qquad\qquad\qquad \qquad\qquad\qquad(x,y)\in({\rm Graph}~ F)\bigcap B((\bar x,\bar y),\varepsilon)\}, \end{array} $$ or equivalently, $$ \begin{array}{lcl} {\rm reg} F(\overline x|\overline y)={\rm lip} F^{-1}(\overline y|\overline x)&\le&\displaystyle\lim_{\varepsilon\to 0}\sup\{\|D_H^*F^{-1}(x,y)\|_+: \\ & & \qquad\qquad\qquad \quad(x,y)\in({\rm Graph}~ F)\bigcap B((\bar x,\bar y),\varepsilon)\}\\ &=&\displaystyle\lim_{\varepsilon\to 0}\sup \{\| y^*\|:\; x^*\in D_H^*F(x,y)(y^*),\;\| x^*\|=1,\\ & &\qquad\qquad\qquad \quad(x,y)\in({\rm Graph}~ F)\bigcap B((\bar x,\bar y),\varepsilon)\}. \end{array} $$ \end{theorem} To furnish the proof we can either use any of the estimates of the preceding theorem or apply directly the slope-based results of Theorems \ref{critgen} and \ref{slsc} via Proposition \ref{slsub}. We choose the second option as it actually leads to a shorter proof. The first approach requires working with weak$^*$ neighborhoods to estimate the subdifferential of a sum of functions (that inevitably appears in the course of calculation) which makes estimating norms of subgradients difficult (if possible at all). \proof We only need to show that, given $(x,w)\in{\rm Graph}~ F$, for any neighborhoods $U\subset X$ and $V\subset Y$ of $x$ and $w$ $$ \inf\{\| x^*\|:\; x^*\in D^*F(u,v)(y^*),\;(u,v)\in{\rm Graph}~ F\cap(U\times V),\; \| y^*\|=1\}\le m $$ provided $|\nabla_{\xi}\varphi_y|(x,w)<m$ for all sufficiently small $\xi$. Then the theorem is immediate from Theorem \ref{critgen} in view of Proposition \ref{slsub}. So let $|\nabla_{\xi}\varphi_y|(x,w)<m$. Take an $m'<m$ but still greater than $|\nabla_{\xi}\varphi_y|(x,w)$ and set $$ \begin{array}{lcl} f(u,v)&=&\varphi_y(u,v) + m'\max\{\|u-x\|,\xi\|v-w\| \}\\ &=& \|v-y\|+ i_{{\rm Graph}~ F}(u,v)+ m'\max\{\|u-x\|,\xi\|v-w\| \}.
\end{array} $$ Then $f$ attains a local minimum at $(x,w)$. We thus can apply Proposition \ref{basrul}: given a $\delta>0$, there are $v_i,\ i=0,1,2$, $u_i,\ i=1,2$ with $(u_1,v_1)\in{\rm Graph}~ F$ and $v_0^*\in \partial \|\cdot\|(y-v_0)$, $(u_1^*,v_1^*)\in N({\rm Graph}~ F,(u_1,v_1))$ and $(u_2^*,v_2^*)$ with $\| u_2^*\|+\xi^{-1}\| v_2^*\|\le m'$ such that $$ \| v_i-w\|<\delta,\quad \| u_i-x\|<\delta,\quad \| u_1^*+ u_2^*\|<\delta, \quad \| v_0^*+v_1^*+v_2^*\|<\delta. $$ Take $\delta<\| y- w\|$, $(1+2\delta)m'<m$ and $\xi$ so small that $\xi m'<\delta$. Then $y\neq v_0$, so that $\| v_0^*\|=1$, $\| u_2^*\|\le m'$ and $\| v_2^*\|<\delta$. We thus have $\| u_1^*\|\le m'+\delta< m$ and $|\| v_1^*\|- 1|<2\delta$. It remains to set $y^*= v_1^*/\| v_1^*\|$, $x^*= u_1^*/\| v_1^*\|$ to complete the proof.\endproof \begin{theorem}[coderivative regularity estimate 2]\label{subcrit1} If in addition to the assumptions of Theorem \ref{subcrit} both $X$ and $Y$ are Asplund spaces, then $$ \begin{array}{lcl} {\rm sur} F(\overline x|\overline y)&=&\displaystyle\displaystyle\lim_{\varepsilon\to 0} \inf\{ C^*(D_F^*F(x,y)): y\in F(x), \ \| x-\overline x\|<\varepsilon,\ \| y-\overline y\|<\varepsilon\}\\ &=&\displaystyle\displaystyle\lim_{\varepsilon\to 0} \inf\{\| x^*\|:\; x^*\in D_F^*F(x,y)(y^*),\; \| y^*\|=1,\\ & &\qquad\qquad\qquad \qquad\qquad\qquad(x,y)\in({\rm Graph}~ F)\bigcap B((\bar x,\bar y),\varepsilon)\}, \end{array} $$ or equivalently, $$ \begin{array}{lcl} {\rm reg} F(\overline x|\overline y)={\rm lip} F^{-1}(\overline y|\overline x)&=&\displaystyle\lim_{\varepsilon\to 0}\sup\{\|D_F^*F^{-1}(x,y)\|_+: \\ & & \qquad\qquad\qquad \quad(x,y)\in({\rm Graph}~ F)\bigcap B((\bar x,\bar y),\varepsilon)\}\\ &=&\displaystyle\lim_{\varepsilon\to 0}\sup \{\| y^*\|:\; x^*\in D_F^*F(x,y)(y^*),\;\| x^*\|=1,\\ & &\qquad\qquad\qquad \quad(x,y)\in({\rm Graph}~ F)\bigcap B((\bar x,\bar y),\varepsilon)\}. \end{array} $$ \end{theorem} \proof If the spaces are Asplund, then the same arguments as in the proof of the preceding theorem lead to the same conclusion with $D_H^*$ replaced by $D_F^*$. So we have to show that the opposite inequality holds. This however is an elementary consequence of the definition. Indeed, fix a certain $(x,y)\in {\rm Graph}~ F$ close to $(\bar x,\bar y)$ and let $$ m= \inf\{\| x^*\|:\; x^*\in D_F^*F(x,y)(y^*),\; \| y^*\|=1\}. $$ If ${\rm sur} F(\overline x|\overline y)= 0$ or $D_F^*F(x,y)(y^*)=\emptyset$ whenever $\| y^*\|=1$ (in which case $m=\infty$ by the general convention), the inequality is trivial. So we can take a positive $r<{\rm sur} F(\overline x|\overline y)$ in which case we may assume that $B(y,rt)\subset F(B(x,t))$ for small $t$ and $y$ close to $\overline y$, and suppose that $m<\infty$. Take an $x^*\in D_F^*F(x,y)(y^*)$ with $\| y^*\|=1$ and $\| x^*\|<m+\delta$ for some $\delta>0$. Then $\langle x^*,h\rangle-\langle y^*,v\rangle\le o(\|h\|+\| v\|)$ whenever $(x+h,y+v)\in{\rm Graph}~ F$. Now take $v(t)$ with $\| v(t)\|= rt$ such that $\langle y^*,v(t)\rangle\le - (1-t^2)\| v(t)\|$ and then, as $y+v(t)\in B(y,rt)\subset F(B(x,t))$, an $h(t)$ with $\| h(t)\|\le t$ such that $(x+h(t),y+v(t))\in {\rm Graph}~ F$. Then $$ -t\| x^*\| + (1-t^2)rt\le \langle x^*,h(t)\rangle-\langle y^*,v(t)\rangle \le o(\| h(t)\|+\| v(t)\|)= o(t) $$ which implies that $r\le m$ and the result follows. \endproof \begin{remark} {\rm Note that the just given proof (that the inequality $\le$ holds) works in any space, not necessarily Asplund.
In other words, the part of the theorem that incorporates essential properties of the space (that is that the Fr\'echet subdifferential is trusted) is contained in Theorem \ref{subcrit}. } \end{remark} Comparing the last theorem with Example \ref{ex1}, we conclude that in Asplund spaces the coderivative estimate using Fr\'echet coderivative can be strictly better than the tangential estimate provided by Theorem \ref{tancrit2}. What about connection of the estimates from Theorems \ref{tancrit2} and \ref{subcrit}? \begin{proposition}[DH-coderivative vs. tangential criterion]\label{dhvstan} The regularity estimate involving Dini-Hadamard coderivative (Theorem \ref{subcrit}) is never worse than tangential estimate provided by Theorem \ref{tancrit2}. \end{proposition} \proof Indeed, by definition $D_H^*F(x,y)=(DF(x,y))^*$ and we only need to recall that $C^*(D_H^*F(x,y))\ge C(DF(x,y))$ for any $(x,y)\in{\rm Graph}~ F$ by Theorem \ref{bancon}. \endproof Theorem \ref{subcrit} was proved in \cite{AI87} for subdifferentials satisfying a bit stronger requirements than the subdifferential of Dini-Hadamard. However a minor change in the proof allows to extend it to all subdifferentials trusted on the given Banach space (see e.g. \cite{AI00,AI12a} also for a proof) , in particular to the DH-subdifferential on any G\^ateaux smooth space. Likewise, Theorem \ref{subcrit1} was proved in \cite{AK88}, in a somewhat different form and in terms of $\varepsilon$-Fr\'echet subdifferential on Fr\'echet smooth spaces. And again, a minor change is needed to extend the proof to standard Fr\'echet subdifferentials. Theorem \ref{subcrit1} as stated was proved in \cite{MS96a} (see also \cite{BM} for a proof, for all Asplund spaces, not necessarily separable). This extension can be viewed as a consequence of the Fr\'echet smooth spaces version of the theorem and the separable reduction theorem of Fabian-Zhivkov \cite{FZ85} (and actually was proved that way). Proposition \ref{dhvstan} seems to have never been mentioned earlier. It sounds rather surprising with all its simplicity. It would be interesting to find an example with a Dini-Hadamard coderivative estimate strictly better, than the tangential estimate (or to prove that the estimates are equal). It is still unclear whether strict inequality is possible. The general consideration (the dual object cannot contain more information that its original predecessor) suggests that this is rather unlikely. But no proof is available for the moment. It should be mentioned however that the tangential estimate is valid in all Banach spaces while the Dini-Hadamard coderivative makes sense basically in G\^ateaux smooth spaces. \subsubsection{Perfect regularity and linear perturbations} The main inconvenience of the regularity criteria that have been just established, no matter primal or dual, comes from the necessity to scan an entire neighborhood of the point of interest. Below we define what can be viewed as an ideal situation. \begin{definition}{\rm We shall say that $F$ is {\it perfectly regular} at $(\bar x,\bar y)\in{\rm Graph}~ F$ if } \begin{equation}\label{5.16} {\rm sur} F(\overline x|\overline y)= C^*(D_G^*F(\bar x,\bar y))=\min\{\| x^*\|:\; x^*\in D_G^*F(\bar x,\bar y)(y^*),\ \| y^*\|=1\}. \end{equation} \end{definition} Later we shall come across some classes of perfectly regular mappings and meanwhile consider an important class of additive linear perturbations of maps. 
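A simple illustration: if $F=f$ is single-valued and strictly differentiable at $\overline x$, then $D_G^*f(\overline x,f(\overline x))(y^*)=\{(f'(\overline x))^*y^*\}$, so that the right-hand side of (\ref{5.16}) equals $\inf\{\|(f'(\overline x))^*y^*\|:\ \| y^*\|=1\}=C(f'(\overline x))$, which by the Lyusternik-Graves theorem is precisely ${\rm sur} f(\overline x)$. Thus strictly differentiable mappings are perfectly regular.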
\begin{definition}{\rm Given a set-valued mapping $F: X\rightrightarrows Y$ and an $(\bar x,\bar y)\in{\rm Graph}~ F$. The {\it radius of regularity} of $F$ at $(\bar x,\bar y)$ is the lower bound of norms of linear continuous operators $A: X\to Y$ such that ${\rm sur} (F+A)(\overline x|\overline y+A\overline x)=0$. We shall denote it ${\rm rad} F(\overline x|\overline y)$.} \end{definition} By Milyutin's theorem ${\rm sur} F(\overline x|\overline y)\le {\rm rad} F(\overline x|\overline y)$. It turns out that for perfectly regular mappings the equality holds. To show this we need the following proposition, not very difficult to prove. \begin{proposition}\label{sumlin} Let $X$ and $Y$ be normed spaces, let $F: X\rightrightarrows Y$ be a set-valued mapping with closed graph, and let $A\in{\mathcal L} (X,Y)$. Assume that $F$ is regular at $(\bar x,\bar y)\in{\rm Graph}~ F$ and set $G=F+A$ (that is $G(x)=F(x)+Ax$). Then $$ D_G^*G(\overline x,\overline y+A\overline x)= D_G^*F(\bar x,\bar y) + A^* $$ \end{proposition} \noindent Note that the equality is elementary in case of Dini-Hadamard or Fr\'echet subdifferentials. \begin{theorem}[perfect regularity and radius formula]\label{radform} Assume that $X$ and $Y$ are Banach spaces, $F: X\rightrightarrows Y$, $(\bar x,\bar y)\in{\rm Graph}~ F$ and $F+A$ is perfectly regular at $(\overline x,\overline y+A\overline x)$ for any $A\in{\mathcal L}(X,Y)$ of rank 1. Then \begin{equation}\label{5.7a} {\rm sur} F(\overline x|\overline y) = {\rm rad} F(\overline x|\overline y). \end{equation} Moreover, for any $\varepsilon >0$ there is a linear operator $A_{\varepsilon}$ of rank one such that $\| A_{\varepsilon}\|\le {\rm sur} F(\overline x|\overline y)+\varepsilon$ and ${\rm sur} (F+A_{\varepsilon})(\overline x|\overline y+A_{\varepsilon}\overline x)=0$. \end{theorem} In the sequel we call (\ref{5.7a}) the {\it radius formula}. \proof Set $r={\rm sur} F(\overline x|\overline y)$. The theorem is obviously valid if $r=0$. So we assume that $r>0$. Take an $\varepsilon >0$ and find a $y_{\varepsilon}^*$ and an $x_{\varepsilon}^*\in D_G^*F(\bar x,\bar y)(y_{\varepsilon}^*)$ such that $\| y_{\varepsilon}^*\|=1,\; \|x_{\varepsilon}^*\|\le (1+\varepsilon)r$. Let further $x_{\varepsilon}\in X$ and $y_{\varepsilon}\in Y$ satisfy \begin{equation}\label{5.8a} \|x_{\varepsilon}\|= \|y_{\varepsilon}\|=1,\quad \langle x_{\varepsilon}^*,x_{\varepsilon}\rangle\ge (1-\varepsilon)\| x_{\varepsilon}^*\|, \quad \langle y_{\varepsilon}^*,y_{\varepsilon}\rangle\ge (1-\varepsilon). \end{equation} We use these four vectors to define an operator $A_{\varepsilon}: X\to Y$ as follows: $$ A_{\varepsilon}x=-\frac{\langle x_{\varepsilon}^*,x\rangle}{\langle y_{\varepsilon}^*,y_{\varepsilon}\rangle}y_{\varepsilon}. $$ Then $\| A_{\varepsilon}\|\le\dfrac{1+\varepsilon}{1-\varepsilon}r$ and $$ A_{\varepsilon}^*y^*=-\frac{\langle y^*,y_{\varepsilon}\rangle}{\langle y_{\varepsilon}^*,y_{\varepsilon}\rangle}x_{\varepsilon}^*. $$ In particular we see that $-x_{\varepsilon}^*= A_{\varepsilon}^*y_{\varepsilon}^*$. Combining this with Proposition \ref{sumlin} we get $0= x_{\varepsilon}^*+A_{\varepsilon}^*y_{\varepsilon}^*\in D_G^*(F+A_{\varepsilon})(\overline x,\overline y+A_{\varepsilon}\overline x)(y_{\varepsilon}^*)$ and therefore, by the perfect regularity assumption, ${\rm sur} (F+A_{\varepsilon})(\overline x|\overline y+A_{\varepsilon}\overline x)=0$, that is ${\rm rad} F(\overline x|\overline y)\le \|A_{\varepsilon}\|\to r$ as $\varepsilon\to 0$.
\endproof Let $S(y,A)$ be the set of solutions of the inclusion \begin{equation}\label{5.10a} y\in F(x)+Ax, \end{equation} where $A\in{\mathcal L}(X,Y)$. Let $\overline x$ be a nominal solution of (\ref{5.10a}) with $y=\overline y,\ A=\overline A$. The question we are going to consider concerns Lipschitz stability of $S$ with respect to small variations of both $y$ and $A$ around the nominal value $(\overline y,\overline A)$ and their effect on regularity rates. In other words, we are interested in finding ${\rm lip} S((\overline y,\overline A)|\overline x)$. By the equivalence theorem, this is the same as finding the modulus of surjection of the mapping $\Phi=S^{-1}$ at $(\overline x,(\overline y,\overline A))$. Clearly $$ \Phi(x)=\{(y,A)\in Y\times{\mathcal L}(X,Y):\; y\in F(x)+A(x)\}. $$ We shall consider $Y\times {\mathcal L}(X,Y)$ with the norm $\|(y,A)\|=\nu(\|y\|,\|A\|)$, where $\nu$ is a norm in $I\!\!R^2$. The dual norm is $\nu^*(\| y^*\|,\|\ell\|)$, where $\ell\in({\mathcal L}(X\times Y))^*$ and $\nu^*$ is the norm in $I\!\!R^2$ dual to $\nu$: $\nu^*(u)=\sup\{\alpha\xi+\beta\eta:\; \nu (\alpha,\beta)\le 1\}$. As to the space dual to ${\mathcal L}(X,Y)$, we only need the simplest elements of the space, rank one tensors $y^*\otimes x$ whose action on $A\in{\mathcal L}(X,Y)$ is defined by $\langle y^*\otimes x,A\rangle=\langle A^*y^*,x\rangle$ and whose norm is $\| y^*\otimes x\|=\|y^*\|\|x\|$. The following theorem gives an answer to the question. \begin{theorem}[\cite{AI13}]\label{stablin} Let $X$ and $Y$ be Banach spaces, and let $F: X\rightrightarrows Y$ be a set-valued mapping with closed graph. Let $(\bar x,\bar y)\in{\rm Graph}~ F$ and let $\overline A\in{\mathcal L}(X,Y)$ be given. Then $$ {\rm lip} S((\overline y,\overline A)|\overline x)\le \nu^*(1,\|\overline x\|){\rm reg} (F+\overline A)(\overline x|\overline y). $$ \end{theorem} To prove the theorem we only need to show that \begin{equation}\label{5.11ab} {\rm sur}\Phi(\overline x|(\overline y, \overline A))\ge \frac{1}{\nu^*(1,\|\overline x\|)}{\rm sur} (F+\overline A)(\overline x|\overline y). \end{equation} So the proof (involving some calculation) can be obtained either from Theorem \ref{milt} or directly from the general regularity criterion of Theorem \ref{gencrit}, The concepts of perfect regularity and radius of regularity were introduced respectively in \cite{IS08} and \cite{DLR}. Theorem \ref{radform} is a new result. A finite dimensional version of Theorem \ref{stablin} for a class of $F$ with convex graph was proved in \cite{CGP10a}. We shall discuss the problems considered in this subsection in more details for finite dimensional mappings later in Section 8. \section{Finite dimensional theory.} In this section we concentrate on characterizations of regularity, subregularity and transversality for set-valued mappings between finite dimensional spaces. There are several basic differences that make the finite dimensional case especially rich. The first is that the subdifferential calculus is much more efficient. In addition certain properties different in the general case appear to be identical in $I\!\!R^n$. In particular, for a lower semicontinuous function the Dini-Hadamard subdifferential and the Fr\'echet subdifferential are identical. Therefore the usual notation used in the literature for this common subdifferential is $\hat{\partial}$ rather than $\partial_H$ or $\partial_F$. 
Likewise, as the limiting Fr\'echet and the $G$-subdifferentials are also equal, it is convenient to speak simply of the {\it limiting subdifferential} and denote it by $\partial$. The second circumstance to be mentioned is the abundance of some special classes of objects of practical importance and definite theoretical interest. It is enough to mention polyhedral and semi-algebraic sets and mappings (to be considered in the second part of the paper), semi-smooth functions, prox-regular functions and sets etc. We do not discuss some interesting and important subjects, e.g. Kummer's inverse function theorem and its applications (well presented in the literature: much on these subjects can be found in \cite{DR,KK02}) or semismooth mappings (see e.g. \cite{PF13}). \subsection{Regularity.} \begin{theorem}\label{F1} A set-valued mapping $F: I\!\!R^n\rightrightarrows I\!\!R^m$ with locally closed graph is perfectly regular at any point of its graph. \end{theorem} \proof This is immediate from Theorem \ref{subcrit1}.\endproof \begin{theorem}\label{F2} The radius formula holds at any point of the graph of a set-valued mapping $F: I\!\!R^n\rightrightarrows I\!\!R^m$ with locally closed graph. Moreover, the lower bound in the definition of the radius of regularity is attained at a linear operator $A: I\!\!R^n\to I\!\!R^m$ of rank one. \end{theorem} \proof This is immediate from Theorem \ref{radform}.\endproof \begin{theorem}\label{F3} Let $F: I\!\!R^n\rightrightarrows I\!\!R^m$ be a set-valued mapping with locally closed graph, and let $(\bar x,\bar y)\in{\rm Graph}~ F$. Then \begin{equation}\label{eqf1} {\rm sur} F(\overline x|\overline y)=\lim_{\varepsilon\to 0}\inf \{ C(DF(x,y)):\; (x,y)\in({\rm Graph}~ F)\bigcap B((\bar x,\bar y),\varepsilon)\}. \end{equation} \end{theorem} \proof In view of Theorem \ref{tancrit2}, it is enough to verify that $C(DF(x,y))\ge r$ if $B(y,tr)\subset F(B(x,t))$ for all sufficiently small $t$ (of course for $(x,y)\in{\rm Graph}~ F$). So take a $v\in I\!\!R^m$ with $\| v\|\le r$ and let $h(t)$ be such that $\| h(t)\|\le 1$ and $y+tv\in F(x+th(t))$. If now $h$ is any limiting point of $h(t)$ as $t\to 0$, then $v\in DF(x,y)(h)$. This shows that $rB_{I\!\!R^m}\subset DF(x,y)(B_{I\!\!R^n})$. \endproof Similarly, inequality can be replaced by equality in the estimate of Lipschitz stability of solutions of the inclusion \begin{equation}\label{7.1} y\in F(x)+Ax \end{equation} with both $y$ and $A$ viewed as perturbations (cf. Theorem \ref{stablin}). But first we have to do some preliminary work. As in 5.4.2 we denote by $S(y,A)$ the set of solutions of (\ref{7.1}) and by $\Phi$ the inverse mapping $$ \Phi(x)=\{(y,A):\; y\in F(x)+Ax\}. $$ \begin{lemma} For any $x\in X$, let $E(x): Y\times {\mathcal L}(X,Y)\to Y$ be the linear operator defined by $E(x)(y,\Lambda)= y-\Lambda x$. Then, under the assumptions of Theorem \ref{stablin} $$ \nu^*(1,\|x\|)\, C(D\Phi(x,(y,A)))\le C(D(F+A)(x,y)), $$ whenever $y\in F(x) + Ax$. \end{lemma} \proof By definition $(h,v,\Lambda)\in X\times Y\times {\mathcal L}(X,Y)$ belongs to $T({\rm Graph}~ \Phi,(x,y,A))$ if there are sequences $(h_n)\to h,\; (v_n)\to v,\ (\Lambda_n)\to \Lambda$ and $(t_n)\to +0$ such that $$ y+t_nv_n-(A+t_n\Lambda_n)(x+t_nh_n)\in F(x+t_nh_n) $$ or $$ y+t_n(v_n-\Lambda_nx - t_n\Lambda_nh_n)\in (F+A)(x+t_nh_n).
$$ As $t_n\|\Lambda_nh_n\|\to 0$, it follows that $$ T({\rm Graph}~ \Phi,(x,y,A))=\{(h,v,\Lambda):\; (h,v-\Lambda x)\in T({\rm Graph}~(F+A),(x,y))\} $$ which amounts to \begin{equation}\label{5.17a} E(x)\circ D\Phi(x,(y,A))= D(F+A)(x,y). \end{equation} We have (Corollary \ref{constin}) $C(E(x))\cdot C(D\Phi(x,(y,A)))\le C(D(F+A)(x,y))$. On the other hand $E(x)^*(y^*)= (y^*, -y^*\otimes x)$ and therefore (Proposition \ref{calca}) $$ C(E(x))=\inf_{\|y^*\|=1}\| E(x)^* y^*\| = \nu^* (1,\| x\|). $$ This completes the proof of the lemma.\endproof \begin{theorem}[linear perturbations - finite dimensional case]\label{F4} Let $F: I\!\!R^n\rightrightarrows I\!\!R^m$ be a set-valued mapping with locally closed graph, and let $\overline y\in F(\overline x)$. We consider $I\!\!R^m\times {\mathcal L}(I\!\!R^n,I\!\!R^m)$ with the norm $\nu(\| y\|,\| A\|)$, where $\nu$ is a certain norm in $I\!\!R^2$. Then, given an $\overline A\in{\mathcal L}(I\!\!R^n,I\!\!R^m)$, we have $$ {\rm lip} S((\overline y, \overline A)|\overline x) = \nu^*(1,\|\overline x\|){\rm reg} (F+\overline A)(\overline x|\overline y). $$ \end{theorem} \proof Immediate from the lemma and Theorem \ref{stablin}.\endproof Finally, we have to mention that {\it a continuous single-valued mapping $f:I\!\!R^n\to I\!\!R^m$ can be strongly regular only if $m=n$.} This is a simple consequence of Brouwer's invariance of domain theorem (see e.g. \cite{KK02}). Theorem \ref{F1} was announced by Mordukhovich in a somewhat different form \cite{BM88} (see also \cite{BM93}). But the lower estimate for the modulus of surjection (which is actually the major step in the proof) is immediate from Ioffe \cite{AI84}. Theorem \ref{F2} was proved by Dontchev-Lewis-Rockafellar in \cite{DLR} and Theorem \ref{F3} by Dontchev-Quincampoix-Zlateva \cite{DQZ06}. Theorem \ref{F4} is a slightly generalized version of the already mentioned result of C\'anovas, G\'omez and Senent-Parra \cite{CGP10a}. \subsection{Subregularity and error bounds.} Let $f$ be an extended-real-valued lsc function on $I\!\!R^n$. We can associate with this function the epigraphic map $$ {\rm Epi}~ f(x) = \{\alpha\in I\!\!R:\; \alpha\ge f(x) \} $$ Subregularity of such a mapping at a point $(\overline x,\alpha)$ (if $\alpha=f(\overline x)$ is finite) means that there is a $K>0$ such that $$ d(x,[f\le\alpha])\le K(f(x)-\alpha)^+ $$ for all $x$ close to $\overline x$. The constant $K$ in this case is usually called a {\it local error bound} for $f$ at $\overline x$. We shall say more about error bounds in the second part of the paper. To characterize the subregularity property of epigraphic maps we define the {\it outer limiting subdifferential} of $f$ at $x$ as follows: $$ \partial^{>}f(x)=\{\displaystyle\lim_{k\to\infty} x_k^*: \; \exists \; x_k\underset{f}\to x,\; f(x_k)> f(x),\; x_k^*\in \hat{\partial} f(x_k)\}. $$ \begin{theorem}[error bounds in $I\!\!R^n$]\label{erbr} Let $f$ be a lower semicontinuous function on $I\!\!R^n$ that is finite at $\overline x$.
Then $K>0$ is a local error bound of $f$ at $\overline x$ if either of the following two equivalent conditions is satisfied: \vskip 1mm (a) \ $K\cdot \displaystyle\lim_{\varepsilon\to 0} \inf \{|\nabla f|(x):\; \| x-\overline x\|<\varepsilon, \; f(\overline x)<f(x)< f(\overline x) +K\varepsilon \} \ge 1$; \vskip 1mm (b) \ $K\cdot d(0,\partial^{>} f(\overline x)) \ge 1.$ \vskip 1mm Thus, if $F:I\!\!R^n\rightrightarrows I\!\!R^m$ has locally closed graph and $(\bar x,\bar y)\in{\rm Graph}~ F$, then $$ {\rm subreg} F(\overline x|\overline y)\le [\inf\{ \| x^*\|:\; x^*\in\partial^{>}d(\overline y,F(\cdot))(\overline x) \}]^{-1}. $$ \end{theorem} \proof If (a) holds, then $K$ is a local error bound by Lemma \ref{baslem} to be proved in the next section. To prove that (a)$\Rightarrow$(b), let $x^*\in\partial^{>}f(\overline x)$. This means that there are sequences $(x_k)$ and $(x_k^*)$ such that $x_k\to_f \overline x$, $f(x_k)>f(\overline x)$, $x_k^*\to x^*$ and $x_k^*\in\hat{\partial} f(x_k)$. Choose $\varepsilon_k\downarrow 0$ such that $\| x_k-\overline x\|<\varepsilon_k$ and $f(x_k)- f(\overline x)<K\varepsilon_k$. If (a) holds, then $K\cdot \liminf |\nabla f|(x_k)\ge 1$. But $\| x_k^*\|\ge |\nabla f|(x_k)$ (Proposition \ref{slsub}) and (b) follows. The opposite implication (b)$\Rightarrow$(a) also follows from Proposition \ref{slsub}. Indeed, denote by $r$ the value of the limit in the left side of (a), take an $\varepsilon>0$ and let $x$ satisfy the bracketed inequalities in (a) along with $|\nabla f|(x)<r+\varepsilon$. This means that $f+(r+\varepsilon)\|\cdot-x\|$ attains a local minimum at $x$. Applying the fuzzy variational principle, we shall find $u$ and $u^*\in\partial_Ff(u)$ such that $\|u-x\|<\varepsilon$, $f(u)<f(\overline x)+\varepsilon/K$ and $\| u^*\|<r+2\varepsilon$. This means that there is a sequence of pairs $(x_k,x_k^*)$ such that $x_k\to_f\overline x$, $x_k^*\in\partial_Ff(x_k)$ and $\limsup \| x_k^*\|\le r$. As (b) holds, it follows that $Kr\ge 1$. \endproof Conditions (a) and (b) are not necessary for $K$ to be an error bound of $f$ at $\overline x$. \begin{example}\label{conter}{\rm Consider $$ f(x) = \left\{\begin{array}{cl} 0,&{\rm if}\;x \le 0;\\ x+x^2\sin x^{-1},&{\rm if}\; x>0.\end{array}\right. $$ It is an easy matter to see that any $K>1$ is an error bound for $f$ at zero but at the same time $0\in\partial^{>} f(0)$.} \end{example} Such a pathological situation, however, does not occur if the function is ``not too nonconvex'' near $\overline x$. \begin{proposition}\label{erbr1} Let $f$ be a lower semicontinuous function on $I\!\!R^n$ finite at $\overline x$. Suppose there are a $\theta>0$ and a function $r(t)=o(t)$ such that $$ f(u)-f(x)\ge \langle x^*,u-x\rangle- r(\| u-x\|) $$ for all $x,\ u$ of a neighborhood of $\overline x$, provided $f(\overline x)< f(x)<f(\overline x)+\theta$ and $x^*\in\hat{\partial}f(x)$. If under these conditions, $K>0$ is an error bound of $f$ at $\overline x$, then the conditions (a) and (b) of Theorem \ref{erbr} hold. \end{proposition} \proof Assume the contrary. Then there are $\varepsilon>0$ and a sequence of pairs $(x_k,\ x_k^*)$ with $x_k^*\in\hat{\partial}f(x_k)$ such that $x_k\to_f\overline x$, $f(x_k)> f(\overline x)$ and $\| x_k^*\|\le K^{-1}-\varepsilon$. For any $k$ take an $\overline x_k\in [f\le f(\overline x)]$ closest to $x_k$. Then $\overline x_k\to \overline x$ and by the assumption $$ f(\overline x_k)-f(x_k)\ge \langle x_k^*,\overline x_k-x_k\rangle- r(\|\overline x_k-x_k\|).
$$ As $\| \overline x_k-x_k\|\to 0$, for large $k$ we have $r(\| \overline x_k-x_k\|)\le(\varepsilon/2)\| \overline x_k-x_k\|$. For such $k$ $$ f(x_k)\le f(\overline x_k)+ (\| x_k^*\|+(\varepsilon/2))\| \overline x_k-x_k\|. $$ It follows that $$ d(x_k,[f\le f(\overline x)])=\| \overline x_k-x_k\|\ge\frac{1}{\| x_k^*\|+(\varepsilon/2)}f(x_k), $$ that is $(K^{-1}-(\varepsilon/2))d(x_k,[f\le f(\overline x)])\ge f(x_k)$ contrary to the assumption.\endproof The last result of this subsection contains infinitesimal characterization of strong subregularity. \begin{theorem}[characterization of subregularity and strong subregularity]\label{charssub} Let again $F:I\!\!R^n\rightrightarrows I\!\!R^m$ have locally closed graph and $(\bar x,\bar y)\in{\rm Graph}~ F$. Then $\bullet$ \ $F$ is subregular at $(\bar x,\bar y)\in{\rm Graph}~ F$ if $d(0,\partial^{>}\psi_{\overline y}(\overline x))>0$; $\bullet$ \ a necessary and sufficient condition for $F$ to be strongly subregular at $(\bar x,\bar y)$ is that $DF(\bar x,\bar y)$ is nonsingular, that is \ $C^*(DF(\bar x,\bar y))>0$. \end{theorem} \proof The first statement is a consequence of Theorem \ref{erbr}. To prove the second, assume first that $F$ is strongly subregular at $(\bar x,\bar y)$, that is there is a $K>0$ such that $\| x-\overline x\|\le Kd(\overline y,F(x))$ for $x$ sufficiently close to $\overline x$. If $DF(\overline x,\overline y)$ were singular, Proposition \ref{dualb} would guarantee the existence of sequences $(h_k)\subset I\!\!R^n$ and $(v_k)\subset I\!\!R^m$ such that $\| h_k\|=1$, $\| v_k\|\to 0$ and $\overline y+t_kv_k\in F(\overline x+t_kh_k)$, so that for large $k$ $$ \|\overline x+t_kh_k - \overline x\|= t_k> Kt_k\| v_k\|=K\|\overline y+t_kv_k-\overline y\|\ge Kd(\overline y,F(\overline x+t_kh_k)), $$ contrary to our assumption. Let now $DF(\bar x,\bar y)$ be nonsingular. This means that $\| v\|\ge \kappa>0$ whenever $v\in DF(\bar x,\bar y)(h)$ with $\| h\|=1$. It immediately follows that, say, $\|y-\overline y\|\ge (\kappa/2)\| x-\overline x\|$ whenever $y\in F(x)$ and $x$ is sufficiently close to $\overline x$ which is strong subregularity of $F$ at $(\bar x,\bar y)$.\endproof Literature on local error bounds in $I\!\!R^n$ is very rich - see e.g. the monograph by Facchinei and Pang \cite{FP} that summarizes developments prior to 2003. Theorem \ref{erbr} and Proposition \ref{erbr1} seem to be new as stated but they are closely connected with the results of Ioffe-Outrata \cite{IO08} and Meng and Yang \cite{MY12} among others. The second part of Theorem \ref{charssub} as well as other results relating to strong subregularity and applications can be found in \cite{DR} and \cite{KK02}. (In \cite{KK02} the authors use the term "locally upper Lipschitz" property. The term "strong subregularity" seem to have appeared later.) Another sufficient condition for subregularity was suggested by Gfrerer \cite{HG11}. It would be interesting to understand how the two are connected. It should also be noted that no characterization for strong subregularity in terms of coderivatives is so far known. \subsection{Transversality.} We have mentioned already that the classical concepts of transversality and regularity are closely connected. To see how the concept of transversality can be interpreted in the context of variational analysis, we first consider the case of two intersecting manifolds in a Banach space. Let $X$ be a Banach space and $M_1$ and $M_2$ smooth manifolds in $X$, both containing some $\overline x$. 
As was mentioned in Subsection 1.4, the manifolds are transversal at $\overline x$ if either $\overline x\not\in M_1\cap M_2$ or the sum of the tangent subspaces to the manifolds at $\overline x$ is the whole of $X$: $T_{\overline x}M_1+ T_{\overline x}M_2 = X.$ The following simple lemma is the key to interpret this in regularity terms in a way suitable for extensions to the settings of variational analysis. \begin{lemma}\label{translem} Let $L_1$ and $L_2$ be closed subspaces of a Banach space $X$ such that $L_1+L_2=X$. Then for any $u, v\in X$ there is $h\in X$ such that $u+h\in L_1$ and $v+h\in L_2$. \end{lemma} \proof If $u+h\in L_1$, then $h\in -u+L_1$, so if the statement were wrong, we would have $(v-u+L_1)\cap L_2=\emptyset$. In this case there is a nonzero $x^*$ separating $v-u+L_1$ and $L_2$, that is such that $\langle x^*,x\rangle=0$ for all $x\in L_2$ and $\langle x^*,v-u+x\rangle\ge 0$ for all $x\in L_1$. But this means that $x^*$ vanishes on $L_1$ as well. In other words, both $L_1$ and $L_2$ belong to the annihilator of $x^*$ and so their sum cannot be the whole of $X$.\endproof The lemma effectively says that the linear mapping $(u,v,h)\mapsto (u+h,v+h)$ maps $L_1\times L_2\times X$ {\it onto} $X\times X$, that is this mapping is regular. If $\overline x\in M_1\cap M_2$, then applying the density theorem (Theorem \ref{dens}), we get as an immediate corollary that the set-valued mapping $\Phi(x)= (M_1-x)\times (M_2-x)$ from $X$ into $X\times X$ is regular at zero. This justifies the following definition \begin{definition}\label{vartrans}{\rm Let $S_i\subset X,\ i=1,\ldots,k$ be closed subsets of $X$. We say that $S_i$ are {\it transversal} at $\overline x\in X$ if either $\overline x\not\in \cap S_i$ or $\overline x\in\cap S_i$ and the set-valued mapping $$ x\mapsto F(x)= (S_1-x)\times\cdots\times (S_k-x) $$ from $X$ into $X^k$ is regular near $(\overline x,0,\ldots,0)$. In the latter case, we also say that {\it $S_i$ have transversal intersection} at $\overline x$. } \end{definition} This definition may look strange at the first glance but the following characterization theorem shows that it is fairly natural. \begin{theorem}\label{F5} Let $S_i\subset I\!\!R^n, \ i=1,\ldots,k$ and $\overline x\in\cap S_i$. Then the following statements are equivalent \vskip 1mm (a) \ $S_i$ are transversal at $\overline x$; \vskip 1mm (b) \ $x_i^*\in N(S_i,\overline x),\; x_1^*+\cdots + x_k^*= 0\ \Rightarrow \ x_1^*=\ldots=x_k^*=0$; \vskip 1mm (c) \ $d(x,\displaystyle\bigcap_{i=1}^k(S_i-x_i)\le K\max_id(x,S_i-x_i)$ if $x_i$ are close to zero and $x$ is close to $\overline x$. \end{theorem} \proof It is not a difficult matter to compute the limiting coderivative of $F$: if $(x_1,\ldots,x_k)\in F(x)$, then $$ D^*F(x|(x_1,\ldots,x_k))=\left\{\begin{array}{ll}\displaystyle\sum_{i=1}^k x_i^*,& {\rm if}\; x_i^*\in N(S_i,x_i+x);\\ \emptyset,&{\rm otherwise}.\end{array}\right. $$ Combining this with Theorem \ref{F1}, we prove equivalence (a) and (b). Furthermore, $F^{-1}(x_1,\ldots,x_k)= (S_1-x_1)\cap\cdots\cap(S_k-x_k)$, whence equivalence of (a) and (c).\endproof Note that implicit in (c) is the statement that the intersection of $S_i-x_i$ is nonempty if $x_i$ are sufficiently small. In case of two sets one more convenient characterization of transversality is available. 
\begin{corollary}\label{F6} Two sets $S_1$ and $S_2$ both containing $\overline x$ are transversal at $\overline x$ if and only if the set-valued mapping $\Phi: I\!\!R^n\timesI\!\!R^n\rightrightarrows I\!\!R^n$: $$ \Phi(x_1,x_2)=\left\{ \begin{array}{cl} x_1-x_2,&{\rm if}\; x_i\in S_i;\\ \emptyset,&{\rm otherwise}\end{array}\right. $$ is regular near $(\overline x,\overline x,0)$. \end{corollary} \proof We have $T({\rm Graph}~\Phi,((x_1,x_2),x_1-x_2)=\{(h_1,h_2,v):\; h_i\in T(S_i,x_i), \; v= h_1-h_2\}$, so that $$ D^*\Phi((\overline x,\overline x),0)(x^*)=\{ (x_1^*,x_2^*):\; x_i^*\in N(S_i,\overline x)+ x^* \}. $$ If we consider the max-norm $\| (x_1,x_2)\|=\max\{\| x_1\|,\| x_2\|\|\}$ in $I\!\!R^n\timesI\!\!R^n$, then it follows from Theorem \ref{F1} that $\Phi$ is regular near $(\overline x,\overline x,0)$ if and only if $$ \inf \{\| x_1^*-x^*\|+\| x_2^*+ x^*\|:\; x_i^*\in N(S_i,x_n),\; \| x^*\|=1\}>0. $$ This amounts to $N(S_1,\overline x)\cap(-N(S_2,\overline x))=\{ 0\}$, which is exactly the property in the part (b) of the theorem.\endproof In view of the equivalence between (a) and (c) in Theorem \ref{F5}, the following definition looks now very natural. \begin{definition}[subtransversality]\label{subtrand}{\rm We shall say that closed sets $S_1,\ldots,S_k$ are {\it subtransversal at} $\overline x\in\cap S_i$ if there is a $K>0$ such that for any $x$ close to $\overline x$ $$ d(x,\bigcap_{i=1}^k S_i)\le K\sum_{i=1}^kd(x,S_i). $$ } \end{definition} In a similar way, it is easy to see that subtrasversality is equivalent to subregularity of the same mapping $F$ and to get a sufficient subtransversalty condition from Theorem \ref{erbr}. In the next section we shall be able to see the key role subtransversality plays in some problems of optimization and subdifferential calculus. We conclude with a brief discussion of transversality of a mapping and a set. \begin{theorem}\label{tran1} Let $F: I\!\!R^n\rightrightarrows I\!\!R^m$ have locally closed graph, and let $S\subset I\!\!R^m$ be closed. Assume that $\overline y\in F(\overline x)\cap S$. Then the following statements are equivalent: (a) the set-valued mapping $\Phi: (x,y)\mapsto (F(x)-y)\times (S-y)$ is regular near $((\bar x,\bar y),(0,0))$; (b) the sets ${\rm Graph}~ F$ and $I\!\!R^n \times S$ have transversal intersection near $(\bar x,\bar y)$; (c) $0\in D^*F(\bar x,\bar y)(y^*)\; \&\; y^*\in N(S,\overline y)\; \Rightarrow\; y^*= 0$. \end{theorem} \proof Equivalence of (b) and (c) follows from Theorem \ref{F5}. To prove that (a) and (b) are equivalent, set $\Psi (x,y) = ({\rm Graph}~ F-(x,y))\times (I\!\!R^n\times S-(x,y))$. If $((\xi,\mu),(\eta,\nu))\in \Psi(x,y)$, then $(\mu,\nu)\in \Phi(u,y)$ with $u= \xi+x$. Conversely, if $(\mu,\nu)\in\Phi(u,y)$, then $(u,\mu+y)\in {\rm Graph}~ F$ and $(w,\nu+y)\in I\!\!R^n\times S$ for any $w\inI\!\!R^n$. Then for any $x$, we have, setting $\xi =u-x$, $\eta= w-x$ , that $((\xi,\mu),(\eta,\nu))\in \Psi(x,y)$. (b) $\Rightarrow$ (a). If (b) holds, then $\Psi$ is regular near $((\bar x,\bar y),((0,0),(0,0)))$. So let $((\xi,\mu),(\eta,\nu))\in \Psi(x,y)$ with $(x,y)$ sufficiently close to $(\bar x,\bar y)$ and $\xi,\mu,\eta,\nu$ sufficiently close to zeros of the corresponding spaces. Take a small $t>0$ and let $\| \xi'-\xi\|<t$ etc. Then by (b) there is a $K>0$ and $(x',y')$ such that $\| x'-x\|\le Kt$, $\| y'-y\|\le Kt$ and $((\xi',\mu'),(\eta',\nu'))\in \Psi(x',y')$. 
We have $$ \xi'=u'-x',\quad \mu'\in F(u')-y',\quad \eta'= w'-x',\quad,\nu'\in S-y' $$ for some $(u',v')\in{\rm Graph}~ F$ and $w'\in I\!\!R^n$. We have therefore $\| u'-u\|\le \| x'-x\|+\| \xi'-\xi\|\le (K+1)t$. Thus, whenever $(\mu,\nu)\in\Phi(u,y)$ with $(u,y)$ close to $(\bar x,\bar y)$ and $(\mu,\nu)$ close to $(0,0)$ and $t>0$ is sufficiently small, for any $\mu',\nu'\in I\!\!R^m$ that differ from $\mu,\nu$ at most by $t$, there is a pair $(u',y')$ within $(K+1)t$ of $(u,y)$ such that $\mu'\in F(u')-y'$ and $\nu'\in S-y'$, that is (a). (a) $\Rightarrow$ (b). Here the arguments are similar, actually even a bit shorter. Let $((\xi,\mu),(\eta,\nu))\in \Psi(x,y)$ with $(x,y)$ close to $(\bar x,\bar y)$ and $(\xi,\mu),(\eta,\nu))$ close to $((0,0),(0,0))$. Then as we have seen, $(\mu,\nu)\in \Phi(u,y)$ with $u= \xi+x$, also close to $\overline x$. Let further $\|\mu'-\mu\|<t,\ \| \nu'-\nu\|<t$. If $t$ is sufficiently small, then by (a) we can find $u', y'$ such that $\| u'-u\|\le Kt,\ \| y'-y\|\le Kt$ with some positive $K$ such that $(\mu',\nu')\in \Phi(u,y)$. Take $x'= x$, $\xi'=\overline u'-x$, $\eta'=\eta$. Then as is immediate from what was explained in the first paragraph of the proof $((\xi,,\mu'),(\eta',\nu'))\in \Psi(x',y')$. Thus $\Psi$ is regular near $((\bar x,\bar y),((0,0),(0,0)))$.\endproof The proposition justifies the following definition. \begin{definition}\label{vartrans1}{\rm Let $F: I\!\!R^n\rightrightarrows I\!\!R^m$ have locally closed graph, let $S\subset I\!\!R^m$ be a closed set, and let $(\bar x,\bar y)\in{\rm Graph}~ F$. We say that $F$ is {\it transversal to $S$ at} $(\bar x,\bar y)$ if either $\overline y\not\in S$ or $\overline y\in S$ and ${\rm Graph}~ F$ and $I\!\!R^n\times S$ are transversal at $(\bar x,\bar y)$. We say that $F$ is {\it transversal} to $S$ if it is transversal to $S$ at any point of the graph. Likewise, if $\overline y\in F(\overline x)\cap S$, we shall say that $F$ is {\it subtransversal} to $S$ and $(\bar x,\bar y)$, provided $$ d((x,y),{\rm Graph}~ F\cap(X\times S))\le Kd((x,y),{\rm Graph}~ F) + d(y,S)) $$ for $(x,y)$ of a neighborhood of $(\bar x,\bar y)$. } \end{definition} It is almost obvious from (a) that in case $\overline y\in F*(\overline x)\cap S$, transversality of $F$ to $S$ at $(\bar x,\bar y)$ implies regularity of the mapping $x\mapsto F(x)-S$ near $(\overline x,0)$. Without going into technical details the explanation is as follows. Suppose we wish to find an $x$ such that $z\in F(x)-S$. By (a) there are some $(x,y)$ such that $(0,z)\in {\rm Graph}~ F- (x,y)$ and $(0,0)\in I\!\!R^n\times S - (x,y)$. This means that $z\in F(x)-y$, on the one hand, and $y\in S$, on the other hand, as required. The converse however does not seem to be valid at least for a set-valued $F$. The situation here is similar to that considered in Example \ref{contr1}. However there the converse is also true in one important case. \begin{theorem}\label{F7} Assume that $F: I\!\!R^n \toI\!\!R^m$ is Lipschitz in a neighborhood of $\overline x$ and $C\subset I\!\!R^n$, $Q\subset I\!\!R^m$ are nonempty and closed. Assume further that $\overline y=F(\overline x)\in Q$. Let finally $$ \Phi(x) =\left\{\begin{array}{cl} F(x)-Q,&{\rm if}\; x\in C;\\ \emptyset,&{\rm otherwise.}\end{array} \right. ;\quad F_C(x) =\left\{\begin{array}{cl} F(x),&{\rm if}\; x\in C;\\ \emptyset,&{\rm otherwise.}\end{array} \right. $$ Then $ D^*\Phi(\overline x,0)(y^*)=\partial(y^*\circ F_C)(\overline x)$, if $y^*\in N(Q,0)$ and $D^*\Phi(\overline x,0)(y^*)=\emptyset$ otherwise. 
Thus $$ {\rm sur}\Phi(\overline x|0)=\min\{\| x^*\|:\; x^*\in\partial(y^*\circ F|_C)(\overline x), \; y^*\in N(Q,\overline y),\; \| y^*\|=1\}. $$ \end{theorem} \noindent (Here of course $(y^*\circ F|_C)(x)=\infty$ if $x\not\in C$.) If we compare this with Theorem \ref{tran1}, we see that transversality of $F_C$ to $Q$ at $\overline x$ is equivalent to regularity of $F_C-Q$ near $(\overline x,0)$. We note also the following simple corollary of the theorem \begin{corollary}\label{CF7} Under the assumption of the theorem $$ D^*\Phi(\overline x,0)(y^*)\subset \partial(y^*\circ F)(\overline x) +N(C,F(\overline x)), \quad {\rm if} \; y^*\in N(Q,0). $$ \end{corollary} The set-valued mapping in Definition \ref{vartrans} was introduced in \cite{AI00} where it was shown that subtransversality of a collection of sets is equivalent to subregularity of the mapping. Theorem \ref{F5} was partly proved in \cite{AK06} (equivalence of (a) and (c)) and partly in \cite{LLM09} (equivalence of (a) and (b)). We refer to \cite{AK06} for more equivalent descriptions (some looking very technical) of transversality and related properties. The results relating to transversality of set-valued mappings and sets in the image space seem to be new. The exception is Theorem \ref{F7} that can be extracted from Theorem 5.23 of \cite{BM}. \vskip 1cm \centerline{\bf\Large Part 2. Applications} \section{Special classes of mappings} If additional information on the structure of a mapping is available, it is often possible to get stronger results and/or better estimates for regularity rates and to develop more convenient mechanisms to compute or estimate the latter. In this section we briefly discuss how this can be implemented for three important classes of mappings. \subsection{Error bounds.} By an {\it error bound} for $f$ (at level $\alpha$) on a set $U$ we mean any estimate for the distance to $[f\le\alpha]$ in terms of $(f(x)-\alpha)^+$ for $x\in U$. We shall be mainly interested in estimates of the form \begin{equation}\label{6.1.1} d(x,[f\le\alpha])\le K(f(x)-\alpha)^+ \end{equation} (which sometimes are called {\it linear} or {\it Lipschitz} error bounds). As follows from the definition, error bounds can be viewed as rates of metric subregularity of the set-valued mapping ${\rm Epi}f(x)= [f(x),\infty)=\{\alpha: \; (x,\alpha)\in{\rm epi}~ f\}$ from $X$ into $I\!\!R$. \begin{lemma}[Basic lemma on error bounds]\label{baslem} Let $X$ be a complete metric space, let $U\subset X$ be an open set, and let $f$ be a lower semi-continuous function. Suppose that $|\nabla f|(x)>r>0$ for any $u\in U\backslash [f\le 0]$. Then for any $\overline x\in U$ such that $f (\overline x)<r d(\overline x,X\backslash U)$ there is a $\overline u$ such that $f(\overline u)\le 0$ and $d(\overline u,\overline x)\le r^{-1}(f(\overline x))^+$. \end{lemma} \proof Without loss of generality, we may assume that $f$ is nonnegative: just take $f^+$ instead of $f$. So take an $\overline x$ as in the statement. By Ekeland's principle there is a $\overline u$ such that $d(\overline u,\overline x)\le r^{-1}f(\overline x)$ and $f (x) + rd(x,\overline u)>f (\overline u)$ if $x\neq \overline u$. We claim that $f(\overline u)\le 0$. Indeed, otherwise, by the assumption there would be an $x\neq \overline u$ such that $f (\overline u)-f(x)\ge rd(x,\overline u)$ -- a contradiction.\endproof For simplicity we shall speak here mainly about {\it global} error bounds, corresponding to $U=X$, at the zero level. 
We shall denote by $K_f$ the lower bound of $K$ such that (\ref{6.1.1}) holds for all $x$. We also set for brevity $$ S= [f\le 0],\qquad S_0 = [f=0]. $$ \subsubsection{Error bounds for convex functions.} We shall start with the simplest case of a convex function $f$ (extended-real-valued in general) on a Banach space $X$. \begin{theorem}\label{conver} Let $X$ be a Banach space and $f$ a proper closed convex function on $X$. Assume that $S=[f\le 0]\neq\emptyset$. Then \begin{equation}\label{6.1.2} K_f^{-1}= \inf_{x\not\in S}\ \sup_{ \| h\|\le 1}(-f'(x;h)) = \inf_{x\not\in S}d(0,\partial f(x)) = \inf_{x\not\in S}\displaystyle{\rm sur} ({\rm Epi} f)(x,f(x)). \end{equation} \end{theorem} \noindent Here $\partial f(x)=\{x^*: f(x+h)-f(x)\ge \langle x^*,h\rangle\}$ is the convex subdiffential. \proof Equality of the three quantities on the right is not connected with regularity and we omit the proof. To prove the first equality, we observe that the inequality $K_f^{-1}\le r=\inf_{x\in[f>0]}\sup_{\| h\|\le 1}(-f'(x;h))$ is immediate from Basic Lemma because for a convex function $|\nabla f|(x) = -\inf_{\| h\|\le 1}f'(x;h)$. So it remains to prove the opposite inequality for which we can assume that $r>0$. Take a positive $r'$ and $\delta$ such that $\delta< r'<r$ and let $TU(x)$ be the set of pairs $(u,t)$ satisfying \begin{equation}\label{6.1.7} \|u-x\|\le t, \qquad f(u)\le f(x)-r't \end{equation} By Ekeland's variational principle for any $\delta>0$ there is a $(\overline u,\bar t)\in TU(x)$ such that $f(u)+\delta \| u-\overline u\|$ attains its minimum at $\overline u$. Clearly $\bar t>0$ (as $f(x)>0$). We claim that $f(\overline u)=0$. Indeed, if $f(\overline u)> 0$, then there is an $h$ with $\| h\|=1$ such that $-f'(\overline u;h)>r'$, that is $f(\overline u+th)<f(\overline u)-r't$ for some $t>0$. Set $u=\overline u+th$. Then $f(u)<f(\overline u)-\delta\| u-\overline u\|$ and we get a contradiction with the definition of $\overline u$. Thus $f(\overline u)= 0$ which means that $$ d(x,S_0)\le \| \overline u-x\|\le t\le \frac{1}{r'}f(x) $$ and we are done as $r'$ can be chosen arbitrarily close to $r$ and $x$ is an arbitrary point of $[f>0]$. \endproof There is another way to characterize $K_f$ in terms of normal cones to $[f\le 0]$. \begin{theorem}\label{errng} For any continuous convex function $f$ on a Banach space $X$ \begin{equation}\label{6.1.11} K_f= \inf_{x\in[f=0]}\inf \{\tau> 0:\; N([f=0],x)\bigcap B_{X^*}\subset [0,\tau]\partial f(x)\}. \end{equation} \end{theorem} \subsubsection{Some general results on global error bounds.} Let us turn now to the general case of a lsc function on a complete metric space. Denote now by $K_f(\alpha,\beta)$ (where $\beta >\alpha\ge 0$) the lower bound of $K$ such that \vskip 2mm \centerline{$d(x,[f\le \alpha])\le K f(x)^+$ if $\alpha<f(x)\le \beta$.} \vskip 2mm \noindent Clearly, $K_f=\lim_{\beta\to\infty}K_f(0,\beta)$. \begin{theorem}\label{azcorv} Let $X$ be a complete metric space and $f$ a lower semicontinuous function on $X$. If $[f\le 0]\neq\emptyset$, then $$ \inf_{x\in[0<f\le\beta]} |\nabla f|(x)= \inf_{\alpha\in[0,\beta)}K_f(\alpha,\beta)^{-1} . $$ \end{theorem} \proof Set $r=\inf_{x\in[0<f\le\beta]} |\nabla f|(x)$. The inequality $K_f(\alpha,\beta)^{-1}\ge r$ for $0\le\alpha<\beta$ is immediate from Lemma \ref{baslem}. This proves that the left side of the equality cannot be greater than the quantity on the right. To prove the opposite inequality it is natural to assume that $K_f(\alpha,\beta)^{-1}\ge\xi> 0$ for all $\alpha\in[0,\beta)$. 
For any $x\in [f>\alpha]$ and any $\varepsilon >0$ such that $f(x)-\varepsilon>\alpha$ choose a $u=u(\varepsilon)\in [f\le f(x)-\varepsilon]$ such that $d(x,u)\le (1+\varepsilon)d(x,[f\le f(x)-\varepsilon])\le (1+\varepsilon)\xi^{-1}\varepsilon$ and therefore $u\to x$ as $\varepsilon\to 0$. On the other hand, $\xi d(x,u)\le f(x)-f(u)$ which (as $u\neq x$) implies that $\xi\le |\nabla f|(x)$, whence $\xi\le |\nabla f|(x)$, and the result follows. \endproof As an immediate consequence we get \begin{corollary}\label{genin} Under the assumption of the theorem $$ K_f^{-1}\ge \inf_{x\in[f> 0]} |\nabla f|(x). $$ \end{corollary} A trivial example of a function $f$ having an isolated local minimum at a certain $\overline x$ and such that $\inf f<f(\overline x)$ shows that the inequality can be strict. This may happen of course even if the slope is different from zero everywhere on $[f>0]$. In this case an estimate of another sort can be obtained. Set (for $\beta >0$) $$ d_f(\beta)=\sup_{x\in [ f\le\beta]}d(x,[f\le 0]) $$ and define the functions $$ \kappa_{f,\varepsilon}(t)=\sup\{\frac{1}{|\nabla f|(x)}:\; |f(x)-t|<\varepsilon\};\quad \kappa_f(t)=\lim_{\varepsilon\to 0}\kappa_{f,\varepsilon}(t). $$ \begin{proposition}\label{intin} Let $\beta>0$. Assume that $[f\le 0]\neq\emptyset$ and $|\nabla f|(x)\ge r>0$ if $x\in [0< f\le \beta]$. Then $$ d_f(\beta)\le\int_0^{\beta}\kappa_f(t)dt. $$ \end{proposition} Following the pioneering 1952 work by Hoffmann \cite{AJH52} (to be proved later in this section), error bounds, both for nonconvex and, especially, convex functions have been intensively studied, especially during last 2-3 decades, both theoretically, in connection with metric regularity, and also in view of their role in numerical analysis, see e.g. \cite{CC09, FP,LP98,NY05,WS06,ZN04}. Basic lemma was proved in \cite{AI00}, its earlier version corresponding to $U=X$ was proved by Az\'e-Corvellec-Lucchetti and appeared in \cite{ACL}. A finite dimensional versions of Theorems \ref{conver} and \ref{errng} were proved in Lewis-Pang \cite{LP98}. Klatte and Li \cite{KL99}. The equality $K_f^{-1}=\inf\{d(0,\partial f(x)):\; x\in[f> 0]\}$ in Theorem \ref{conver} was proved by Zalinescu (see \cite{CZ}). The first two equalities in the theorem can be found in \cite{AC02,AC04} and the third equality for polyhedral functions on $I\!\!R^n$ in \cite{MPR10}. Theorem \ref{errng} was proved by Zheng and Ng \cite{ZN04} and Theorem \ref{azcorv} by Az\'e and Corvellec in \cite{AC02}. The papers also contain sufficiently thorough bibliographic comments. Here we follow \cite{AI13} where proofs of all stated and some other results can be found. \subsection{Mappings with convex graphs.} \subsubsection{Convex processes.} We start with the simplest class of convex mappings known as convex processes. By definition a {\it convex process} is a set-valued mapping ${\mathcal A}: X\rightrightarrows Y$ from one Banach space into another whose graph is a convex cone. A convex process is {\it closed} if its graph is a closed convex cone. The closure ${\rm cl}{\mathcal A}$ of a convex process ${\mathcal A}$ is defined by ${\rm Graph}~({\rm cl}{\mathcal A}) = {\rm cl}({\rm Graph}~{\mathcal A})$. We shall usually work with closed convex processes. A convex process is {\it bounded} if there is an $r>0$ such that $\| y\|\le r\| x\|$ whenever $y\in {\mathcal A}(x)$. 
A simplest nontrivial example of an unbounded closed convex process is a densely defined closed unbounded linear operator, as say the mapping $x(\cdot)\mapsto \dot x(\cdot)$ from $C[0,1]$ into itself which associates with every continuously differentiable $x(\cdot)$ its derivative and the empty set with any other element of $C[0,1]$. According to Definition \ref{normsvm}, given a convex process ${\mathcal A}: X\rightrightarrows Y$, the {\it adjoint process} ${\mathcal A}^*: Y^*\rightrightarrows X^*$ (always closed) is defined by $$ {\mathcal A}^*(y^*)=\{ x^*\in X^*: \; \langle x^*,x\rangle\le \langle y^*,y\rangle,\; \forall\; (x,y)\in {\rm Graph}~ {\mathcal A}\}. $$ By ${\mathcal A}^{**}$ we denote a convex process from $X$ into $Y$ whose graph is the intersection of $-{\rm Graph}~({\mathcal A}^*)^*$ with $X\times Y$, that is ${\mathcal A}^{**}(x)=\{y:\; -y\in({\mathcal A}^*)^*(-x)\}$. Simple separation arguments show that ${\mathcal A}^{**}={\rm cl}{\mathcal A}$ for any convex process. \begin{proposition}\label{convproc} Let $A: X\rightrightarrows Y$ be a convex process. Then ${\mathcal A}(Q)$ is a convex set if so is $Q$ and for any $x_1,\ x_2\in X$ $$ {\mathcal A}(x_1)+{\mathcal A}(x_2)\subset {\mathcal A}(x_1+x_2). $$ \end{proposition} \begin{proposition}\label{tanconv} Let $K\subset X$ be a convex closed cone. Then for any $x\in K$ the tangent cone $T(K,x)$ is the closure of the cone generated by $K-x$. In particular $K\subset T(K,x) $. \end{proposition} \noindent The propositions are the key element in the proof of the following fundamental property of convex processes. \begin{theorem}[regularity moduli of a convex process]\label{regproc} For any closed convex process ${\mathcal A}: X\rightrightarrows Y$ from one Banach space into another $$ C({\mathcal A})=C^*({\mathcal A}^*)={\rm sur} {\mathcal A}(0|0)={\rm contr}{\mathcal A}(0|0). $$ \end{theorem} \noindent Note that the left inequality is equivalent to $\| {\mathcal A}^{-1}\|_-=\|({\mathcal A}^{-1})^*\|_+$ (cf \cite{BoL}). \proof We first observe that the right equality is a consequence of the other two in view of Proposition \ref{dualb}. The inequality $C^*({\mathcal A}^*)\ge C({\mathcal A})$ follows from Theorem \ref{bancon}. The same theorem together with the definition of Banach constants implies that $$C^*({\mathcal A}^{**})\ge C^*(({\mathcal A}^*)^*)\ge C({\mathcal A}^*)\ge C^*({\mathcal A}^*).$$ But ${\mathcal A}^{**} = {\mathcal A}$, as ${\mathcal A}$ is closed, so that $C^*({\mathcal A}^{**})=C^*({\mathcal A}))\le C({\mathcal A})$ (see again Theorem \ref{bancon}). This proves the left equality. Passing to the proof of the middle equality, we first observe that by Proposition \ref{dualb} $C({\mathcal A})={\rm contr}{\mathcal A}(0|0)\ge {\rm sur}{\mathcal A}(0|0)$ as the rate of surjection can never exceed the modulus of controllability. On the other hand, by Proposition \ref{tanconv} $D{\mathcal A}(0,0)(h)\subset D{\mathcal A}(x,y)(h)$ for all $(x,y)\in{\rm Graph}~ {\mathcal A}$ and all $h$. Hence by Theorem \ref{tancrit1} ${\rm sur} {\mathcal A}(0|0)\ge C(D{\mathcal A}(0,0))$. But $D{\mathcal A}(0,0)(h)={\mathcal A}(h)$ as the tangent cone to a closed convex cone at zero coincides with the latter. Thus ${\rm sur}{\mathcal A}(0|0)\ge C({\mathcal A})$. \endproof \begin{corollary}[perfect regularity of convex processes]\label{prcp} Any closed convex process is perfectly regular at the origin. \end{corollary} Note that a convex process may be not perfectly regular outside of the origin. 
For instance, consider in the space $C[0,1]$ the mapping into itself defined by $A(x(\cdot))= x(\cdot) + K$ where $K$ is the cone of nonnegative functions. We conclude this subsection by considering the effect of linear perturbations. If ${\mathcal A}$ is a convex process, then so is ${\mathcal A}+A$ where $A$ is a linear bounded operator from $X$ into $Y$. Thus if ${\mathcal A}$ is closed, then ${\mathcal A}+A$ is perfectly regular at the origin and we get as an immediate consequence of Theorem \ref{radform} \begin{theorem}[radius of regularity of a convex process]\label{convrad} If ${\mathcal A}: X\rightrightarrows Y$ is a closed convex process, then $$ {\rm rad}{\mathcal A}(0|0)={\rm sur}{\mathcal A}(0|0). $$ \end{theorem} Convex processes were introduced by Rockafellar \cite{RTR67,RTR} as an extension of linear operators and subsequently thoroughly studied by Robinson \cite{SMR72}, Borwein \cite{JMB83,JMB86a} and Lewis \cite{AL99,AL01}. In particular, \cite{SMR72} contains an extension to convex processes of Banach-Schauder open mapping theorem. Another remarkable result (which is actually a special case of Theorem 5 in the paper) can be reformulated as follows: {\it let $X$ and $Y$ be Banach spaces, and let ${\mathcal A}: X\rightrightarrows Y$ and ${\mathcal T}: X\rightrightarrows Y$ be closed convex processes. Then $C({\mathcal A}-{\mathcal T})\ge C({\mathcal A})- \|{\mathcal T}\|_-$}. The result equivalent to the equality $C({\mathcal A})=C^*({\mathcal A}^*)$ (Theorem \ref{regproc}) was proved and further discussed in \cite{JMB83,JMB86a} and Theorem \ref{convrad} in \cite{AL99} along with the equality of the radius and distance to infeasibility for convex processes.. \subsubsection{ Theorem of Robinson-Ursescu.} \begin{theorem}[surjection modulus of a convex map]\label{qru} Let $X$ and $Y$ be Banach spaces, and let $F: X\rightrightarrows Y$ be a set-valued mapping with convex and locally closed graph. Suppose there are $(\bar x,\bar y)\in{\rm Graph}~ F$, $\alpha>0$ and $\beta>0$ such that $F(B(\overline x,\alpha))$ is dense in $B(\overline y,\beta)$. Then \begin{equation}\label{con1} {\rm sur} F(\overline x|\overline y)\ge \frac{\beta}{\alpha}. \end{equation} \end{theorem} \proof We can set $\overline x=0$, $\overline y=0$. It is clear that $F(t\alpha B_X)$ is dense in $t\beta B_Y$ for any $t\in (0,1)$. Denote $r=\beta/\alpha$. We shall show that, given a $\gamma >0$, there is an $\varepsilon>0$ such that $F(B(x,(1+\gamma)t))$ is dense in $B(v,rt)$ if $\| x\|<\varepsilon$, $\| v\|<\varepsilon$ and $v\in F(x)$. The theorem then will follow from Corollary \ref{lodense} So take a small $\varepsilon>0$, and let $\| x_0\|<\varepsilon$, $\| v_0\|<\varepsilon$ and $v_0\in F(x_0)$. Let further $y\in B(v_0,rt)$ for some $t\in (0,\varepsilon)$. Consider the ray emanating from $v_0$ through $y$ and let $y_1$ be the point of the ray with $\| y_1\|=\beta$, that is there is a $\lambda>0$ such that $$ y= \frac{1}{1+\lambda}y_1 +\frac{\lambda}{1+\lambda}v_0,\quad \lambda\ge\frac{\beta - \varepsilon}{rt}. $$ We have $\| y_1-y\|=\lambda\| v_0-y\|$, that is $$ \lambda=\frac{\|y_1-y}{\| v_0-y\|}\ge \frac{\beta -\varepsilon-rt}{rt};\qquad 1+\lambda\ge\frac{\beta-\varepsilon}{rt} $$ In particular, if $\beta\ge (1+2r)\varepsilon$, which we may assume, then $\lambda\ge 1$. Take a $\delta>0$. By the assumption there is an $x_1\in \alpha B$ such that $\| y_1-v_1\|<\delta$ for some $v_1\in F(x_1)$. 
Set $$ v=\frac{1}{1+\lambda}v_1 +\frac{\lambda}{1+\lambda}v_0,\quad x=\frac{1}{1+\lambda}x_1 +\frac{\lambda}{1+\lambda}x_0 $$ Then $v\in F(x)$ as ${\rm Graph}~ F$ is convex. We have $\| y-v\|\le \delta/(1+\lambda)\le\delta/2$ and $$ \| x-x_0\|\le \frac{1}{1+\lambda}\| x_1-x_0\|\le \frac{\alpha+\varepsilon}{1+\lambda}\le \frac{\alpha+\varepsilon}{\beta-\varepsilon}rt. $$ If $$ 1+\gamma\ge \frac{\alpha+\varepsilon}{\beta-\varepsilon}\cdot\frac{\beta}{\alpha},$$ this completes the proof as $\delta$ can be chosen arbitrary small.\endproof As a corollary we get \begin{theorem}[Robinson-Ursescu \cite{SMR76b,CU75}]\label{roburs} Let $X$ and $Y$ be Banach spaces. If the graph of $F: X\rightrightarrows Y$ is convex and closed and $\overline y\in{\rm int}~ F(X)$, then $F$ is regular at any $(\bar x,\bar y)\in{\rm Graph}~ F$. \end{theorem} \proof Let $\overline y\in F(\overline x)$. We have to show that there are $\alpha>0$ and $\beta>0$ such that $F(B(\overline x,\alpha))$ is dense in $B(\overline y,\beta)$ which is easy to do with the help of the standard argument using Baire category. \endproof \subsubsection{Mappings with convex graphs. Regularity rates.} Here we give two results containing exact formulas for the rate of surjection of set-valued mappings with convex graph. \begin{theorem}\label{modual} Let $F: X\rightrightarrows Y$ be a set-valued mapping with convex and locally closed graph. If $\overline y\in F(\overline x)$, then $$ {\rm sur} F(\overline x|\overline y)=\lim_{\varepsilon\to +0}\ \inf_{\| y^*\|=1}\ \inf_{x^*}\Big(\| x^*\| +\frac{1}{\varepsilon}S_{{\rm Graph}~ (F-(\bar x,\bar y))}(x^*,y^*)\Big). $$ \end{theorem} The theorem was proved in Ioffe-Sekiguchi \cite{IS08}, see also for \cite{AI13} for a short proof. It allows to also get a "primal" representation for the rate of surjection of a convex set-valued mapping. The key to this development is the concept of {\it homogenization} ${\mathcal Q}$ of a convex set $Q\subset X$ which is the closed convex cone in $X\timesI\!\!R$ generated by the set $Q\times \{1\}$. It is an easy matter to verify (if $Q$ is also closed) that $(x,t)\in{\mathcal Q}$ if and only if $x\in tQ$ if $t >0$ and $x\in Q^{\infty}$, the recession cone of $Q$, if $t=0$. (Recall that $Q^{\infty}=\{h\in Q:\; x+h\in Q,\; \forall x\in Q\}$.) Given a set-valued mapping $F: X\rightrightarrows Y$ with convex closed graph, we associate with $F$ and any $(\bar x,\bar y)\in X\times Y$ (not necessarily in the graph of $F$) a convex process ${\mathcal F}_{(\bar x,\bar y)}: X\timesI\!\!R\rightrightarrows Y$ whose graph is the homogenization of ${\rm Graph}~ F-(\bar x,\bar y)$. It is easy to see that $$ {\mathcal F}_{(\bar x,\bar y)}(h,t) =\left\{\begin{array}{cl} t\big( F(\overline x+\dfrac{h}{t})-\overline y\big),& {\rm if}\; t>0,\\ F^{\infty}(h),&{\rm if} \; t=0,\\ \emptyset,&{\rm if} \; t<0,\end{array}\right. $$ where $F^{\infty}$ is the ``horizon'' mapping of $F$ whose graph is the recession cone of ${\rm Graph}~ F$: $$ {\rm Graph}~ F^{\infty}=\{(h,v):\; (x+h,y+v)\in{\rm Graph}~ F,\; \forall (x,y)\in{\rm Graph}~ F\}. $$ If $(\bar x,\bar y)= (0,0)$, we shall simply write ${\mathcal F}$ (without the subscript) and call this convex process the {\it homogenization} of $F$. In the theorem below we use the $\varepsilon$-norms in $X\times I\!\!R$: $\| (h,t)\|_{\varepsilon}=\max\{\| x\|,\varepsilon t\}$ and denote by $C_{\varepsilon}({\mathcal F}_{(\bar x,\bar y)})$ the Banach constant of ${\mathcal F}_{(\bar x,\bar y)}$ corresponding to this norm. 
\begin{theorem}[primal representation of the surjection modulus]\label{moprimal} If $F: X\rightrightarrows Y$ is a set-valued mapping with convex and locally closed graph, then $$ {\rm sur} F(\overline x|\overline y)= \lim_{\varepsilon\to +0}C_{\varepsilon}({\mathcal F}_{(\bar x,\bar y)}). $$ \end{theorem} \proof We have (setting below $h=t(x-\overline x),\; v=t(y-\overline y)$) $$ \begin{array}{lcl} {\rm Graph}~ {\mathcal F}_{(\bar x,\bar y)}^*&=&\{(x^*,y^*,\lambda): \langle x^*,h\rangle -\langle y^*,v\rangle +\lambda t\le 0:\\ &&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\forall\;(h,v,t)\in{\rm Graph}~{\mathcal F}_{(\bar x,\bar y)}\}\\ &= &\{(x^*,y^*,\lambda): t[\langle x^*,x-\overline x\rangle -\langle y^*,y-\overline y\rangle +\lambda]\le 0:\\ &&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \forall \; (x,y)\in{\rm Graph}~ F,\; t>0\}\\ &=& \{(x^*,y^*,\lambda): s_{{\rm Graph}~ F-(\bar x,\bar y)}(x^*,-y^*)+\lambda\le 0\}. \end{array} $$ As the support function of ${\rm Graph}~ F-(\bar x,\bar y)$ is nonnegative, it follows that $\lambda\le 0$ whenever $(x^*,y^*,\lambda)\in{\rm Graph}~{\mathcal F}_{(\bar x,\bar y)}$. The norm in $X^*\times I\!\!R$ dual to $\|\cdot\|_{\varepsilon}$ is $\|(x^*,\lambda)\|_{\varepsilon}=\| x^*\|+\varepsilon^{-1}|\lambda|$. Let $d_{\varepsilon}$ stand for the distance in $X^*\timesI\!\!R$ corresponding to this norm. Then $$ \begin{array}{lcl} d_{\varepsilon}(0,{\mathcal F}_{(\bar x,\bar y)}^*{(\bar x,\bar y)}(y^*))&=& \inf\{\|x^*\|+\varepsilon^{-1}|\lambda|:\; s_{{\rm Graph}~ F-(\bar x,\bar y)}(x^*,-y^*)+\lambda\le 0\}\\ &=&\displaystyle\inf_{x^*}(\| x^*\|+\varepsilon^{-1}s_{{\rm Graph}~ F-(\bar x,\bar y)}(x^*,-y^*)). \end{array} $$ It remains to compare this with Theorem \ref{modual} to see that $$ {\rm sur} F(\overline x|\overline y)=\lim_{\varepsilon\to +0}\inf_{\| y^*\|=1}d_{\varepsilon}(0,{\mathcal F}_{(\bar x,\bar y)}^*(y^*)) $$ and then to refer to Theorem \ref{regproc} to conclude that the quantity on the right is precisely the limit as $\varepsilon\to 0$ of $\inf_{\| y^*\|=1}C_{\varepsilon}({\rm cl}{\mathcal F}_{(\bar x,\bar y)}(y^*))$, where the closure operation can be dropped because as we mentioned the norms (and therefore the Banach constants) of a convex process and its closure coincide.\endproof The concept of homogenization was introduced by H\"ormander \cite{LH55}. The idea to apply homogenization for regularity estimation goes back to Robinson's \cite{SMR76}. His main result actually says that ${\rm sur} F(\overline x|\overline y) \ge C_1({\mathcal F}_{(\bar x,\bar y)})$. In a somewhat different context homogenization techniques was applied by Lewis \cite{AL01} for estimating distance to infeasibility of so called conic systems. Full statement of Theorem \ref{moprimal} was proved also in \cite{IS08}. We have not discussed here some well developed problems relating to regularity of maps with convex graphs, e.g. stability under perturbations of systems of convex inequalities - see e.g. \cite{CLMP09,AI13,SMR75} and references in the first two quoted papers. \subsection{Single-valued Lipschitz maps.} The collection of analytic tools that allow to compute and estimate regularity moduli of Lipschitz single-valued mappings contains at least two devices, not available in the general situation, which are a lot more convenient to work with than coderivatives. 
The first is the {\it scalarized coderivative} (associated with a subdifferential): $$ {\mathcal D}^*F(x)(y^*) = \partial(y^*\circ F)(x) $$ and the other results from suitable local approximations of the mapping either by homogeneous set-valued mappings or by sets of linear operators. The following result is straightforward. \begin{proposition}\label{scalf} If $F: X\to Y$ is Lipschitz continuous near $x\in X$, then for every $y^*\in Y^*$ \begin{equation}\label{6.14} \partial_F(y^*\circ F)(x)= D_F^*F(x)(y^*). \end{equation} \end{proposition} Things are more complicated with the Dini-Hadamard subdifferential. From now on we assume that all spaces are G\^ateaux smooth. \begin{definition}\label{dircom}{\rm A homogeneous set-valued mapping ${\mathcal A}: X\rightrightarrows Y$ is a {\it strict Hadamard prederivative} of $F: X\to Y$ at $\overline x$ if $\|{\mathcal A}\|_+<\infty$, and for any norm compact set $Q\subset X$ \begin{equation}\label{dircomd} F(x+th)-F(x)\subset t{\mathcal A} (h)+r(t,x)t\| h\|B_Y,\quad\forall\; h\in Q, \end{equation} where $r(t,x)=r(t,x,Q)\to 0$ when $x\to \overline x,\ t\to +0$. If moreover the inclusion holds with $Q$ replaced by $B_X$ then ${\mathcal A}$ is called {\it strict Fr\'echet prederivative} of $F$ at $\overline x$. Clearly, for a Fr\'echet prederivative we can write $r(t,x)$ in the form $\rho (t,\| x-\overline x\|)$. } \end{definition} There are some canonical ways for constructing prederivatives. The first to mention is the generalized Jacobian introduced by Clarke \cite{FHC76b} for mappings in the finite dimensional case and then extended to some classes of Banach spaces by P\'ales and Zeidan \cite{PZ07,PZ07a}. Another construction, not associated with linear operators was intruduced in \cite{AI81}. Take an $\varepsilon >0$ and set $$ {\mathcal H}_{\varepsilon}(h):=\{\lambda^{-1}(F(x+\lambda h)- F(x)): \; x, \; x + \lambda h \in {\rm dom}~ F \cap B(\bar{x}, \varepsilon),\; \lambda >0\},\ \ h\in X. $$ Then $0\in {\mathcal H}_{\varepsilon}(0)$ and for $t > 0$ we have $$ {\mathcal H}_{\varepsilon}(th)= t\{(t\lambda)^{-1}(F(x+t\lambda h)- F(x)): \; x, \; x + t\lambda h \in {\rm dom}~ F \cap B(\bar{x}, \varepsilon),\; \lambda >0\}, $$ that is ${\mathcal H}_{\varepsilon}(th)=t{\mathcal H}_{\varepsilon}(h)$. Thus ${\mathcal H}_{\varepsilon}$ is positively homogeneous and it is an easy matter to see that (\ref{dircomd}) holds with $r(t,x)=0$. We say that $F: X\to Y$ is {\it directionally compact} at $\overline x\in{\rm dom}~ F$ if it has a (norm) compact-valued strict Hadamard prederivative with closed graph. It is {\it strongly directionally compact} if there is a compact-valued strict Fr\'echet prederivative with closed graph. The simplest, and probably the most important example of a directionally compact (actually even strong directionally compact) mapping is an integral operator associated with a differential equation, e.g. $$ x(\cdot) \mapsto F(x(\cdot))(t) = x(t) - \int _0^tf(s,x(s))ds $$ with $f(t,\cdot)$ Lipschitz with summable rate. \begin{proposition}[\cite{AI97}]\label{scaldh} If $F: X\to Y$ is Lipschitz continuous near $x$, then $$ \partial_H(y^*\circ F)(x) \subset D_H^*F(x)(y^*),\quad \forall y^*\in Y^*. $$ If furthermore $F:X\to Y$ is directionally compact at $x$, then $$ D_H^*F(x)(y^*)= \partial_H(y^*\circ F)(x)\quad\&\quad D_G^*F(x)(y^*)= \partial_G(y^*\circ F)(x),\quad \forall y^*\in Y^*. 
$$ \end{proposition} Combining this proposition with Theorem \ref{subcrit} we get \begin{theorem}\label{subcrit4} Let $F: X\to Y$ satisfy the Lipschitz condition in a neighborhood of $\overline x$. If $F$ is directionally compact at all $x$ of the neighborhood, then $$ {\rm sur} F(\overline x)\ge\lim_{\varepsilon\to 0} \inf\{\| x^*\|:\; x^*\in \partial_H(y^*\circ F)(x),\; \| y^*\|=1,\; \| x-\overline x\|<\varepsilon\}, $$ \end{theorem} The obvious inequality $$ (y^*\circ F)(x+h)-(y^*\circ F)(x)\ge \inf_{w\in{\mathcal H}(x)(h)}\langle y^*,w\rangle $$ (where ${\mathcal H}(x)$ is a strict prederivative at $x$) leads to the estimate ${\rm sur} F(\overline x)\ge \displaystyle\liminf_{x\to\overline x}C^*({\mathcal H}(x))$ under the assumptions of the theorem. A better result can be proved with the help of the general metric regularity criteria if $F$ has a strict Fr\'echet prederivative at $\overline x$. \begin{theorem}\label{t1} Assume that $Y$ is G\^ateaux smooth and $F:X\to Y$ satisfies the Lipschitz condition in a neighborhood of \ $\overline x$ and, moreover, admits at $\overline x$ a strict Fr\'echet prederivative ${\mathcal H}$ with norm compact values such that for any $y^*$ with $\| y^*\|=1$ \begin{equation} \label{eqBasic1} \sup_{\| h\|=1} \inf_{w \in {\mathcal H}(h)} \langle y^*,w\rangle \ge \rho > 0. \end{equation} \noindent Then ${\rm sur} F(\overline x) \geq \rho$. \end{theorem} \proof With no loss of generality we may assume that the norm in $Y$ is G\^ateaux smooth off the origin. Take an $\varepsilon \in (0, \rho/3) $ and an $r>0$ such that \begin{equation}\label{eqBasic2} F(x')-F(x)\in{\mathcal H}(x)+\varepsilon\| x'-x\|, \end{equation} if $x,x'\in B(\overline x,r)$. Take an $x\in\overset{\circ}B(\overline x,r/2)$ and a $y\in Y$, different from $F(x)$. Let $y^*$ denote the derivative of $\|\cdot\|$ at $y - F(x)$. Then \begin{equation} \label{eq30} \lim_{t \to 0} t^{-1} \big(\|y - F(x) + tw\|-\|y -F(x)\|) = \langle y^*,w\rangle, \quad \mbox{for every} \quad w \in Y. \end{equation} By \eqref{eqBasic1}, there is an $h \in S_X$ such that \begin{eqnarray}\label{1betar} \langle y^*,w\rangle > \rho - \varepsilon, \quad \mbox{for all} \quad w \in {\mathcal H}(h). \end{eqnarray} Since the set $-{\mathcal H}(h)$ is compact and the limit in \eqref{eq30} is uniform with respect to $w$ from any fixed compact set, we conclude that for sufficiently small $t >0$ $$ \|y- F(x)- tw\| - \|y- F(x)\| + \langle y^*,tw\rangle < t \varepsilon \quad \mbox{for all} \quad w \in {\mathcal H}(h). $$ This and \eqref{1betar} imply that \begin{equation}\label{15bbb} \|y- F(x)-tw\| <\|y- F(x)\| - \langle y^*,tw\rangle + \varepsilon t \leq \|y- F(x)\| - t(\rho - 2 \varepsilon) \end{equation} for all \ $w \in {\mathcal H}(h)$. Let $x':=x+th$. Then $\|x' - x \| = \|th\| = t < r/2$, hence $x' \in B(\overline x,r)$. Since ${\mathcal H}$ is positively homogeneous, we have ${\mathcal H}(x' - x) = {\mathcal H}(th) = t {\mathcal H}(h)$. Thus by \eqref{eqBasic2} there is a $w \in {\mathcal H}(h)$ such that \begin{eqnarray}\label{16bb} \|F(x')-F(x)-tw\| \le t\varepsilon. \end{eqnarray} Now, we are ready for the following chain of estimates \begin{eqnarray*} \|y - F(x') \| & \leq & \big\|F(x)-F(x') + tw\big\|+ \big\|y- F(x) - tw\big\|\\ &<& \varepsilon t+ \|y- F(x)\| -(\rho-2\varepsilon)t \quad\quad \hbox{(by (\ref{16bb}) and (\ref{15bbb}))}\\ &=& \|y- F(x)\|-(\rho- 3\varepsilon) t = \|y- F(x)\|-(\rho- 3\varepsilon)\|x'-x\|. \end{eqnarray*} It remains to apply the criterion of Theorem \ref{secmil}. 
\endproof A slight modification of the proof allows to get the following \begin{theorem}\label{t2} Assume that $F:X\to Y$ satisfies the Lipschitz condition in a neighborhood of $\overline x$ and, moreoover, there are a homogeneous set-valued mapping ${\mathcal H}: X\rightrightarrows Y$ with norm compact values and $\beta\ge 0$ such that (\ref{eqBasic1}) holds and \begin{equation} \label{eq5} F(x+h) - F(x)\subset {\mathcal H}(h) + \beta\| x'-x\|B_Y. \end{equation} \noindent Then ${\rm sur} F(\overline x) \geq \rho-\beta$. \end{theorem} This theorem, in turn, allows us to look at what happens when a Lipschitz mapping is approximated by a bunch of linear operators. Indeed, if ${\mathcal T}$ is a collection of linear operators from $X$ to $Y$, then the set-valued mapping $X\ni x\longmapsto {\mathcal H}(x):=\{Tx:\; T\in{\mathcal T}\}$ is of course positively homogeneous. It is an easy matter to see that ${\mathcal H}$ inherits some properties of ${\mathcal T}$: for us it is important to observe that when ${\mathcal T}$ is (relatively) norm compact in ${\mathcal L}(X,Y)$ with the norm $\| T\|=\sup \{\| Tx\|:\; \| x\|\le 1\}$, then so are the values of ${\mathcal H}$, if ${\mathcal T}$ is bounded, then the values of ${\mathcal H}$ are also bounded etc.. Thus we come to the following conclusion. \begin{theorem}\label{t3} Assume that for a given $\overline x\in{\rm dom}~ F$ there is a convex subset ${\mathcal T} \subset {\mathcal L}(X,Y)$ which is norm compact in ${\mathcal L}(X,Y)$ and has the following two properties: (a) there is a $\beta>0$ such that for any $x,x'$ in a neighborhood of $\overline x$ there is a $T\in {\mathcal T}$ such that \begin{equation}\label{eq8} \|F(x)-F(x') - T(x-x')\|\le \beta\| x-x'\|; \end{equation} \indent (b) there are $\rho >0$ and $\varepsilon >0$ such that for any $T\in T$ \begin{equation}\label{eq9} \varepsilon\rho B_Y\subset T(\varepsilon B_X). \end{equation} \noindent Then ${\rm sur} F(\overline x)\ge \rho-\beta$. \end{theorem} Scalarization formulas first appeared in \cite{AI84} for mappings between finite dimensional spaces and \cite{AK85} for mappings between Fr\'echet smooth spaces, although scalarized coderivatives were considered already in \cite{AI81,AK81}. The very term ``coderivative" was introduced in \cite{AI81}. The concept of prederivative was introduced in \cite{AI81} and a characterization of directional compactness in \cite{AI97}, see also \cite{JT95a} for an earlier result. Theorems \ref{t1} and \ref{t2} will appear in \cite{CFI}. Theorem \ref{t3} was proved in \cite{CF13}. An earlier result without constraints on the domain of the mapping was proved by P\'ales in \cite{ZP97} We also refer to \cite{CFI} for a shorter proofs of the last theorem. Note that the convexity requirement in Theorem \ref{t3} is essential (consider, for instance, $F(x)=|x|: I\!\!R\to I\!\!R$ and ${\mathcal T}$ containing two operators $T_1(x)=x$ and $T_2(x)=-x$). Because of this requirement the estimate provided by Theorem \ref{t3} is generally less precise than those of Theorems \ref{subcrit4} and \ref{t1} (consider for instance the mapping $I\!\!R^2\toI\!\!R:\; F(x_1,x_2)= |x_1|-|x_2|$), but it can be easier to apply in certain cases (e.g. in the finite dimensional case when we can take the generalized Jacobian as ${\mathcal T}$ - see \cite{FHC76b}). \subsection{Polyhedral sets and mappings} This subsection contains some elementary results concerning geometry of polyhedral sets in $I\!\!R^n$ and regularity of set-valued mappings with polyhedral graphs. 
Deeper problems associated with variational inequalities over convex polyhedral sets will be discussed in the next section. \begin{definition}[polyhedral sets]\label{poly}{\rm A {\it convex polyhedral set} (or a {\it convex polyhedron}) $Q\subset I\!\!R^n$ is an intersection of finitely many closed linear subspaces and hyperplanes, that is \begin{equation}\label{polyd} Q=\{x\inI\!\!R^n:\; \langle x_i^*,x\rangle\le\alpha_i,\ i=1,\ldots,k;\; \langle x_i^*,x\rangle=\alpha_i,\; i=k+1,\ldots,m \} \end{equation} for some nonzero $x_i^*\inI\!\!R^n$ and $\alpha_i\inI\!\!R$. Following \cite{DR} we shall use the term {\it polyhedral set} for finite unions of convex polyhedra. } \end{definition} Clearly, any polyhedral set is closed. Also: as any linear equality can be replaced by two linear inequalities, we can represent any polyhedral set by means of a system of linear inequalities only. Elementary geometric argument allow to reveal one of the most fundamental property of polyhedral sets: {\it orthogonal projection of a polyhedral set is a polyhedral set}. In fact a linear image of a polyhedral set is polyhedral (see \cite{RTR} for this and other basic properties of polyhedral sets). A set-valued mapping $R^n\rightrightarrows I\!\!R^m$ is (convex) polyhedral if so is its graph. Our primary interest in this section is to study regularity properties of such mappings. \begin{proposition}[local tangential representation]\label{loctan} Let $Q\subset I\!\!R^n$ be a polyhedral set and $\overline x\in Q$. Then there is an $\varepsilon >0$ such that $$ Q\cap B(\overline x,\varepsilon)= \overline x+ T(Q,\overline x)\cap(\varepsilon B). $$ \end{proposition} As an immediate consequence, we conclude that {\it regularity properties of a polyhedral set-valued mapping with closed graph at a point of the graph are fully determined by the corresponding properties at zero of its graphical derivative at the point.} One more useful corollary concerns normal cones of a polyhedral sets. \begin{proposition}\label{locnorm} Let $Q\subset I\!\!R^n$ be a polyhedral set. Then for any $\overline x\in Q$ there is an $\varepsilon>0$such that $N(Q,x)\subset N(Q,\overline x)$ for any $x\in Q\cap B(\overline x,\varepsilon)$. \end{proposition} Our first result is the famous Hoffmann theorem on error bounds for a system of linear inequalities. Set $a=(\alpha_1,\ldots,\alpha_m)\inI\!\!R^m$ and let $Q(a)$ be defined by (\ref{polyd}). \begin{theorem}[Hoffmann]\label{hoffmann} Given $x_i^*\inI\!\!R^n$. Then there is a $K>0$ such that the inequality $$ d(x,Q(a))\le K\Big(\sum_{i=1}^k(\langle x_i^*,x\rangle-\alpha_i)^+ +\sum_{i=k+1}^m|\langle x_i^*,x\rangle-\alpha_i|\Big) $$ holds for all $x\inI\!\!R^n$ and all $a\inI\!\!R^m$ such that $Q(a)\neq\emptyset$. \end{theorem} \proof We shall apply Theorem \ref{conver}. Take an $a$ and set $$ f(x) = \sum_{i=1}^k (\langle x_i^*,x\rangle-\alpha_i)^+ +\sum_{i=k+1}^m |\langle x_i^*,x\rangle -\alpha_i| . $$ Then $Q(a)= [f\le 0]$. Set $$ \begin{array}{ll} I_1(x)=\{i\in\{1,\ldots,k\}: \; \langle x_i^*,x\rangle\le\alpha_i \},& J_+(x)=\{i\in\{1,\ldots,m\}: \; \langle x_i^*,x\rangle>\alpha_i \};\\ I_0(x)=\{i\in\{k+1,\ldots,m\}: \; \langle x_i^*,x\rangle=\alpha_i \},& J_-(x)=\{i\in\{k+1,\ldots,m\}: \; \langle x_i^*,x\rangle<\alpha_i \} \end{array} $$ Then $$ \partial f(x) =\sum_{i\in I_1(x)}[0,1]x_i^*+\sum_{i\in I_0(x)}[-1,1]x_i^* +\sum_{i\in J_+(x)} x_i^*-\sum_{i\in J_-(x)} x_i^*. $$ If $x\not\in Q(\alpha)$, then $0\not\in\partial f(x)$ and $d(0,\partial f(x))>0$. 
We observe now that the dependence of $\partial f(x)$ of $x$ and $a$ is fully determined by the decomposition of the index set ${1,\ldots,m}$. Let $\Sigma$ be the collection of all decompositions of the index set into four subsets $I_1,\ I_0,\ J_+,\ J_-$ such that $I_1\subset\{1,\ldots,k\}$, $I_0,J_-\subset \{k+1,\ldots,m\}$ and $$ 0\not\in \sum_{i\in I_1}[0,1]x_i^*+\sum_{i\in I_0}[-1,1]x_i^* +\sum_{i\in J_+} x_i^*-\sum_{i\in J_-} x_i^*. $$ For any $\sigma\in\Sigma$ denote by $\gamma(\sigma)$ the distance from zero to the set in the right-hand side of the above inclusion, and let $K$ stand for the upper bound of $\gamma(\sigma)^{-1}$ over $\sigma\in\Sigma$. Then $K<\infty$ since $\Sigma$ is a finite set. Clearly, $K$ does not depend on either $a$ or $x$. On the other hand, $K\partial f(x)\ge 1$. It remains to refer to Theorem \ref{conver} to conclude the proof. \endproof As an immediate consequence, we get \begin{theorem}[regularity of convex polyhedral mappings]\label{polyreg} Let $F: I\!\!R^n\rightrightarrows I\!\!R^m$ be a polyhedral set-valued mapping. Then (a) there is a $K>0$ such that $d(y,F(\overline x))\le K\| x-\overline x\|$ for any $\overline x\in{\rm dom}~ F $ and any $(x,y)\in{\rm Graph}~ F$; (b) there is a $K>0$ (different from that in (a)) such that $d(x,F^{-1}(y))\le Kd(y,F(x))$ for any $x\in{\rm dom}~ F$ and $y\in F(X)$. \end{theorem} \noindent and \begin{theorem}[global subtransversality of convex polyhedral sets]\label{subpoly} Any two convex polyhedral sets $Q_1$ and $Q_2$ with nonempty intersection are globally subtransversal: there is a $K>0$ such that $$ d(x,Q_1\cap Q_2)\le K(d(x,Q_1)+ d(x,Q_2)). $$ \end{theorem} To prove Theorem \ref{polyreg} we have to apply the Hoffmann estimate to the graph of $F$. Concerning Theorem \ref{subpoly}, it should be observed that global transversality does not imply transversality at any point. As a simple example, consider the half spaces $S_1=\{x:\; \langle x^*,x\rangle \ge 0\}$ and $S_2=\{x:\; \langle x^*,x\rangle \le 0\}$ with some $x^*\neq 0$. The intersection of the sets is ${\rm Ker}~ x^*\neq \emptyset$. But the inclusions $x_1-x\in S_1$ and $x_2-x\in S_2$ imply $\langle x^*,x_1\rangle\ge \langle x^*,x_2\rangle$, hence (see Definition \ref{vartrans}) $S_1$ and $S_2$ are not transversal at points of ${\rm Ker}~ x^*$. The results easily extend to all (not necessarily convex) polyhedral mappings. \begin{theorem}[subregularity of polyhedral mappings]\label{semireg} Let $F: I\!\!R^n\rightrightarrows I\!\!R^m$ be a semi-linear set-valued mapping with closed graph. Then (a) there is a $K>0$ such that for any $\overline x\in{\rm dom}~ F$ there is an $\varepsilon >0$ such that $d(y,F(\overline x))\le K\| x-\overline x\|$ for all $(x,y)\in{\rm Graph}~ F$ such that $\| x-\overline x\|<\varepsilon$; (b) there is a $K>0$ (different from that in (a)) such that for any $(\bar x,\bar y)\in {\rm Graph}~ F$ there is an $\varepsilon >0$ such that $d(x,F^{-1}(y))\le K d(y,F(x))$ if $\| x-\overline x\|<K\varepsilon$. Thus $F$ is subregular at any point of its graph. \end{theorem} \proof We have $F(x)=\cup_{i=1}^k F_i(x)$, where all $F_i$ are convex polyhedral set-valued mappings. By Theorem \ref{polyreg} for any $i$ there is a $K_i$ such that $d(y,F_i(x))\le K_i\| x-\overline x\|$ for any $\overline x\in{\rm dom}~ F_i$ and any $(x,y)\in{\rm Graph}~ F_i$. Now fix some $\overline x\in{\rm dom}~ F$, and let $I=\{i:\;\overline x\in{\rm dom}~ F_i\}$. 
Choose an $\varepsilon>0$ so small that $d(x,{\rm dom}~ F_i)>\varepsilon$ if $i\not\in I$ and $\| x-\overline x\|<\varepsilon$. (Clearly, such an $\varepsilon$ can be found as all ${\rm dom}~ F_i$ are polyhedral sets, hence closed.) If now $y\in F(x)$ and $\| x-\overline x\|<\varepsilon$, then $I(x,y)=\{i:\; y\in F_i(x)\}\subset I$. On the other hand, as we have seen, there are $K_i$ such that $y\in F_i(x)$ implies that $d(y,F_i(\overline x))\le K_i\| x-\overline x\|$. Thus, if $y\in F(x)$ and $\| x-\overline x\|<\varepsilon$, then $$ d(y,F(\overline x))\le\max_{i\in I(x,y)}d(y,F_i(\overline x))\le (\max_i K_i)\| x-\overline x\|. $$ This proves the first statement. To prove the second, we apply the first to $F^{-1}$ and find $K$ and $\varepsilon$ such that $d(x,F^{-1}(\overline y))\le K\| v-\overline y\|$ if $v\in F(x)$ and $\| v-\overline y\|<\varepsilon$. If $d(\overline y,F(x)) <\varepsilon$, it follows that $d(x,F^{-1}(\overline y))\le Kd(\overline y,F(x))$. This inequality trivially holds if $d(\overline y,F(x)) \ge\varepsilon$ and $\| x-\overline x\|\le K\varepsilon$. \endproof The property in the second part of the theorem falls short of metric regularity because it does not guarantee that the $\varepsilon$ will be uniformly bounded away from zero if we slightly change $\overline y$. The following simple example illustrates the phenomenon. \begin{example}{\rm Let $X= Y= R,\ Y$, and let $$ F_1(x)=\left\{\begin{array}{cl} I\!\!R_+,&{\rm if}\; x>0,\\ I\!\!R,&{\rm if}\; x=0,\\ \emptyset,&{\rm if}\; x<0\end{array}\right.;\qquad F_2(x)=\left\{\begin{array}{cl} I\!\!R_-,&{\rm if}\; x<0,\\ I\!\!R,&{\rm if}\; x=0,\\ \emptyset,&{\rm if}\; x>0\end{array}\right. $$ and $F(x) =F_1(x)\cup F_2(x)$. Fix some $y>0$ and $x<0$. Then $F^{-1}(y)= H_+$ and $d(x,F^{-1}(y))=|x|$, $d(y,F(x))= y$ so that for no $K$ the inequality $d(x,F^{-1}(y)\le Kd(y,F(x))$ holds in a neighborhood of $(0,0)$. } \end{example} \begin{corollary}[subtransversality of polyhedral sets]\label{subsemi} Any two semi-linear sets $Q_1$ and $Q_2$ with nonempty intersection are subtransversal at any common point of the sets. $$ d(x,Q_1\cap Q_2)\le K(d(x,Q_1)+ d(x,Q_2)). $$ \end{corollary} To conclude, we mention that {\it for any polyhedral mapping $F: R^n\rightrightarrows I\!\!R^n$ the set of critical values (that is such $y\inI\!\!R^m$ such that ${\rm sur} F(x|y)=0$ for some $x\in F^{-1}(y)$) is a polyhedral set of dimension smaller than $m$.} This will immediately follow from the semi-algebraic Sard theorem stated in the next subsection. \subsection{Semi-algebraic mappings, stratifications and the Sard theorem.} Most of the results of this subsection, including the Sard theorem can be extended to a wide class of objects, so called {\it definable} sets, mappings and functions. We however confine ourselves here to semi-algebraic functions whose definition is much simpler (compare with the general definition of definability) and does not require any specific effort\footnote{It should be mentioned that recently Barbet, Dambrine, Daniilidis, Rifford \cite{BDDR} proved a remarkable result containing extensions of the Sard theorem to some other important classes of non-smooth functions.} We shall concentrate basically on two topics: consequences of the general theory and studies associated with semi-algebraic geometry, mainly in connection with the Sard theorem. 
\subsubsection{Basic properties {\rm (see \cite{BCR,MC})}.} A semi-algebraic set in $I\!\!R^n$ is by definition a union of finitely many sets of solutions of a finite system of polynomial equalities and inequalities of $n$ variables: $$ \{ x\in I\!\!R^n:\; P_i(x)=0,\ i=1,\ldots,k,\; P_i(x)<0,\ i=k+1,\ldots, m\}. $$ As immediately follows from the definition, every algebraic set is semi-algebraic, every polyhedral set is semi-algebraic, unions and intersections of finite collections of semi-algebraic sets are again semi-algebraic. The main fact of the semi-algebraic geometry is the deep Tarski-Seidenberg theorem which roughly speaking says that a linear projection of a semi-algebraic set is a semi-algebraic set. This theorem determines stability of the class of semi-algebraic sets with respect to a broad variety of transformations. A mapping (no matter single or set-valued) is semi-algebraic if its graph is semi-algebraic. Here is a list of some basic properties of semi-algebraic sets and mappings: $\bullet$ \ the closure and interior of a semi-algebraic set is semi-algebraic; $\bullet$ \ Cartesian product of semialgebraic sets is semi-algebraic; $\bullet$ \ composition of semi-algebraic mappings is semi-algebraic; $\bullet$ \ image and preimage of a semi-algebraic set under a semi-algebraic mapping is semi-algebraic; $\bullet$ \ derivative of a (single-valued) semi-algebraic mapping is semi-algebraic; $\bullet$ \ the upper and lower bound of a finite collection of extended-real-valued semi-algebraic functions is semi-algebraic; $\bullet$ \ if we have a semi-algebraic function of two (vector) variables, then its upper or lower bound with respect to one of the variables on a semi-algebraic set is semi-algebraic; $\bullet$ \ if $F$ is a semi-algebraic set-valued mapping such that every $F(x)$ is a finite set, then the number of elements in each $F(x)$ does not exceed certain finite $N$. For us, in the context of variational analysis and, especially, regularity theory, the most important is that $\bullet$ \ subdifferential mapping of a semi-algebraic function or the coderivative mapping of a semi-algebraic map is semi-algebraic (no matter of which subdifferential on $I\!\!R^n$: Fr\'echet, Dini-Hadamard, limiting or Clarke, we are talking about); $\bullet$ \ slope of a semi-algebraic function is a semi-algebraic function of the point; $\bullet$ \ rates of regularity of a semi-algebraic functions are also semi-algebraic functions of the point of the graph. \begin{definition}\label{stratd}{\rm A finite partition $(M_i)$ of a set $Q\subsetI\!\!R^n$ is called} $C^r$-Whitney stratification of $Q$ {\rm if each $M_i$ is a $C^r$-manifold and the following two properties are satisfied: (a) if $(x_k)\subset M_i$ converges to some $x$ belonging to another element $(M_j)$ of the partition, and the unit normal vectors $v_k\in N_{x_k}M_i$ converge to some $v$, then $v\in N_x M_j$; (b) if $M_j\cap {\rm cl} M_i\neq\emptyset$, then $M_j\subset {\rm cl} M_i$. } \end{definition} \noindent Elements of partitions are usually called {\it strata}. The following remarkable fact is due to S. \L ojasievicz: \begin{theorem}[stratification theorem]\label{strat} Given a semi-algebraic set $Q\subset I\!\!R^n$ and an $r \in \mathbb{N}$. Then $Q$ admits a Whitney stratification into semi-algebraic $C^r$-manifolds. \end{theorem} Of course, stratification is not unique. But it is easy to understand that maximal dimensions of the strata coincide for all Whitney stratifications. 
This observation justifies the following \begin{definition} {\rm The} dimension $\dim Q$ {\rm of a semi-algebraic set $Q$ is the maximal dimension of the strata in Whitney stratifications of $Q$}. \end{definition} The most important consequence of the stratification theorem is a Sard-type theorem for semi-algebraic set-valued mappings, \begin{definition}\label{sardd} {\rm Let $F: I\!\!R^n\rightrightarrows I\!\!R^m$ be a set-valued mapping with semi-algebraic graph, and let $\partial$ stand either for the limiting or for the Clarke subdifferential. A point $\overline y\in I\!\!R^m$ is a {\it critical value} of $F$ if there is an $x\inI\!\!R^n$ such that $y\in F(x)$ and $0\in D^*F(x|y)(y^*)$ for some $y^*\neq 0$. } \end{definition} \begin{theorem}[semi-algebraic Sard theorem]\label{sardsa} Critical values of a semi-algebraic set-valued mapping $F: I\!\!R^n\rightrightarrows I\!\!R^m$ form a semi-algebraic set of dimension not exceeding $m-1$. In particular an extended-real valued semi-algebraic function can have at most finitely many critical values. \end{theorem} For the theory of semi-algebraic sets and mappings see \cite{BCR,YC}. The Sard theorem was first proved by Bolte-Daniilidis-Lewis \cite{BDL06} for real-valued functions and then by Ioffe \cite{AI08} for set-valued mappings (in both cases the theorems were stated for more general classes of objects - semi-analytic functions in \cite{BDL06} and arbitrarily stratifiable maps in \cite{AI08}). \subsubsection{Transversality.} We are finally ready to extend transversality theory (not just the definition) beyond the smooth domain. To begin with, we observe that a direct extension of Proposition \ref{prethom} does not hold if $F$ is not smooth. \begin{example}\label{transfail}{\rm Consider the function $$ f(x,w) = |x|-|w| $$ viewed as a mapping from $I\!\!R^2$ into $I\!\!R$. This mapping is clearly semi-algebraic, even polyhedral. It is easy to verify that the mapping is regular at every point with the modulus of surjection identically equal to one (if we take the $\ell^{\infty}$ norm in $I\!\!R^2$). Furthermore $$ Q = f^{-1}(0)=\{ (x,w):\; |x|=|w|\} $$ and the restriction to $Q$ of the projection $(x,w)\to w$ is also a regular mapping with the modulus of surjection equal one. However, the partial mapping $x\to f(x,0)= |x|$ is not regular at zero. } \end{example} However, the following statement is true. \begin{proposition}[\cite{AI11b}]\label{regpar} Let $F: I\!\!R^m\times I\!\!R^k\rightrightarrows I\!\!R^n$ be a semi-algebraic set-valued mapping with locally closed graph, and let $\overline y\in F(\overline x,\bar p)$. Assume that (a) $F$ is regular at $((\overline x,\bar p),\overline y)$; (b) the set-valued mapping $ I\!\!R^m\timesI\!\!R^n\rraI\!\!R^k$ associating the set $\{ p:\ y\in F(x,p)\}$ with any $(x,y)\inI\!\!R^n\timesI\!\!R^n$ is regular at $((\overline x,\overline y),\bar p)$; (c) there is a Whitney stratification $(M_i)$ of ${\rm Graph}~ F$ such that the restriction of the projection $(x,p)\to p$ to the set $S_i=\{ (x,p): (x,p,\overline y)\in M_i\}$, where $M_i$ is the stratum containing $(\overline x,\bar p,\overline y)$, is regular at $(\overline x,\bar p)$. Then $F_{\bar p}: x\mapsto F(x,\bar p)$ is regular at $(\bar x,\bar y)$. \end{proposition} It is now possible to state and prove a set-valued version of Theorem \ref{thomsm}. \begin{theorem}\label{thomset} Let the mapping $F: I\!\!R^n\timesI\!\!R^k\rightrightarrows I\!\!R^m$ with closed graph and a closed set $S\subsetI\!\!R^m$ be both semi-algebraic. 
Denote by $F_p$ the set-valued mapping $x\mapsto F(x,p)$. If $F$ is transversal to $S$, then for all $p$, with possible exception of a semi-algebraic set of dimension smaller than $k$, $F_p$ is transversal to $S$. \end{theorem} \proof The theorem is trivial if $F(x,p)\cap S=\emptyset$ for all $(x,p)$, so we assume that $F(x,p)$ meets $S$ for some values of the arguments. Then $(0,0)$ is a regular value of the mapping $\Psi: I\!\!R^n\times I\!\!R^m\timesI\!\!R^k\to I\!\!R^m$, $\Psi(x,y,p)= (F(x,p)-y)\times (S-y)$. Let $Q= \Psi^{-1}(0,0)$. This is a semi-algebraic set, so by Theorem \ref{sardsa} there is a semi-algebraic set $C_0\inI\!\!R^k$ such that $\dim C_0<k$ and every $p\inI\!\!R^k\backslash C_0$ is a regular value of the restriction $\pi|_Q$ of the projection $(x,y,p)\mapsto p$. Take an $r> N+m-k$, and let $(M_i)_{ i=1,\ldots r}$ be a $C^1$-Whitney stratification of ${\rm Graph}~ \Psi$ with all $M_i$ being semi-algebraic manifolds. Then for any $i$ there is a semi-algebraic set $C_i\subset I\!\!R^k$ such that any $p\in I\!\!R^k\backslash C_i$ is a regular value of $\pi|_{M_i}$. The union $C=\bigcup_{i=0}^r C_i$ is also a semi-algebraic set of dimension smaller than $k$ and, as we have just seen, for any $p\not\in C$ all of the assumptions of Proposition \ref{regpar} are satisfied for $\Psi$. Therefore $(0,0)$ is a regular value of $\Psi_p$. By Proposition \ref{tran1} this means that $F_p$ is transversal to $S$.\endproof \section{Some applications to analysis and optimization} In this section we give several examples illustrating the power of regularity theory as a working instrument for treating various problems in analysis and optimization. We do not try each time to prove the result under the most general assumptions. The purpose is rather to demonstrate how regularity considerations help to understand and/or simplify the analysis of one or another phenomenon. Again, it should be said that some interesting areas of application of metric regularity remain outside the scope of the paper. Just mention the role of regularity in numerical optimization (see e.g. \cite{DR,KK02,KK09}) or connections with metric fixed point theory (e.g. \cite{AAGDO,DF11,DF12,AI11a,AI14}) or recent developments associated with tilt stability, quadratic growth etc. (e.g. \cite{AG08,AG14,DI15,DL13,KK02,PR98} ). \subsection{Subdifferential calculus} In each of the three calculus rules stated in Proposition \ref{basrul} we assume one function Lipschitz. One of the reasons (especially important in the proof of the exact sum rule) is that Lipschitz functions have bounded subdifferentials. But what happens when both functions are not Lipschitz? For instance, what can be said about normal cone to an intersection of sets? As in the calculus of convex subdifferentials, we do need some qualification conditions to ensure the result. \begin{theorem}\label{sumtran} Let $X$ be a Banach space and $S_i,\ i=1,2$ are closed subsets of $X$. Let further $\overline x\in S= S_1\cap S_2$. If $S_1$ and $S_2$ are subtransversal at $\overline x$, then $$ N_G(S,\overline x)\subset N_G(S_1,\overline x)+ N_G(S_2,\overline x). $$ \end{theorem} Explicitly, this theorem was first mentioned in \cite{AI00} but de facto it was proved already in \cite{AI89a} (see also \cite{IP96}, Proposition 3). It turns out that subtransversality is the most general of all so far available conditions that would guarantee the inclusion. The most popular subdifferential transversality condition (condition (b) of Theorem \ref{F5}) may be much stronger. 
The inclusion is among the most fundamental facts of the subdifferential calculus: enough to mention that in the majority of publications on the subject it is used as the starting point for deriving all other calculus rules. Below is a sketch of the proof of the theorem for the finite dimensional situation. \proof We need the following elementary and/or well known facts about functions on and sets in $I\!\!R^n$: $\bullet$ \ $\hat N(Q,x)\cap B= \hat{\partial} d(\cdot,Q)(x)$ if $x\in Q$; $\bullet$ \ if $x^*\in\hat{\partial}d(\cdot,Q)(x)$ and $u\in Q$ is a closest point to $x$, then $x^*\in \hat{N}(Q,u)$; $\bullet$ \ if $x\in Q$ and $f(\cdot)$ is nonnegative, equal to zero at $x$ and $f(u)\ge d(u,Q)$ in a neighborhood of $x$, then $\hat{\partial} d(\cdot,Q)(x)\subset\hat{\partial}f(x)$. Combining this with the definition of the limiting subdifferential, we conclude that for $Q$, $f$ and $x$ as above, $\partial d(\cdot,Q)(x)\subset \partial f(x)$ - a fact that is surprisingly missing from monographic publications. By the assumption there is a $K>0$ such that $d(x,S)\le K(d(x,S_1)+d(x,S_2))$, so applying the above to $f(x) = K(d(x,S_1)+d(x,S_2))$ along with the exact sum rule of Proposition \ref{basrul}, we conclude that $\partial d(\cdot,S)(\overline x)\subset K(\partial d(\cdot, S_1)(\overline x)+ \partial d(\cdot, S_2)(\overline x))$ and the result follows.\endproof \subsection{Necessary conditions in constrained optimization.} We discuss here two ways to apply regularity theory to necessary optimality conditions and then a general approach to necessary conditions associated with one of them. Both substantially differ from classical proofs that include linearization and separation as the major steps (see e.g. \cite{DM65,RVG77,IT,SMR76,SMR76c}). Verification of the relevance of linearization is usually the central and most difficult part of the proofs. It is established under certain constraint qualifications which always imply and often are equivalent to regularity of the constraint mapping (as in the case of the popular Mangasarian-Fromovitz and Slater qualification conditions) (see e.g. \cite{SMR76} where the connection with regularity was made explicit). We refer to \cite{AK85,BM88,BM} for extensions of the classical approach to nondifferentiable optimization in which convex separation is replaced by an ``extremal principle''. The point is however that a fuller use of regularity arguments makes the way to necessary conditions much shorter. To begin with we shall consider the problem \begin{equation}\label{P1} {\rm minimize} \; f(x),\quad {\rm s.t.}\; F(x)\in Q, \; x\in C \end{equation} (where $F: X\to Y$ is single-valued and $Q\subset Y$ and $C\subset X$ are closed sets) assuming for simplicity that both $X$ and $Y$ are finite dimensional although the results have been originally proved in much more general situations. \subsubsection{Non-covering principle.} So let $\overline x\in C$ be a solution of the problem. Let $\Psi$ stand for the restriction to $C$ of the set-valued mapping $x\mapsto \{f(x)-I\!\!R_-\}\times(F(x)-Q)$ from $X$ into $Z=I\!\!R\times Y$. Clearly, this mapping cannot be regular near $(\overline x,(f(\overline x),0)) $. (Indeed, if $U$ is a small neighborhood of $\overline x$, then $\Psi(U)$ cannot contain points $(f(\overline x)-\varepsilon,0)$.) It follows that the negation of any condition sufficient for regularity is a necessary condition for $\overline x$ to be a local solution in the problem. Applying Theorem \ref{F7} and Corollary \ref{CF7} we get the following result.
\begin{theorem}\label{N1} Assume that $F: I\!\!R^n\to I\!\!R^m$ is Lipschitz in a neighborhood of $\overline x$. If $\overline x$ is a local solution of (\ref{P1}), then there is a nonzero pair $(\lambda, y^*)$ such that $\lambda \ge 0$, $y^*\in N(Q,\overline y)$ and \begin{equation}\label{P2} 0\in\partial(\lambda f + (y^*\circ F|_C))(\overline x). \end{equation} \end{theorem} This formulation needs some comments. We have stated the theorem in finite dimensions for simplicity, its infinite dimensional version can be found e.g. in \cite{AI87}. Note further that a more customary formulation would be \begin{equation}\label{P3} 0\in \partial (\lambda f + (y^*\circ F))(\overline x) + N(C,\overline x). \end{equation} \noindent This condition is usually more convenient (constraints are separated) but in general weaker than (\ref{P2}). It is equivalent to (\ref{P2}) if e.g. $C=X$ (obvious) or if both $f$ and $F$ are continuously differentiable and the constraint qualification \begin{equation}\label{P2b} 0\in F'(\overline x)y^*+ N_C(\overline x),\quad y^*\in N_Q(F(\overline x)) \;\Rightarrow\; y^*=0 \end{equation} is satisfied (see e.g. \cite{RW}, Example 10.8) which means that $F|_C$ is transversal to $Q$ at $\overline x$ (Proposition \ref{regpar}). Finally, we observe that the necessary condition is stated in the Lagrangian form. Again, such condition can be substantially more precise than the "separated" condition $0\in\lambda\partial f(\overline x) +\partial(y^*\circ F)(\overline x)$ (say in the absence of the constraint $x\in C$) which in various forms often appears in literature. Both conditions are equivalent if, say $f$ is continuously differentiable. The ``non-covering" approach to necessary optimality condition was first applied probably by Warga \cite{JW76} in a fairly classical setting of the standard optimal control problem. Warga refers not to the Lyusternik- Graves theorem but to the result of Yorke \cite{JY72} which is a weakened version of the theorem for integral operators associated with ordinary differential equations. But already the same year the controllability - optimality dichotomy appeared as the main tool of proving necessary conditions for nonsmooth optimal control in the papers by Clarke \cite{FHC76c} and Warga \cite{JW76a}. In the context of an abstract optimization problem a non-covering criterion seems to have been first applied by Dmitruk-Milyutin-Osmolowski in \cite{DMO} to problems with finitely many functional constraints and recently, to problems with mixed structure (partly smooth and partly close to convex), by Avakov, Magaril-Il'yaev and Tikhomirov \cite{AMT13}. In the next subsection 8.3 we demonstrate the work of this techniques for an abstract relaxed optimal control problem. Theorem \ref{N1} in an infinite dimensional setting was obtained in \cite{AI87} with the same proof based on the non-covering criterion. \subsubsection{Exact penalty.} The immediate predecessor of the approach we are going to discuss here was the idea of an ``exact penalty" \ offered by Clarke \cite{FHC76a,FHC83}: if $f$ attains a local minimum on a closed set $S$ at $\overline x\in S$ and satisfies the Lipschitz condition near $\overline x$, then $\overline x$ is a point of unconstrained minimum of $g(x)=f(x)+K d(x,S)$ with $K$ greater than the Lipschitz constant of $f$ near $\overline x$. Clarke used a fairly sophisticated reduction technique to apply this idea to problems with functional constraints. 
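The role of the requirement $K>L$, where $L$ is the Lipschitz constant of $f$ near $\overline x$, is transparent already in the simplest one-dimensional illustration: take $f(x)=x$ and $S=[0,\infty)$, so that $\overline x=0$ and $L=1$. Then
$$
g(x)=f(x)+Kd(x,S)=\left\{\begin{array}{cl} x,& x\ge 0,\\ (1-K)x,& x<0,\end{array}\right.
$$
and $\overline x=0$ is an unconstrained minimum of $g$ if and only if $K\ge 1$; for $K<1$ the penalty is too weak and $g(x)<g(0)$ for all $x<0$.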
The arguments however are dramatically simplified by directly invoking regularity considerations. Let us return to the problem (\ref{P1}), assuming as above that $F$ is single-valued and Lipschitz, $X=I\!\!R^n$, $Y=I\!\!R^m$, and set as in Theorem \ref{F7} $$ \Phi(x) =\left\{\begin{array}{cl} F(x)-Q,&{\rm if}\; x\in C;\\ \emptyset,&{\rm otherwise.}\end{array} \right. $$ Then our problem can be reformulated as \begin{equation}\label{P2a} {\rm minimize} \; f(x),\quad {\rm s.t.}\; 0\in\Phi(x). \end{equation} Suppose that $\Phi$ is subregular at $(\overline x,0)$. This means that there is some $K_0>0$ such that $d(x,\Phi^{-1}(0))\le K_0d(0,\Phi(x))$ for all $x$ of a neighborhood of $\overline x$. But $\Phi^{-1}(0)$ is the feasible set of our problem, so that there is some other $K_1>0$ such that the function $f(x) +K_1 d(0,\Phi(x)) $ attains a local minimum at $\overline x$ or, equivalently, the function $f(x)+K_1d(0,F(x)-Q)$ attains a local minimum at $\overline x$ subject to $x\in C$. The last function is Lipschitz continuous near $\overline x$, hence there is a $K$ such that \begin{equation}\label{P8} g(x) = f(x)+K(d(0,F(x)-Q) + d(x,C)) \end{equation} attains an unconstrained local minimum at $\overline x$. If, on the other hand, $\Phi$ is not subregular at $\overline x$, Theorems \ref{F1} and \ref{F7} together imply that $0\in \partial(y^*\circ F)(\overline x)+ N(C,\overline x)$ for some nonzero $y^*\in N(Q,F(\overline x))$. From here we easily get a weakened version of Theorem \ref{N1} with the Lagrangian condition replaced by its ``separated'' version $$ 0\in \partial f(\overline x)+ \partial(y^*\circ F)(\overline x)+ N(C,\overline x),\quad y^*\in N(Q,F(\overline x)). $$ This is a definite drawback, as we have already mentioned, which however is counterbalanced by some serious advantages. First we note that $g$ is defined in terms of the original data, which makes it possible to study higher order optimality conditions using this function. This is how such a technique was used for the first time in \cite{AI79} in order to get necessary optimality conditions earlier obtained by Levitin-Milyutin-Osmolowski in \cite{LMO}. Another advantage is that the second approach is more universal. It can work for problems for which using scalarized coderivatives is either difficult or just impossible, as, say, in problems involving inclusions $0\in \Phi (x)$ with general set-valued $\Phi$. This is a typical case in optimal control of dynamic systems described by differential inclusions. Loewen \cite{PDL} was the first to use this approach to prove a maximum principle in a free right end point problem of that sort. The analytic challenge in his proof was to find an upper estimate for the distance to the feasible set. However the next step in the development, the ``optimality alternative'' discussed below, eliminates even the need for such an estimate. \subsubsection{Optimality alternative.} Consider the abstract problem with $(X,d)$ being a complete metric space: \vskip 1mm \centerline{\rm minimize\quad $f(x)$, \ \ subject to \ \ $x\in Q\subset X$.} \begin{theorem}\label{optalt} Assume that $f$ satisfies the Lipschitz condition near $\overline x$, and let $\varphi$ be a nonnegative lsc function on $X$ equal to zero at $\overline x$.
If $\overline x\in Q$ is a local solution to the problem, then the following alternative holds true: \vskip 1mm \noindent$\bullet$ \ either there is a $\lambda >0$ such that the function $\lambda f+\varphi$ has an unconstrained local minimum at $\overline x$; \noindent$\bullet$ \ or there is a sequence $(x_n)\to \overline x$ such that $\varphi(x_n)<n^{-1}d(x_n,Q)$ and the function $x\mapsto \varphi(x)+n^{-1}d(x,x_n)$ attains a local minimum at $x_n$ for each $n$. \end{theorem} We shall speak about {\it regular case} if the first option takes place and {\it singular} or {\it non-regular case} otherwise. \vskip 1mm \proof Indeed, either there is an $R>0$ such that $R\varphi(x)\ge d(x,Q)$ for all $x$ of a neighborhood of $\overline x$, or there is a sequence $(z_n)$ converging to $\overline x$ and such that $n^2\varphi(z_n)<d(z_n,Q)$. In the first case (as $f$ is Lipschitz) we have for $x\not\in Q$ and $u\in Q$ close to $x$ (so that e.g. $d(x,u)< 2d(x,Q)$: $$ f(x)\ge f(u)-Ld(x,u)\ge f(\overline x)-2LR\varphi(x), $$ if $L$ is a Lipschitz constant of $f$. As $X$ is complete and $\varphi$ is lower semicontinuous, we can apply Ekeland's principle to $\varphi$ (taking into account that $\varphi(z_n)<\inf \varphi + n^{-2}d(z_n,Q)$) and find $x_n$ such that $d(x_n,z_n)\le n^{-1}d(z_n,Q)$, $\varphi(x_n)\le \varphi(z_n)$ and $\varphi(x)+n^{-1}d(x,x_n)>\varphi(x_n)$ for $x\neq x_n$. We have finally $$ d(x_n,Q)\ge d(z_n,Q)-d(x_n,z_n)\ge(1-n^{-1})d(z_n,Q)\ge (1-n^{-1})n^2\varphi(z_n)\ge n\varphi(x_n) $$ as claimed.\endproof Thus, a constrained problem reduces to one or a sequence of unconstrained minimization problems. Hopefully, such problems can be easier to analyze thanks to the freedom of choosing $\varphi$ which we call {\it test function} in the sequel. Even before the alternative was explicitly stated it was de facto used to prove the maximum principle in various problems of optimal control \cite{GI93,AI97,RV}. Here is a brief account of how the alternative works for optimal control of systems governed by differential inclusions. \subsubsection{Optimal control of differential inclusion.} As the first example of application of the alternative we shall briefly consider the following problem of optimal control of a system governed by differential inclusion (see also the next subsection 8.3): minimize \begin{equation}\label{cost} \ell(x(0),x(T)) \end{equation} on trajectories of the differential inclusion \begin{equation}\label{difinc} \dot x\in F(t,x), \end{equation} satisfying the end point condition \begin{equation}\label{end} (x(0),x(T))\in S. \end{equation} The natural space to treat the problem is $W^{1,1}$. Let $\overline x(\cdot)$ be a local solution. For any $x(\cdot)\in W^{1,1}$ set $$ \varphi(x(\cdot)) = \int_0^T d(\dot x(t),F(t,x(t)))dt + d((x(0),x(T)), S). $$ Clearly, $\varphi$ is nonnegative and $\varphi(\overline x(\cdot))=0$. Thus, if $\ell$ is a Lipschitz function, we can apply the alternative to get necessary optimality condition. 
According to the alternative, either there is a $\lambda >0$ such that $\overline x(\cdot)$ is a local minimum of $$ \lambda\ell(x(0),x(T))+ d((x(0),x(T)), S) + \int_0^T d(\dot x(t),F(t,x(t)))dt, $$ or there is a sequence $(x_n(\cdot))$ converging to $\overline x(\cdot)$ such that every $x_n(\cdot)$ is infeasible in (\ref{cost})-(\ref{end}) and is a local minimum of the functional $$ d((x(0),x(T)), S) + \int_0^T d(\dot x(t),F(t,x(t)))dt + n^{-1}\Big( \| x(0)-x_n(0)\| +\int_0^T\| \dot x(t)-\dot x_n(t)\|dt\Big). $$ In both cases we get an (unconstrained) Bolza problem. Analysis of such problems requires different techniques and we refer to \cite{AI97,RV} where necessary optimality conditions for the problem were obtained along these lines. A more general result was established a few years later by Clarke \cite{FHC05} (actually the most general for optimal control of differential inclusions so far) but a shorter proof of Clarke's theorem based on the optimality alternative is now also available \cite{AI15}. To conclude, I wish to note that this is not the only possible application of regularity related ideas to optimal control. We can refer to \cite{RV05} for the discussion of the role of metric regularity in the Hamilton-Jacobi theory of optimal control. \subsubsection{Constraint qualification.} The last question we intend to briefly discuss in this subsection concerns constraint qualifications in optimization problems. They often play an important role in proofs, but their basic function is to guarantee that the multiplier $\lambda$ of the cost function in the necessary (e.g. Lagrangian) optimality conditions is positive. The point is that constraint qualifications are often connected with regularity properties of the constraint mapping. We shall discuss just one example. Let us say that the problem is {\it normal} at a certain feasible point if the constraint mapping is regular at the point. The {\it problem is normal} if either the feasible set is empty or the problem is normal at every feasible point. In the case of the problem (\ref{P1}) the constraint mapping is the restriction of $F$ to $C$, so by Theorem \ref{F7} normality is guaranteed if $F$ is transversal to $Q$, that is, if $y^*\in N(Q,F(x))$ and $0\in D^*F|_C(\overline x,0)(y^*)$ together imply that $y^*=0$, which is in turn guaranteed by \begin{equation}\label{P5} 0\in \partial(y^*\circ F)(x) + N(C,x),\quad\&\quad y^*\in N(Q,F(x))\; \Rightarrow y^*=0. \end{equation} This is the now standard constraint qualification in nonsmooth optimization (see e.g. \cite{DR,KK02,BM,RW}). If $f$ and $F$ are continuously differentiable and the sets $C$ and $Q$ are convex, (\ref{P5}) is dual to Robinson's constraint qualification \cite{SMR76}. \subsection{An abstract relaxed optimal control problem.} Here we apply the non-covering principle to get a necessary optimality condition in the problem \begin{equation}\label{glawyp} \text{\rm minimize}\quad f(x) \quad \text{\rm s.t.}\quad F(x,u)=0,\; x\in S,\; u\in U. \end{equation} Here $F: X\times U\to Y$, where $X$ and $Y$ are separable Banach spaces, $S$ is a subset of $X$ and $U$ is a set. The problem is similar to problems with mixed smooth and convex structures studied in \cite{IT,VMT82}. But contrary to \cite{IT,VMT82}, here we do not assume that $F$ is continuously differentiable in $x$. We shall formulate the requirements on $F$ a bit later. First we need to introduce and discuss some necessary concepts.
We say that a continuous mapping $F: X\to Y$ is {\it semi-Fredholm} at $\overline x$ if it has at $\overline x$ a strict prederivative of the form ${\mathcal H}(h)= Ah+ \| h\|Q$, where $A: X\to Y$ is a linear bounded operator that sends $X$ onto a closed subspace of $Y$ of finite codimension and $Q\subset Y$ is a compact set (that can be assumed convex and symmetric). We say furthermore that $S\subset X$ is {\it finite-dimensionally generated} if $S=\Lambda^{-1}(P)$ where $\Lambda: X\to I\!\!R^n$ is a continuous linear operator and $P\subset I\!\!R^n$ is closed. \begin{proposition}[non-covering principle for (\ref{glawyp}) \cite{AI87,GI93}]\label{regfred} Let $F: X\to Y$ be semi-Fredholm at $\overline x$, and let $S$ be a finite-dimensionally generated subset of $X$. Let further $F|_S$ be the restriction of $F$ to $S$, that is the set-valued mapping equal to $\{F(x)\}$ on $S$ and $\emptyset$ outside of $S$. If $F|_S$ is not regular near $\overline x$, then there is a $y^*\neq 0$ such that $0\in\partial_G(y^*\circ F)(\overline x)+N_G(S,\overline x)$. Moreover, the weak$^*$-closure of the set of such $y^*$ with norm $1$ does not contain zero\footnote{More general versions of this result can be found in many publications related to ``point estimates'' and compactness properties of subdifferentials - see e.g. \cite{AI89a,JT95,JT95a,JT99,BM}}. \end{proposition} We intend to use this principle to prove the following theorem. \begin{theorem}\label{lagrange} Let $(\overline x,\overline u)$ be a solution of (\ref{glawyp}). We assume that ({\bf A$_1$}) $f$ satisfies the Lipschitz condition in a neighborhood of $\overline x$; ({\bf A$_2$}) for any $u\in U$ the mapping $F(\cdot,u)$ is Lipschitz in a neighborhood of $\overline x$, and $F(\cdot,\overline u)$ is semi-Fredholm at $\overline x$; ({\bf A$_3$}) $F(x,U)$ is a convex set for any $x$ of a neighborhood of $\overline x$; ({\bf A$_4$}) $S$ is finite-dimensionally generated. Let further ${\mathcal L}(\lambda,y^*,x,u)=\lambda f(x)+\langle y^*,F(x,u)\rangle$ be the Lagrangian of the problem. Then there are $\lambda\ge 0$ and $y^*\in Y^*$ such that the following relations hold true: $$ \begin{array}{ll} \lambda + \| y^*\|>0 & \text{(non-triviality)};\\ 0\in \partial_G{\mathcal L}(\lambda,y^*,\cdot,\overline u)(\overline x) + N_G(S,\overline x)& \text{(Euler-Lagrange inclusion)} ;\\ \langle y^*,F(\overline x,\overline u)\rangle\ge\langle y^*,F(\overline x,u)\rangle,\quad\forall \; u\in U & \text{(the maximum principle)} . \end{array} $$ \end{theorem} \proof Given a finite collection ${\mathcal U}=(u_1,\ldots,u_k)$ of elements of $U$, we define a mapping $\Phi_{{\mathcal U}}: X\timesI\!\!R^k\to Y$ by $$ \Phi_{{\mathcal U}}(x,\alpha_1,\ldots,\alpha_k)= F(x,\overline u)+\sum_{i=1}^{k}\alpha_i(F(x,u_i)-F(x,\overline u)). $$ It is an easy matter to see that this mapping is also semi-Fredholm at $(\overline x,0)$. Consider the problem $$ \text{\rm minimize}\; f(x) \quad \text{\rm s.t.}\;\Phi_{{\mathcal U}}(x,\alpha_1,\ldots,\alpha_k)=0,\; x\in S,\; \alpha_i\ge 0. \eqno{(\bf P_{{\mathcal U}})} $$ Then $(\overline x,0,\ldots,0)$ solves the problem (as immediately follows from ({\bf A$_3$})). Let further $\Psi: X\times I\!\!R^{k+1} \to I\!\!R\times Y$ be defined by $$ \Psi(x,\alpha_0,\ldots,\alpha_k)= (f(x)+\alpha_0,\Phi_{{\mathcal U}}(x,\alpha_1,\ldots,\alpha_k)).
$$ This mapping cannot be regular in a neighborhood of $(\overline x,0,\ldots,0)$ because no point $(f(\overline x)-\varepsilon, 0,\ldots,0)$ can be a value of $\Psi$ at $x\in S$ close to $\overline x$ and $\alpha$ close to zero. It is an easy matter to verify that $\Psi$ is also semi-Fredholm at $(\overline x,0,\ldots,0)$ and we can apply Proposition \ref{regfred}. Set $\hat S= S\times I\!\!R_-^{k+1},\; \hat{{\mathcal L}}(\lambda,y^*,x,\alpha_0,\ldots,\alpha_k)= \lambda(f(x)+\alpha_0)+ \langle y^*,\Psi(x,\alpha_0,\ldots,\alpha_k)\rangle$. By the proposition there are multipliers $(\lambda,y^*)\neq 0$ such that $$ 0\in\partial_G\hat{{\mathcal L}}(\lambda,y^*,\cdot)(\overline x,0,\ldots,0)+ N_G(\hat S,(\overline x,0,\ldots,0)). $$ We have (using the standard rules of subdifferential calculus ) $$ \begin{array}{l} N_G(\hat S,(\overline x,0,\ldots,0))= N_G(\overline x,S)\timesI\!\!R_-^{k+1};\\ \partial_G\hat{{\mathcal L}}(\lambda,y^*,\cdot)(\overline x,0,\ldots,0)\subset \partial_G{\mathcal L}(\lambda,y^*,\cdot,\overline u)(\overline x) \\ \qquad\qquad\qquad\qquad\qquad +(\lambda,\langle y^*,F(\overline x,u_1)-F(\overline x,\overline u)\rangle,\ldots,\langle y^*, F(\overline x,u_i)-F(\overline x,\overline u)\rangle). \end{array} $$ It follows that there are $\xi_i\le 0,\; i=0,\ldots,k$ such that $$ \begin{array}{l} 0\in \partial_G {\mathcal L}(\lambda,y^*,\cdot,\overline u)(\overline x)+N_G(S,\overline x);\\ \lambda=-\xi_0\ge 0;\\ \langle y^*,F(\overline x,u_i)-F(\overline x,\overline u)\rangle = \xi_i\ge 0,\quad i=1,\ldots,k. \end{array} $$ The relations remain obviously valid if we replace $\lambda,y^*$ by $r\lambda,ry^*$ with some positive $r$. Thus for any finite collection $(u_1,\ldots,u_k)\subset U$ we can find a pair of multipliers $(\lambda,y^*)$ satisfying the three above mentioned relations and the normalization condition $\lambda + \| y^*\|=1$. Let $\Omega(u_1,\ldots,u_k)$ be the weak$^*$-closure of all such pairs. Then $\Omega(u_1,\ldots,u_k)$ is weak$^*$-compact and by Proposition \ref{regfred} does not contain zero. It remains to notice that the increase of the set $(u_1,\ldots,u_k)$ may result only in decrease of $\Omega(u_1,\ldots,u_k)$ and therefore there is a nonzero pair $\lambda,y^*$ common to all sets $\Omega(u_1,\ldots,u_k)$. \endproof \subsection{Genericity in tame optimization.} Here by ``tame optimization" we mean optimization problems with semi-algebraic data. We consider the same class of problems as in (\ref{P1}). This time however we are interested in the effects of perturbations and shall work with a family of problems depending on a parameter $p$: \begin{equation}\label{P3a} {\rm minimize}\quad f(x,p),\quad {\rm s.t.}\quad F(x,p)\in Q,\; x\in C. \end{equation} \noindent Here $x$ is an argument in the problem and $p$ is a parameter. So subdifferentials and derivatives that will appear below are always with respect to $x$ alone. If $p$ is fixed, then we denote the corresponding problem by ${\mathcal P}_p$. Before we continue, we have to mention that for a semi-algebraic set $S\subset I\!\!R^n$ the properties $\bullet$ \ $S$ is a set of first Baire category in $I\!\!R^n$; $\bullet$ \ $S$ has $n$-dimensional Lebesgue measure zero; $\bullet$ \ $\dim S< n$ \noindent are equivalent. Thus, when we deal with semi-algebraic objects e.g. in $I\!\!R^k$, the word ``generic" means "up to a semi-algebraic set of dimension smaller than $k$." We shall assume that $p$ is taken from an open set $P\subset I\!\!R^k$ and, as before, $x\inI\!\!R^n$ and $F$ takes values in $I\!\!R^m$. 
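Let us note in passing that the equivalence of the three properties above is a purely semi-algebraic (more generally, tame) phenomenon: for arbitrary sets they are unrelated, as $I\!\!R^k$ can even be partitioned into a set of the first Baire category and a set of Lebesgue measure zero. It is the semi-algebraic structure that makes the word ``generic'' unambiguous here.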
Our main assumption is that \vskip 1mm \centerline{\it the restriction $F|_C(x,p)$ of $F$ to $C$ is transversal to $Q$.} \vskip 1mm \noindent This is definitely the case when $k=m$ and $F(x,p) = F(x)-p$. As to $F$ itself, we assume that it is continuous with respect to $(x,p)$ and locally Lipschitz in $x$. The sets $C$ and $Q$ as usual are assumed closed. \begin{theorem}[generic normality]\label{F8} Under the stated assumptions for a generic $p\in P$, the mapping $F|_C(\cdot,p)$ is transversal to $Q$. Thus for a generic $p$ the problem ${\mathcal P}_p$ is normal. \end{theorem} \proof The first statement is immediate from Theorem \ref{thomset}, while the second from the comments following the statement of Theorem \ref{F7}.\endproof Let us call a point $x$ feasible in ${\mathcal P}_p$ a {\it critical point} of the problem if the non-degenerate Lagrangian necessary condition of 8.2.1 $$ 0\in\partial(f + (y^*\circ F|_C))(x,p),\quad y^*\in N(Q,F(x,p)) $$ is satisfied. In this case the value of $f$ at $x$ is called a {\it critical value} of ${\mathcal P}_p$. \begin{theorem}[generic finiteness of critical values]\label{F9} If under the stated assumptions, ${\mathcal P}_p$ is normal, then the problem may have only finitely many critical values. Thus there is an integer $N$ such that for a generic $p$ the number of critical values in the problem does not exceed $N$. \end{theorem} \proof Consider the function $$ {\mathcal L}_p(x,y,y^*)= f(x,p) +\langle y^*,F|_C(x,p)-y\rangle + i_Q(y). $$ As follows from the standard calculus rules, $$ \partial{\mathcal L}_p(x,y,y^*) = \partial(f+ y^*\circ F|_C)(x,p)\times (N(Q,y)-y^*)\times\{F(x,p)-y\}. $$ Thus, $(x,y,y^*)$ is a critical point of ${\mathcal L}_p$ if and only if $F(x,p)=y$, $0\in N(Q,y)-y^*$, that is $y\in Q$ and $y^*\in N(Q,y)$, \ and $0\in \partial(f+y^*\circ F|_C)(x,p)$. In other words, $(x,y,y^*)$ is a critical point of ${\mathcal L}_p$ if and only if $x$ is a feasible point in ({\bf P}), $y=F(x,p)$ and the necessary optimality condition is satisfied at $x$ with $y^*$ being the Lagrange multiplier. We also see that in this case ${\mathcal L}_p(x,y,y^*)=f(x,p)$. In other words, critical values of the problem are precisely critical values of ${\mathcal L}$. By the Sard theorem ${\mathcal L}_p$ may have at most finitely many critical values, whence the theorem.\endproof The last result we are going to present here has been so far proved only under some additional assumptions on elements of the problem. We shall explain it for the classical case, although semi-algebraic nature of the data remains crucial. \begin{theorem}[generic finiteness of critical points]\label{F10} Assume that $p=(q,y)$ with $q\inI\!\!R^n$ and $y\inI\!\!R^m$, $f(x,p)= f(x)-\langle q,x\rangle$, $F(x,p)=F(x)-y$ with $f(x)$ and $F(x)$ both continuously differentiable Assume further that the sets $C$ and $Q$ are closed and convex. Then there is an integer $N$ such that for a generic $p$ the number of pairs $(x,y^*)$, such that $x$ is a critical point in ${\mathcal P}_p$ and $y^*$ a corresponding Lagrange multiplier does not exceed $N$. \end{theorem} The theorem follows from the two results below that contain valuable information about geometry of subdifferential mappings of semi-algebraic functions. \begin{proposition}[dimension of the subdifferential graph \cite{DL12}]\label{dime} The dimension of the graph of the subdifferential (no matter which, Fr\'echet, limiting or Clarke) mapping of a semi-algebraic function on $R^n$ is $n$. 
\end{proposition} \begin{proposition}[finiteness of preimage \cite{AI11b,DL12}]\label{fine} Let $F:I\!\!R^n\rightrightarrows I\!\!R^n$ be a semi-algebraic set-valued mapping such that $\dim({\rm Graph}~ F)\le n$. If $y$ is a regular value of $F$, then $F^{-1}(y)$ contains at most finitely many elements. Thus, there is an integer $N$ such that for a generic $y$ the number of elements in $F^{-1}(y)$ cannot exceed $N$. \end{proposition} To see how the propositions lead to the proof of the theorem, we note first that $D^*F|_C(x)(y^*)=F'(x)y^*+N_C(x)$ if $x\in C$, $F$ is smooth and $C$ convex. By Theorem \ref{tran1} $F|_C$ is transversal to $Q$ if and only if \begin{equation}\label{P10} x\in C,\; F(x)\in Q+y,\; 0\in F'(x)y^*+N_C(x),\; y^*\in N(Q,F(x)-y)\; \Rightarrow y^*=0, \end{equation} and by Theorem \ref{thomset} this holds for a generic $y$. Consider the function $$ g(x,y)= f(x) + i_C(x)+i_Q(F(x)-y). $$ By Proposition \ref{dime} the dimension of the graph of its subdifferential is $n+m$. Then so is the dimension of the graph of the mapping $$ \Gamma (x,y^*)= \{(q,y):\; (q,y^*)\in\partial g(x,y) \}. $$ Now by the Sard theorem a generic $(q,y)$ is a regular value of $\Gamma$, so (Proposition \ref{fine}) for a generic $(q,y)$ there are finitely many $(x,y^*)$ such that $(q,y)\in \Gamma(x,y^*)$. Finally, if for such $(q,y)$ the qualification condition (\ref{P10}) is satisfied, then $$ \partial g(x,y) = \{(q,y^*):\; q\in f'(x)+ (y^*\circ F(\cdot))'(x) + N(C,x),\; y^*\in N(Q,F(x)-y)\} $$ (even if $Q$ is not convex - see again Exercise 10.8 in \cite{RW}) which in particular means that $x$ is a critical point of ${\mathcal P}_p$ and $y^*$ is a Lagrange multiplier in the problem. \subsection{Method of alternating projection.} This is one of the most popular methods to solve the feasibility problem, due to its simplicity and efficiency. The feasibility problem in its simplest form consists in finding a common point of two sets, say $Q$ and $S$. The recipe offered by the method of alternating projection is the following: starting with a certain $x_0$, we choose for $k=0,1,\ldots$ $$ x_{2k+1}\in\pi_Q(x_{2k}), \quad x_{2k+2}\in\pi_S(x_{2k+1}), $$ where $\pi_Q(x)$ is the collection of points of $Q$ closest to $x$ etc. Von Neumann was the first to show, in the mid-30s (see \cite{JVN50}), that in the case of two closed subspaces of a Hilbert space the method converges to a certain point of their intersection (depending of course on the starting point). Later in the 60s Bregman \cite{LB65} and Gubin-Polyak-Raik \cite{GPR67} applied it to convex subsets in $I\!\!R^n$. In particular it was shown in \cite{GPR67} that the convergence is linear if relative interiors of the sets meet. Later Bauschke and Borwein \cite{BB96} proved linear convergence if the sets are subtransversal at any common point. But in computational practice the method was successfully applied even for nonconvex sets. The first explanation was given by Lewis, Luke and Malick \cite{LLM09}: if at a certain point $\overline x$ in the intersection the sets are transversal and at least one of the sets is not ``too non-convex'' in a certain sense (super-regular in the terminology of the authors), then alternating projections converge linearly to a certain point common to the sets (not necessarily $\overline x$), provided the starting point is sufficiently close to $\overline x$. And very recently it was shown by Drusvyatskiy, Ioffe and Lewis \cite{DIL15} that transversality alone guarantees linear convergence.
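The simplest model in which the quantitative role of transversality can be seen (a textbook illustration, not a result from the quoted papers) is that of two distinct lines $Q$ and $S$ through the origin in $I\!\!R^2$ meeting at an angle $\alpha\in(0,\pi/2]$. The lines are transversal at the origin and, in the notation of the proof below, $\theta=\cos\alpha$. If the current iterate lies on one of the lines at distance $d$ from the origin, its projection onto the other line lies at distance $d\cos\alpha$, so every full cycle of two projections multiplies the distance to the common point by $\cos^2\alpha$: linear convergence whose rate degenerates to $1$ exactly when transversality degenerates, that is, as $\alpha\to 0$.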
In fact linear convergence was proved in \cite{DIL15} under a substantially weaker condition of ``intrinsic transversality'' of the sets, but we believe that the geometric essence of the phenomenon is captured by the transversality $\Rightarrow$ linear convergence implication. The question whether linear convergence is guaranteed by subtransversality, as in the convex case, remains open (see \cite{HL12}). Here is a short proof of linear convergence under the transversality assumption. Set $$ \varphi (x,y)=i_Q(x)+i_S(y) + \| x-y\|. $$ We claim that if $Q$ and $S$ are transversal at $\overline x\in Q\cap S$, then there are $\kappa>0$ and $\delta>0$ such that for any $x\in Q,\ y\in S$ close to $\overline x$ $$ \max\{|\nabla\varphi(\cdot,y)|(x),|\nabla \varphi (x,\cdot)|(y)\}\ge \kappa. $$ To this end, we first note that by Theorem \ref{F5} $$ \theta = \sup\{\langle u,v\rangle:\; u\in N(Q,\overline x),\; v\in -N(S,\overline x),\; \|u\|=\| v\|=1\}<1. $$ Fix a certain $\kappa\in(0,1)$ and assume that there are sequences $(x_n)\subset Q,\ (y_n)\subset S$, $x_n\neq y_n$, converging to $\overline x$ and such that $$ |\nabla\varphi(\cdot,y_n)|(x_n)< \kappa,\quad |\nabla \varphi (x_n,\cdot)|(y_n)<\kappa, $$ that is, the functions \begin{equation}\label{min} x\mapsto \varphi (x,y_n)+ \kappa\| x-x_n\|\quad{\rm and}\quad y\mapsto \varphi(x_n,y)+\kappa\|y-y_n\| \end{equation} attain local minima respectively at $x_n$ and $y_n$. This means that \begin{equation}\label{3} 0\in w_n^*+\frac{x_n-y_n}{\|x_n-y_n\|}+\kappa B;\qquad 0\in z_n^*+\frac{y_n-x_n}{\|x_n-y_n\|}+\kappa B \end{equation} for some $w_n^*\in N(Q,x_n)$ and $z_n^*\in N(S,y_n)$. Thus, for any limit point $(w^*,z^*)$ of $(w_n^*,z_n^*)$, we have $$ w^*= e+a,\qquad z^* = -e+b, $$ where $\| e\|=1,\; \| a\|\le\kappa,\; \| b\|\le \kappa$. Consequently $$ \theta\ge\frac{\langle e+a,e+b\rangle}{\|e+a\|\|e+b\|}\ge \frac{(1-\kappa)^2}{(1+\kappa)^2} $$ and we get \begin{equation}\label{ineq} \kappa\ge\frac{1-\sqrt{\theta}}{1+\sqrt{\theta}}. \end{equation} Hence no such sequences can exist if $\kappa<(1-\sqrt{\theta})/(1+\sqrt{\theta})$, and the claim is proved. Then $\pi_Q(y)={\rm argmin}~\varphi(\cdot,y)$ and the method of alternating projections can be written as follows: $$ x_{n+1}\in {\rm argmin}~\varphi(x_n,\cdot);\quad x_{n+2}\in{\rm argmin}~\varphi(\cdot,x_{n+1}). $$ We obviously have $|\nabla\varphi(x_n,\cdot)|(x_{n+1})=0$. For a given $x$ (not necessarily in $Q$), consider the function $\psi_x(y)=i_S(y)+\|x-y\|$. For any $c\in (0,1)$ the condition $|\nabla\psi_x|(x_{n+1})\le c$ obviously holds if $$ \langle x-x_{n+1},x_n-x_{n+1}\rangle\ge \sqrt {1- c^2}\| x-x_{n+1}\|\| x_n-x_{n+1}\|. $$ Take a $c<\kappa$, and let $K_c$ be the collection of $x$ satisfying the above inequality. This is an ice-cream cone with vertex at $x_{n+1}$. If $x\in Q\cap K_c$, then $|\nabla\varphi(\cdot,x_{n+1})|(x)\ge \kappa>c$. On the other hand, as is easy to see, the distance from $x_n$ to the boundary of $K_c$ is precisely $cr$, where $r=\| x_n-x_{n+1}\|$. Applying the basic lemma for error bounds (Lemma \ref{baslem}), we conclude that there is an $x\in Q$ with $\varphi(x,x_{n+1})\le \varphi(x_n,x_{n+1}) - c\kappa\| x_{n+1}-x_n\|$. It follows that $$ \|x_{n+2}-x_{n+1}\|= \varphi(x_{n+2},x_{n+1})\le (1-c^2)\| x_{n+1}-x_n\| $$ which is linear convergence of $(x_n)$. \subsection{Generalized equations.} By a generalized equation we mean the relation $$ 0\in f(x)+F(x), $$ where $f$ is a single-valued and $F: X\rightrightarrows Y$ a set-valued mapping.
Variational inequalities and necessary optimality conditions in constraint optimization with smooth cost and constraint functions are typical examples. The problem discussed in the theorem below is what happens with the set of solutions of the generalized equation if the single-valued term is slightly perturbed. \begin{theorem}[implicit function for generalized equations]\label{imgeq} Let $X$, $Y$ be metric spaces, and let $Z$ be a normed space. Consider the generalized equation \begin{equation}\label{3.315} 0\in f(x,p)+F(x), \end{equation} where $f: X\times P\to Z$ and $F: X\rightrightarrows Z$. Let $(\overline x,\bar p)$ be a solution to the equation. Set $\overline z=-f(\overline x,\bar p)$ and suppose that the following two properties hold: (a) Either $X$ or the graph of $F$ is complete in the product metric and $F$ is regular near $(\overline x,\overline z)$ with ${\rm sur} F(\overline x|\overline z)>r$; (b) there is a $\rho>0$ such that $f$ is continuous on $\overset{\circ}B (\overline x,\rho)\times \overset{\circ}B (\bar p,\rho)$ and $ f(\cdot,p)$ satisfies on $\overset{\circ}B (\overline x,\rho)$ the Lipschitz condition with constant $\ell<r$ for all $p\in\overset{\circ}B (\bar p,\rho)$. Let $S(p)$ stand for the solution mapping of (\ref{3.315}). Then $$ d(x, S(p'))\le (r-\ell)^{-1}\| f(x,p)-f(x,p')\|. $$ if $x\in S(p)$ is close to $\overline x$ and $p, p'$ are sufficiently close to $\bar p$. Thus, if $f(x,\cdot)$ satisfies the Lipschitz condition with constant $\alpha$ on a neighborhood of $\bar p$ for all $x\in\overset{\circ}B(\overline x,\rho)$, then $S(\cdot)$ has the Aubin property near $(\bar p,\overline x)$ with ${\rm lip} S(\bar p|\overline x)\le\alpha(r-\ell)^{-1}$. Finally, if in addition $F$ is strongly regular near $(\overline x,\overline z)$, then $S(\cdot)$ has a Lipschitz localization $s(\cdot)$ at $(\bar x,\bar y)$ with Lipschitz constant not greater than $\alpha(r-\ell)^{-1}$, so that $$ d(s(p), s(p'))\le (r-\ell)^{-1}\| f(s(p),p)-f(s(p),p')\|\le \alpha(r-\ell)^{-1}d(p,p'). $$ \end{theorem} Note that in view of Theorem \ref{critgen} condition (a) is equivalent to the assumption that there are $r>0$ and $\xi >0$ such that $|\nabla_{\xi}\varphi_z|(x,v)>r$ (where $\varphi_z(x,v)=d(z,v)+i_{{\rm Graph}~ F}(x,v)$) if e.g. $d(z,\bar p)<\rho,\; \| z\|<\rho$ and $z\neq v\in F(x)$. \proof Set $G(x,p)=f(x,p)+F(x)$ and let $H(p,z)= (G(\cdot,p))^{-1}(z)$, so that $S(p) = H(p,0)$. As the Lipschitz constants of functions $f(\cdot,p)$ are bounded by the same $\ell$ for all $p\in \overset{\circ}B(\bar p,\rho)$, it follows from Theorem \ref{milt} that there is a $\delta>0$ such that for every $p\in \overset{\circ}B (\bar p,\rho)$ the inequality $d(x,H(p,z))\le (r-\ell)^{-1}d(z,G(x,p))$ holds if $d(x,\overline x)<\delta$ and $\|z-z(p)\|<\delta$, where $z(p)=f(\overline x,p)-f(\overline x,\bar p)\in G(\overline x,p)$. As $f$ is continuous, we can choose $\lambda>0$ such that $\| z(p)\|<\delta$ for $p\in\overset{\circ}B(\bar p,\lambda)$. For such $p$ we have $0\in\overset{\circ}B(z(p),\delta)$ and therefore if $d(\bar p,p')<\lambda$, we get, taking into account that $0\in f(x,p)+F(x)$ by the assumption, $$ \begin{array}{lcl} d(x,S(p'))&\le& (r-\ell)^{-1}d(0,G(x,p'))= (r-\ell)^{-1}d(0,f(x,p')+F(x))\\ &=& (r-\ell)^{-1}d(-f(x,p'),F(x))\le(r-\ell)^{-1}\|f(x,p')-f(x,p)\| \end{array} $$ This proves the first part of the theorem. The second now follows from Theorem \ref{mimpl}. \endproof The concept of generalized equation was introduced by Robinson in \cite{SMR79}. 
The theorem proved in \cite{SMR79,SMR80} corresponded to $f$ continuously differentiable in $x$ and $F$ being either a maximal monotone operator or $F(x)=N(C,x)$, where $C$ is a closed convex set. We refer to \cite{DR} for further results and bibliographic comments on generalized equations which is one of the central objects of interest in the monograph. An earlier version of part (a) of the theorem with a less precise estimate can be found in \cite{KK02} (Theorem 4.9). Part (b) of the theorem relating to strong regularity is the basic statement of Theorem 5F.4 of \cite{DR} (generalizing the earlier results of Robinson in \cite{SMR80,SMR91}; see also \cite{ALD95} for an earlier result). Our proof however is different: here the theorem appears as a direct consequence of Milyutin's perturbation theorem. Note that in most of the related results in \cite{DR} it is assumed (following \cite{SMR91}) that there exists a ``strict estimator $h(x)$ for $f$ of modulus $\ell$" such that ${\rm sur}(F+h)(x|\overline y+h(\overline x))\ge r$. This is a fairly convenient device for practical purpose but it adds no generality to the result as the case with $h$ reduces to the setting of the theorem if we replace $F+h$ by $F$ and $f-h$ by $f$. \subsection{Variational inequalities over polyhedral sets.} {\it Variational inequality} is a relation of the form \begin{equation}\label{vareq} 0\in \varphi(x)+N(C,x), \end{equation} where $\varphi:I\!\!R^n\toI\!\!R^n$ is a single-valued mapping and $C\subset I\!\!R^n$ is a convex set. If $C$ is a cone, it is equivalent to $$ x\in K,\quad F(x)\in K^{\circ},\quad \langle x,F(x)\rangle =0. $$ The problem of finding such an $x$ is known as a {\it complementarity problem} (see e.g. \cite{FP}). Problems of this kind typically appear in nonlinear programming in connection with necessary optimality conditions. Consider for instance the problem \begin{equation}\label{nesop} {\rm minimize}\quad f_0(x)\quad{\rm s.t}\quad f_i(x)\le 0,\ i=1,\ldots,k,\ f_i(x)\le 0,\ i=k+1,\ldots,m. \end{equation} with $f_0,\ldots,f_m$ twice continuously differentiable. If $\overline x$ is a solution of the problem, then (assuming that the problem is normal and setting $f = (f_1,\ldots,f_m)$) there is a $\overline y\in I\!\!R^m$ such that $$ \nabla f_0(\overline x)+\langle \overline y,\nabla f(\overline x)\rangle =0. $$ Setting $$ \varphi(x,y)=\left(\begin{array}{c} \nabla f_0(\overline x)+\langle \overline y,\nabla f(\overline x)\rangle,\\ f(x)\end{array}\right);\qquad C=I\!\!R^n\times I\!\!R^m_+, $$ we see that $(\bar x,\bar y)$ solves (\ref{vareq}) (with $x$ replaced by $(x,y)$). Consider the set valued mapping $\Psi(x)= \varphi(x)+N(C,x)$ associated with (\ref{vareq}) assuming that $C$ is a convex polyhedral set. What can be said about regularity of such mapping near a certain $(\bar x,\bar y)\in{\rm Graph}~\Phi$? Applying Milyutin's perturbation theorem (Theorem \ref{milt}) and Theorem \ref{strlip} and taking into account that the Lipschitz constant of $h\to \varphi(x+h)-\phi'(x)h$ at zero is zero, we immediately get \begin{proposition}\label{lvar} Let $\overline y\in\Psi(\overline x)$ for some $\overline x\in C$. Set $A=\varphi'(\overline x)$ and $\hat{\Psi}(x) = Ax +N(C-\overline x,x)$. Then $\Psi$ is (strongly) regular near $(\bar x,\bar y)$ if and only if $\hat{\Psi}$ is (strongly) regular near $(0,0)$ and ${\rm sur}\Psi(\overline x|\overline y)={\rm sur}\hat{\Psi}(0|0)$. 
\end{proposition} In other words, the regularity properties of $\Psi$ are the same as of its ``linearization" $\hat{\Psi}$. Therefore in what follows we can deal only with the {\it linear variational inequality} \begin{equation}\label{lvareq} 0\in Ax+N(C,x) \end{equation} and the associated mapping $$ \Phi(x) = Ax+N(C,x). $$ The key role in our analysis is played by the concept of a {\it face} of a polyhedral set $C$ which is any closed subset $F$ of $C$ such that any segment $\Delta\subset C$ containing a point $x\in F$ in its interior lies in $F$. A face of $C$ {\it proper} if it is different from $C$. We refer to \cite{RTR} for all necessary information about faces. The following facts are important for our discussion: $\bullet$ the set ${\mathcal F}_C$ of all faces of $C$ is finite; $\bullet$ $F\in{\mathcal F}_C$ if and only if there is a $y\inI\!\!R^n$ such that $F=\{x\in C:\; \langle y,x\rangle\ge\langle y,u\rangle,\;\forall\; u\in C \}$; $\bullet$ if $F,F'\in{\mathcal F}_C$ and $F\cap {\rm ri}~\!F'\neq\emptyset$, then $F'\subset F$; a proper face of $C$ lies in the relative boundary of $C$; $\bullet$ if $F\in{\mathcal F}_C$ and $x_1,\ x_2$ belong to the relative interior of $F$, then $T(C,x_1)= T(C,x_2)$ and $N(C,x_1)= N(C,x_2)$. The last property allows to speak about the tangent and normal cones to $C$ at $F$ which we shall denote by $T(C,F)$ and $N(C,F)$. It is an easy matter to see that \begin{equation}\label{dimpol} \dim F+\dim N(C,F)= n;\qquad \dim(F+N(C,F))=n. \end{equation} For any $x\in C$ denote by $F_{\min}(x)$ the minimal element of ${\mathcal F}_C$ containing $x$. The is straightforward \begin{equation}\label{ri} x\in F\in {\mathcal F}_C, \;\&\; F=F_{\min}(x)\; \Leftrightarrow\; x\in{\rm ri}~\! F. \end{equation} \begin{proposition}\label{nonsing} If $\Phi$ is regular near $(x,y)$ and $F= F_{\min}(x)$, then\\ $$\dim (A(F)+ N(C,F))=n.$$ In particular, $A$ is one-to-one on $F$. \end{proposition} \proof If $\dim F=0$, then $x$ is an extreme point of $C$ in which case $T(C,x)$ is a convex cone containing no lines and its polar therefore has nonempty interior. On the other hand, if $x\in{\rm int}~ C$, then $N(C,u)=\{0\}$ for all $u$ of a neighborhood of $x$ and $\Phi(u)=Au$ for such $u$. So by regularity $A$ is an isomorphism. Thus in the sequel we may assume that the dimensions of both $F$ and $N(C,F)$ are positive. By changing $(x,y)$ slightly, we can guarantee that $y$ belong to the relative interior of $N(C,F)$. Let $\varepsilon>0$ be so small that the distances from $x$ and $y$ to the relative boundaries of $F$ and $N(C,F)$ are greater than $\varepsilon$. Then any $(u,v)$ such that $u\in C$, $v\in N(C,u)$, $\| u-x\|<\varepsilon,\ \| v-y\|<\varepsilon$ must belong to $F\times N(C,F)$. This means that $\Phi(B(x,\varepsilon))\cap B(y,\varepsilon)\subset A(F)+N(C,F)$ and the result follows from (\ref{dimpol}). Indeed, the dimension equality is immediate from the last inclusion. On the other hand, if $A$ is not one-to one on $F$, then $\dim A(F)<\dim F$ and by (\ref{dimpol}) $\dim A(F)+\dim N(C,F)<n$. \endproof Let $C\subset I\!\!R^n$ be a convex polyhedron, and let $F$ be a proper face of $C$. Let $L$ be the linear subspace spanned by $F$ and $M$ the linear subspace spanned by $N(C,F)$. These subspaces are complementary by (\ref{dimpol}) and orthogonal. By Proposition \ref{nonsing} $A(L)$ and $M$ are also complementary subspaces if $\Phi$ is regular near any point of the graph. Let $\pi_M$ be the projection onto $M$ parallel to $A(L)$, so that $\pi_M(A(F))=0$. 
Set $K_M=(T(C,F))\cap M$, and let $A_M$ be the restriction of $\pi_M\circ A$ to $M$. Then $K_M$ is a convex polyhedral cone in $M$ and its polar $K_M^{\circ}$ (in $M$) coincides with $N(C,F)$. \begin{definition}\label{fact} {\rm The set-valued mapping $\Phi_M(x) = A_Mx + N(K_M,x)$ viewed as a mapping from $M$ into $M$ will be called {\it factorization of $\Phi$ along $F$}. } \end{definition} Observe that the graph of a factorization mapping is a union of convex polyhedral cones. \begin{proposition}\label{regfact} If $\Phi$ is regular near $(\overline x,A\overline x)$ for some $\overline x\in C$, then the factorization of $\Phi$ along $F=F_{\min}(\overline x)$ is globally regular on $I\!\!R^n$. \end{proposition} \proof Set $K_1=T(C,F)=T(C,\overline x)$ and consider the mapping $\Phi_1(x)= Ax+N(K_1,x)$. By Proposition \ref{loctan}, $\Phi_1(x)=\Phi (\overline x+x)- A\overline x$ for $x$ close to zero. Therefore $\Phi_1$ is regular near $(0,0)$, hence globally regular by Proposition \ref{loctan}. Observe that $K_1=K_M+L$ and $K_1^{\circ}= N(K,F)$ and consequently $N(K_1,x)\subset N(K,\overline x)=N(K,F)$ for any $x\in K_1$. As $\Phi_1$ is globally regular, there is a $\rho>0$ such that $d(x,\Phi_1^{-1}(z))\le \rho d(z,\Phi_1(x))$ for all $x,z\inI\!\!R^n$. Take now $x,z\in M$. We have (taking into account that $N(K_M,x)=N(K_1,x+\xi)$ for any $\xi\in L$ and $A_Mx = A(x+\xi)$ for some $\xi\in L$) $$ \begin{array}{lcl} d(z,\Phi_M(x))&=&\inf\{\|z-A_Mx-y\|:\; y\in N(K_M,x) \}\\ &\ge& \inf\{\|z-A(x+\xi)-y\|:\;\xi\in L,\; y\in N(K_1,x+\xi) \}\\ &=& \displaystyle\inf_{\xi\in L}d(z,\Phi_1(x+\xi))=d(z,\Phi_1(w)) \end{array} $$ for some $w\in x+L$. On the other hand, there is a $w'\inI\!\!R^n$ such that $z\in\Phi_1(w')$ and $\| w-w'\|=d(w,\Phi_1^{-1}(z))$. Let $x'$ be the orthogonal projection of $w'$ to $M$. We have $z=Aw'+y$ for some $y\in N(K_1,w')\subset M$. Therefore $Aw'\in M$ and moreover $A_Mx'=Aw'$. The latter is a consequence of the following simple observation: \begin{equation}\label{observe} v=Aw\in M,\quad x\in M,\; x\perp(w-x)\; \Rightarrow\; A_Mx= v. \end{equation} Indeed, $z=w-x\in L$, hence $Ax= Aw+Az= v+Az$ and, as $v\in M$ and $Az\in A(L)$ we have $\pi_M(Ax)= v +\pi_M(Az)=v$. It follows, as $N(K_M,x')= N(K_1,w')$), that $z\in\Phi_M(x')$ and $$ d(x,\Phi_M^{-1}(z))\le \|x-x'\|\le\| w-w'\|=d(w,\Phi_1^{-1}(z))\le\rho d(z,\Phi_1(w))\le d(x,\Phi_M(x)), $$ that is $\Phi_M$ is regular on $M$ (with the rate of metric regularity not greater than $\rho$). \endproof The following theorem is the key observation that paves way for proofs of the main result. \begin{theorem}\label{geom} Let $C=K$ be a convex polyhedral cone. If $\Phi$ is regular near $(0,0)$ (hence globally regular by Proposition \ref{globa}), then $A(K)\cap K^{\circ} =\{0\}$. \end{theorem} \proof The result is trivial if $n=1$. Assume that it holds for $n=m-1$, and let $m=n$. Note that the inclusion $A(K)\subset K^{\circ}$ can hold only if $K=\{0\}$. Indeed, if the inclusion is valid, then $\Phi(x)\in A(K)+ K^{\circ}=K^{\circ}$ for any $x\in K$, so by regularity $K^{\circ}$ must coincide with the whole of $I\!\!R^n$ and hence $K=\{0\}$. Thus if there is a nonzero $u\in A(K)\cap K^{\circ}$, we can harmlessly assume that $u$ is a boundary point of $K^{\circ}$ and there is a nonzero $w\in N(K^{\circ},u)$. Then $w\in K$ and $u\in N(K,w)$. Let $F=F_{\min}(w)$ so that $u\in N(K,F)$. Let as before, $L$ be the linear subspace spanned by $F$ and $M$ the linear subspace spanned by $N(K,F)$. 
These subspaces are complementary by (\ref{dimpol}) and orthogonal. By Proposition \ref{nonsing} $A(L)$ and $M$ are also complementary subspaces. Clearly, $u$ does not belong either to $L$ or to $A(L)$, the latter because otherwise the dimension of $A(F)+ N(K,F)$ would be strictly smaller than $n$. Consider the factorization $\Phi_M$ of $\Phi$ along $F$. Then $u\in K_M^{\circ}$ by definition. But as follows from (\ref{observe}) $u$ also belongs to $A_M(K_M)$. As $\Phi_M$ is regular by Proposition \ref{regfact} and $\dim M<m$, the existence of such a $u$ contradicts to the induction hypothesis. \endproof We are ready to state and proof the main result of the subsection. \begin{theorem}[regularity implies strong regularity]\label{main} Let $C$ be a polyhedral set and $\Phi(x) = Ax+N(C,x)$. If $\Phi$ is globally regular then the inverse mapping $\Phi^{-1}$ is single-valued and Lipschitz on $I\!\!R^n$. Thus, global regularity of $\Phi$ implies global strong regularity. \end{theorem} In other words, the solution map of $y\in \Phi(x)$ is everywhere single-valued and Lipschitz. \proof We only need to show that $\Phi^{-1}$ is single-valued: the Lipschitz property will then automatically follow from regularity. The theorem is trivially valid if $n=1$. Suppose it is true for $n\le m-1$ and consider the case $n=m$. We have to show that, given a convex polyhedron $C\inI\!\!R^m$ and a linear operator $A$ in $I\!\!R^m$ such that $\Phi(x)= Ax+ N(C,x)$ is globally regular on $I\!\!R^n$, the equality $Ax+y=Au+z$ for some $x,u\in C$, $y\in N(C,x)$, $z\in N(C,u)$ can hold only if $x=u$ and $y=z$. \noindent {\bf Step 1}. To begin with we observe that the equality $Au=Ax+y$ for some $u,x\in C$ and $y\in N(C,x)$ may hold only if $u=x$. Indeed, $u-x\in T(C,x)$. The same argument as in the proof of Proposition \ref{regfact} shows that $\Phi_1(w)= Aw+ N(T(C,x),w)$ is also globally regular and therefore by Theorem \ref{geom} $A(T(C,x))\cap N(C,x)=\{0\}$. It follows that $A(u-x)= y=0$. But regularity of $\Phi_1$ implies (by Proposition \ref{nonsing}) that $A$ is one-to one on $T(C,x)$, hence $u=x$. \noindent {\bf Step 2}. Assume now that for some $x,u\in C,\; u\neq x$, the equality $Ax+y=Au+z$, or $A(u-x)=y-z$, holds with $y\in N(C,x)$, $z\in N(C,u)$. We first show that this is impossible if $x\in F_{\min}(u)$. If under this condition $x\in{\rm ri}~\!C$, then $u$ is also in ${\rm ri}~\!C$ which means that $N(C,x)=N(C,u)$ coincides with the orthogonal complement $E$ to the subspace spanned by $C-C$. We have $y-z\in E$ and $u-x\in C-C$. By Proposition \ref{nonsing} $A(u-x)=y-z=0$ and the second part of the proposition implies that $u=x$. Let now $F=F_{\min}(x)$ be a proper face of $C$. Then $F\subset F_{\min}(u)$ and therefore $z\in N(C,F)$. Denote as before by $L$ the subspace spanned by $F$ and by $M$ the subspace spanned by $N(C,F)$, and let $\Phi_M$ be the factorization of $\Phi$ along $F$. Set $v=A(u-x)=y-z$. Then $v\in M$ as both $y$ and $z$ are in $N(C,F)$. Let $w$ be the orthogonal projection of $u-x$ onto $M$. Then by (\ref{observe}) $Aw=v$ and therefore $A_Mw=v$. Thus (recall that $y,z\in M$) $$ A_Mw+z= (\pi_M\circ A)(u-x) +z=\pi_M(A(u-x)+z)= \pi_My=y. $$ On the other hand, it is clear that $y\in N(K_M,0)$ and $z\in N(K_M,w)$. Indeed, $z\in N(T(C,x),u-x)$ (since $\langle z,v-x\rangle \le\langle z,u-x\rangle$ for all $v\in C$ on the one hand and, as we have seen, $z\in N(C,x)$, on the other) and therefore $z\in N(K_M,w)$ as $z\in M$ and $w-(u-x)\in L$. 
As $\dim M<m$, we conclude by the induction hypotheses that $w=0$, hence $u-x\in L$. But $A(u-x)=y-z\in M$ and a reference to proposition \ref{nonsing} again proves that $u=x$. \noindent{\bf Step 3}. It remains to consider the case when neither $x$ nor $u$ belongs to the minimal face of the other. Let $\kappa$ be the modulus of metric regularity of $\Phi$ or any bigger number. Choose $\varepsilon>0$ so small that the ball of radius $(1+\kappa)\varepsilon$ around $x$ does not meet any face $F\in{\mathcal F}_C$ not containing $x$. This means that $x\in F_{min}(w)$ whenever $w\in C$ and $\| w- x\|\le (1+\kappa)\varepsilon$. Let further $N$ be an integer big enough to guarantee that $\delta= N^{-1}\| y\|<\varepsilon$. Regularity of $\Phi$ allows to construct recursively a finite sequence of pairs $(u_k,z_k), \ k=0,1,\ldots,m$ such that $$ (u_0,z_0)=(u,z),\quad z_k\in F_{max}(u_k),\quad u_k+z_k= x+(1-m^{-1}k)y,\quad \|u_k-u_{k-1}\|\le \kappa\delta. $$ Then $u_N+z_N= x$. As follows from the result obtained at the first step of the proof, this means that $u_N=x$. This in turn implies, as $u_0\neq x$, that for a certain $k$ we have $u_k\neq x,\; \| u_k-x\|\le \kappa\delta< \kappa\varepsilon$. By the choice of $\varepsilon$ this implies that $x\in F_{min}(u_k)$. But in this case the result obtained at the second step excludes the possibility of the equality $u_k+z_k= x+(1-m^{-1}k)y$ unless $u_k=x$. So we again get a contradiction that completes the proof.\endproof The material presented in this subsection is a part of my recent paper \cite{AI15a} which contains also a proof (based on a similar ideas) of another principal result concerning uniqueness and Lipschitz behavior of solutions to variational inequalities over polyhedral sets due to Robinson \cite{SMR92}. Theorem \ref{main} was first stated by Dontchev-Rockafellar \cite{DR96} with a comment that it follows from a comparison of the mentioned Robinson's result and another theorem (proved by Eaves and Rothblum \cite{ER90}) containing an openness criterion for piecewise affine mappings. The given proof seems to give the first self-contained and reasonably short justification for the result. We refer to \cite{DR,FP} for further details. \subsection{Differential inclusions -- existence of solutions.} Here we consider the Cauchy problem for differential inclusions: \begin{equation}\label{di1} \dot x\in F(t,x),\quad x(0)=x_0, \end{equation} where $F:I\!\!R\timesI\!\!R^n\rraI\!\!R^n$. We assume that $\bullet$ \ $F$ is defined on some $\Delta\times U$ (that is $F(t,x)\neq\emptyset$ for all $x\in U$ and almost all $t\in\Delta$), where $\Delta = [0,T]$ and $U$ is an open subset of $I\!\!R^n$ containing $x_0$; $\bullet$ \ the graph of $F(t,\cdot)$ is closed for almost every $t\in \Delta$; $\bullet$ \ $F$ is measurable in $t$ in the sense that the function $t\mapsto d((x,y),{\rm Graph}~ F(t,\cdot))$ is measurable for all pairs $(x,y)\inI\!\!R^n\timesI\!\!R^n$. By a solution of (\ref{di1}) on $[0,\tau]\subset [0,\Delta]$ we mean any absolutely continuous $x(t)$ defined on $[0,\tau]$ and such that $\dot x(t)\in F(t,x(t))$ almost everywhere on $[0,\tau]$. \begin{theorem}\label{exdif} Assume that there is a summable $k(t)$ such that \begin{equation}\label{di2} h(F(t,x),F(t,x'))\le k(t)\| x-x'\|,\quad\forall\; x,x'\in U,\;\text{a.e. on}\quad [0,1]. \end{equation} Let further $x_0(\cdot)$ be an absolutely continuous function on $[0,T]$ with values in $U$ such that $x_0(0)=x_0$ and $\rho(t)= d(\dot x_0(t),F(t,x_0(t)))$ is a summable function. 
Then there is a solution of (\ref{di1}) defined on some $[0,\tau]$, $\tau>0$. Specifically, set $r=d(x_0,I\!\!R^n\backslash U)$, and let $\tau\in (0,T]$ be so small that \begin{equation}\label{di3} 1> k_{\tau}= \int_0^{\tau} k(t)dt;\quad(1-k_{\tau})r> \xi_{\tau}=\int_0^{\tau}d(\dot x_0(t),F(t,x_0(t)))dt. \end{equation} Then for any $\varepsilon>0$ there is a solution $x(\cdot)$ of (\ref{di1}) defined on $[0,\tau]$ and satisfying \begin{equation}\label{di4} \int_0^{\tau}\| \dot x(t)-\dot x_0(t)\|\le \frac{1+\varepsilon}{1-k_{\tau}}\xi_{\tau}. \end{equation} \end{theorem} \noindent Recall that $h(P,Q)$ is the Hausdorff distance between $P$ and $Q$. \proof We may set $x_0(t)\equiv0$ (replacing if necessary $F(t,x)$ by $F(t,x_0(t)+x)-\dot x(t)$ and $U$ by $r\overset{\circ}B$). Let $X=W_0^{1,1}[0,\tau]$ stand for the space of $I\!\!R^n$-valued absolutely continuous functions on $[0,\tau]$ equal to zero at zero with the norm $$ \| x(\cdot)\|_{\tau}=\int_0^{\tau} \| \dot x(t)\|dt, $$ and let $I$ denote the identity map in $X$. Let finally ${\mathcal F}$ be the set-valued mapping from $X$ into itself that associates with every $x(\cdot)$ the collection of absolutely continuous functions $y(\cdot)$ such that $y(0)=0$ and $\dot y(t)\in F(t,x(t))$ a.e.. We have to prove the existence of an $x(\cdot)\in X$ satisfying (\ref{di4}) and \begin{equation}\label{di5} 0\in (I-{\mathcal F})(x(\cdot)) \end{equation} Note first that the graph of ${\mathcal F}$ is closed, that is whenever $x_n(\cdot)\to x(\cdot)$, $y_n(\cdot)\in{\mathcal F}(x_n(\cdot))$ and $y_n(\cdot)$ norm converge to $y(\cdot)$, then $y(\cdot)\in {\mathcal F}(x(\cdot))$. Let ${\mathcal U}$ be the open ball of radius $r$ around zero in $X$. Thus $x(t)\in U$ for any $t\in [0,\tau]$ whenever $x(\cdot)\in {\mathcal U}$ and therefore by (\ref{di2}) ${\mathcal F}$ is Lipschitz on ${\mathcal U}$ with ${\rm lip}{\mathcal F}({\mathcal U})\le k_{\tau}$. On the other hand, $I$ is Milyutin regular on ${\mathcal U}$ with ${\rm sur}_mI({\mathcal U})=1$. By Theorem \ref{milt1} \begin{equation}\label{di6} {\rm sur}_m(I-{\mathcal F})({\mathcal U})\ge 1-k_{\tau}. \end{equation} In particular $B(y(\cdot),(1-k_{\tau})\rho)\subset (I-{\mathcal F})(\rho B)$ for any $y(\cdot)\in (I-{\mathcal F})(0)$ if $\rho< r$. Take a $y(\cdot)\in X$ such that $ \dot y(t)\in F(t,0)$ and $\|\dot y(t)\|=d(0,F(t,0))$ a.e.. Then $\| y(\cdot)\|_{\tau}= \xi_{\tau}<(1-k_{\tau})r$ by (\ref{di3}). Thus $0\in B(y(\cdot),(1-k_{\tau})\rho)$ for some $\rho<r$ and therefore there is an $x(\cdot)$ with $\| x(\cdot)\|_{\tau}<\rho$,\ $0\in(I-{\mathcal F})(x(\cdot))$. \endproof The theorem is close to the original result of Filippov \cite{AF67}. Versions of this results and its applications can be found in many subsequent publications, see e.g \cite{AC,AF}. Typical proofs of existence results for differential inclusions use either some iteration procedures or selection theorems to reduce the problem to existence of solutions of differential equations. Observe that our proof appeals to non-local regularity theory. \end{document}
arXiv
\begin{document} \author{Christian Hirsch} \author{Benedikt Jahnel} \author{Andr{\'a}s~T{\'o}bi{\'a}s} \address[Christian Hirsch]{University of Mannheim, Institute of Mathematics, 68161 Mannheim, Germany} \email{[email protected]} \address[Benedikt Jahnel]{Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstra{\ss}e 39, 10117 Berlin, Germany} \email{[email protected]} \address[Andr{\'a}s~T{\'o}bi{\'a}s]{Technical University of Berlin, Institute of Mathematics, Stra{\ss}e des 17.~Juni 136, 10623 Berlin, Germany.} \email{[email protected]} \keywords{Large deviations; lower tails; stabilizing functionals; random geometric graph; $k$-nearest neighbor graph; relative neighborhood graph; Voronoi tessellation; clique count} \subjclass[2010]{60K35; 60F10; 82C22} \begin{abstract}This work develops a methodology for analyzing large-deviation lower tails associated with geometric functionals computed on a homogeneous Poisson point process. The technique applies to characteristics expressed in terms of stabilizing score functions exhibiting suitable monotonicity properties. We apply our results to clique counts in the random geometric graph, intrinsic volumes of Poisson--Voronoi cells, as well as power-weighted edge lengths in the random geometric, $k$-nearest neighbor and relative neighborhood graph. \end{abstract} \title{Lower large deviations for geometric functionals} \section{Introduction and main results} \label{introSec} Considering the field of random graphs, there is a subtle difference in the understanding between upper and lower tails in a large-deviation regime. For instance, when considering the triangle count in the Erd\H{o}s--R\'enyi graph, the probability of observing atypically few triangles is described accurately via very general Poisson-approximation results \cite{janson1, janson2}. On the other hand, the probability of having too many triangles requires a substantially more specialized and refined analysis \cite{misLog}. This begs the question whether a similar dichotomy also arises in the large-deviation analysis of functionals that are of geometric rather than combinatorial nature. For instance, Figure \ref{rareFig} shows a typical realization of the random geometric graph in comparison to a realization with an atypically small number of edges. In geometric probability, elaborate results are available for large and moderate deviations of geometric functionals exhibiting a similar behavior in the upper and the lower tails \cite{yukLDP2,yukLDP,eichelsbacher}. However, they prominently do not cover the edge count in the random geometric graph, whose upper tails have been understood only recently \cite{harel}. \begin{figure} \caption{Typical realization of the random geometric graph (left) next to a realization having fewer than $75\%$ of the expected number of edges (right).} \label{rareFig} \end{figure} In the present work, we provide three general results, Theorems \ref{upThm}, \ref{gilbThm} and \ref{neighbThm}, tailored to studying large-deviation lower tails of geometric functionals. For the proofs, we resort to a method inspired by the idea of sprinkling \cite{sprinkling}. We perform small changes in those parts of the domain where the underlying point process exhibits highly pathological configurations. After this procedure, we can compare the resulting functionals to approximations that are then amenable to the point-process based large-deviation theory from \cite{georgii2} or \cite{yukLDP2,yukLDP}. 
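To get a feeling for the functional behind Figure \ref{rareFig}, the following small Python sketch (our own illustration; the box size $n=20$, radius $t=1$ and sample count are arbitrary choices, and scipy is assumed to be available) samples a unit-intensity Poisson point process on a square box and records the edge count of the associated random geometric graph; realizations falling far below the empirical mean are precisely the lower-tail events studied in this paper.

import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

def rgg_edge_count(n=20.0, t=1.0):
    # unit-intensity Poisson point process on Q_n = [-n/2, n/2]^2
    points = rng.uniform(-n / 2, n / 2, size=(rng.poisson(n * n), 2))
    # number of edges of the geometric graph with connectivity radius t
    return int(np.sum(pdist(points) < t))

counts = [rgg_edge_count() for _ in range(200)]
print(np.mean(counts), min(counts))  # a typical value versus the smallest observed one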
Among the examples covered by our method are clique counts in the random geometric graph, inner volumes of Poisson--Voronoi cells and power-weighted edge lengths in the random geometric, $k$-nearest neighbor and relative neighborhood graph. In the rest of this section, we set up the notation and state the main results. Then, Section \ref{sec-examples} illustrates those results through the examples. Finally, Section \ref{sec-proofs} contains the proofs. We study functionals on a homogeneous Poisson point process $X = \{X_i\}_{i \ge 1} \subset \mbb R^d$ with intensity 1, whose distribution on the space $\mathbf N$ of locally-finite configurations will be denoted by $\mbb P$. Following the framework of \cite{yukLDP2}, these functionals are realized as averages of scores associated to the points of $X$. More precisely, a \emph{score function} $$\xi:\, \mbb R^d \times \mathbf N \to [0, \infty)$$ is any bounded measurable function. To simplify notation, we shift the coordinate system to the considered point and write $\xi(X - X_i) = \xi(X_i, X)$. In this notation $\varphi\mapsto\xi(\varphi)$ acts on configurations $\varphi\in \mathbf N_o$, the family of locally-finite point configurations with a distinguished node at the origin $o \in \mbb R^d$. We then consider lower tails of functionals of the form \begin{align} \label{hnEq} H_n = H_n^\xi(X) = \frac1{n^d}\sum_{X_i \in X\cap Q_n}\xi(X - X_i), \end{align} i.e., averages of the score function over all points in the box $Q_n = [-n/2, n/2]^d$ of side length $n \ge 1$ centered at the origin. In a first step, we derive upper bounds for the lower tail probabilities. To that end, we work with approximating score functions $\xi^r$ that are \emph{$r$-dependent} for some $r > 0$. That is, $\xi^r(\varphi) = \xi^r(\varphi \cap B_r)$ for every $\varphi \in\mathbf N_o$, where $B_r$ denotes the Euclidean ball of radius $r$ centered at the origin. To state the main results, we resort to the entropy-based formulation of the large-deviation rate function. We write $$h(\mbb Q) =\lim_{n\uparrow\infty}\frac{1}{n^d}\int{\rm d} \mbb Q_n\log\frac{{\rm d}\mbb Q_n}{{\rm d} \mbb P_n}$$ for the \emph{specific relative entropy} of a stationary point process $\mbb Q$, where $\mbb Q_n$ and $\mbb P_n$ denote the restrictions of $\mbb Q$ and $\mbb P$ to the box $Q_n$, respectively. If $\mbb Q_n$ is not absolutely continuous with respect to the restricted Poisson point process, we adhere to the convention that the above integral is infinite. Further, $\mbb Q^o[\xi]$ is the expectation of $\xi$ with respect to the Palm version $\mbb Q^o$ of $\mbb Q$, see~\cite{georgii2} for details. Here is our first main theorem. \begin{theorem}[Upper bound] \label{upThm} Let $a > 0$ and assume the score function $\xi$ to be the pointwise increasing limit of a family $\{\xi^ r\}_{r \ge 1}$ of $r$-dependent score functions. Then, \begin{align} \label{gilbUpEq} \limsup_{n \uparrow \infty} \frac {1}{n^d}\log \mbb P(H_n \le a) \le -\inf_{\mbb Q:\, \mbb Q^o[\xi] \le a}h(\mbb Q). \end{align} \end{theorem} For the lower bound, we give two sets of conditions. The first deals with score functions $\xi$ that are \emph{increasing} in the sense that $\xi(\varphi) \le \xi(\psi)$ for every $\varphi \subset \psi$. This applies for instance to clique counts and power-weighted edge lengths in the random geometric graph. \begin{theorem}[Lower bound for bounded-range scores] \label{gilbThm} Let $a> 0$ and assume the score function $\xi$ to be increasing and $r$-dependent for some $r > 0$. 
Moreover, assume that for every $b > 0$ there exists $M = M(b) > 0$ such that $\xi(\varphi) \le M$ whenever $\#\varphi < b$. Then, \begin{align} \label{gilbLowEq} \liminf_{n \uparrow \infty} \frac{1}{n^d}\log \mbb P( H_n < a) \ge -\inf_{\mbb Q:\, \mbb Q^o[\xi] < a}h(\mbb Q). \end{align} \end{theorem} However, many score functions are neither $r$-dependent nor increasing, or not even monotone. A prime example is the sum of power-weighted edge lengths in the $k$-nearest neighbor graph, see Section~\ref{sec-examples}. Still, this example and many other score functions are stabilizing, $R$-bounded and weakly decreasing in the following sense. First, a score function $\xi$ is \emph{stabilizing} if there exists a $\mbb P^o$-almost surely finite measurable \emph{stabilization radius} $R:\ \mathbf N_o \to [0, \infty]$, such that $\{R(X) \le r\}$ is measurable with respect to $X \cap B_r$ for every $r \ge 0$ and $$\mbb P^o\big( \xi(X) = \xi(X \cap B_{R(X)})\big) = 1.$$ In words, $\xi(X)$ does not depend on the configuration outside the ball $B_{R(X)}$. We call $R$ {\em decreasing} if $R(\varphi\cup\{x\})\le R(\varphi)$ for all $\varphi\in \mathbf N_o$ and $x\in \mbb R^d$. Second, $\xi$ is \emph{$R$-bounded} if for every $\delta > 0$ and sufficiently large $M=M(\delta) \ge 1$, $$\mbb P^o\big(\{R(X) \le M\} \cap \{\xi(X) \ge \delta M^d\}\big) = 0.$$ Loosely speaking, the score function is negligible compared to the $d$th power of the stabilization radius. Third, $\xi$ is \emph{weakly decreasing} if $$ \mbb P\big(\#\{y\in X\colon \xi(X\cup\{o\} - y)>\xi(X - y)\}\le k\big) = 1 $$ holds for some $k \ge 1$. In words, for all but at most $k$ points of a configuration, adding a new point to the configuration decreases the score function value of the point. Finally, we need to ensure that sprinkling a sparse configuration of Poisson points yields control on the stabilization radii of the points in a box. More precisely, we assume that the stabilization radius is \emph{regular} in the following sense. Let $X^{+, M}$ denote a Poisson point process with intensity $M^{-d}$ that is independent of $X$. Then, we assume that there exists $K_0 > 0$ with the following property. For every $\delta > 0$ there exist $M_0 = M_0(\delta) \ge 1$ and $n_0=n_0(\delta) \geq 1$ such that for all $M \ge M_0$ and $n \geq n_0$, $$\mbb P\big(\{ X^{+, M}(Q_n) \le K_0 (n/M)^d \} \cap E_n^{M, +}|X \big) \ge \exp(-\delta n^d)$$ holds almost surely. Here, for $\varphi\in\mathbf N$ and any measurable subset $A\subset \mbb R^d$, we write $\varphi(A)=\#\{x\in \varphi\colon x\in A\}$ for the number of points of $\varphi$ contained in $A$, and $$E_n^{M, +} = \max_{X_i \in (X \cup X^{+, M})\cap Q_n} R\big((X \cup X^{+, M}) - X_i\big) \le M$$ denotes the event that after the sprinkling, the stabilization radii of all points in $Q_n$ are at most $M$. Here is the corresponding main result. \begin{theorem}[Lower bound for stabilizing scores] \label{neighbThm} Let $a > 0$ and $\xi$ be a weakly-decreasing $R$-bounded score function with a decreasing and regular radius of stabilization. Then, \eqref{gilbLowEq} remains true. \end{theorem} \section{Examples}\label{sec-examples} In this section, we discuss how to apply the results announced in Section \ref{introSec} to a variety of examples arising in geometric probability. More precisely, Sections \ref{rggSec}, \ref{vorSec} and \ref{knnSec} are devoted to characteristics for the random geometric graph, the Voronoi tessellation, $k$-nearest neighbor graphs and relative neighborhood graphs, respectively. 
\subsection{Clique counts and power-weighted edge lengths in random geometric graphs} \label{rggSec} As a first simple application of our results, consider the set $$ C_k(\varphi) = C_{k, t}(\varphi)= \big\{\{x_1,\dots, x_k\}\subset\varphi\colon x_1=o \text{ and }|x_i - x_j|<t \text{ for all $i \ne j$} \big\} $$ of \emph{$k$-cliques} associated to the origin in the geometric graph on $\varphi \in \mathbf N_o$ with connectivity radius $t > 0$. Then, for $k \ge 2$ and $\alpha \ge 0$, the score functions $$ \xi_k(\varphi) = \frac 1k \#C_k(\varphi) \qquad\text{ and }\qquad \xi'_\alpha(\varphi) = \frac 12\sum_{\substack{x \in \varphi\colon |x| < t}}|x|^\alpha $$ count the number of $k$-cliques containing the origin and the power-weighted edge lengths at the origin, respectively. Note that $\xi_k$ and $\xi'_\alpha$ are $t$-dependent and increasing. Additionally, if $\#\varphi < b$, then $\xi_k(\varphi) \le k^{-1}b^{k - 1}$ and $\xi'_\alpha(\varphi) \le t^\alpha b$. Hence Theorems \ref{upThm} and \ref{gilbThm} are applicable. Further examples arise in the context of topological data analysis. More precisely, the number of $k$-cliques containing the origin is precisely the number of $k$-simplices of the Vietoris--Rips complex containing the origin. Similar arguments also apply to the \v{C}ech complex, the second central simplicial complex in topological data analysis. We refer the reader to \cite[Section 2.5]{chazal} for precise definitions and further properties. \subsection{Intrinsic volumes of Voronoi cells} \label{vorSec} Recall the definition of the Voronoi cell at the origin of a locally-finite configuration $\varphi\in\mathbf N_o$, i.e., $$C_o(\varphi) = \{x \in \mbb R^d:\,|x| \le \inf_{y \in \varphi}|x - y|\}.$$ Recall that since $C_o(\varphi)$ is a convex body, its \emph{intrinsic volumes} $v_0(C_o), v_1(C_o), \dots, v_d(C_o)$ can be computed. They are key characteristics of a convex set, e.g., $v_1$, $v_{d - 1}$ and $v_d$ are proportional to the mean width, the surface area and the volume, respectively. We refer the reader to \cite[Section 14.2]{sWeil} for a precise definition and further properties. In particular, considering $v_1$ in dimension $d = 2$, the associated characteristic $n^dH_n$ becomes the total edge length of the Voronoi graph, so that we obtain a link to the setting studied in \cite[Section 2.4.1]{yukLDP}. Due to the intricate geometry, deriving a full large deviation principle even for a strictly concave function of the edge length was only achieved for a Poisson point process that is restricted to a lattice instead of living in the entire Euclidean space. This example illustrates that even in situations where understanding the large-deviation upper tails requires a delicate geometric analysis, the lower tails may be more accessible. More precisely, consider the score functions $$ \xi_k(\varphi) = v_k(C_o(\varphi)) $$ and note that $\xi_k^r(\varphi) = v_k\big(C_o(\varphi)\cap B_r\big)$ is a $4r$-dependent, pointwise increasing approximation of $\xi_k(\varphi)$. Hence, the upper bound of Theorem \ref{upThm} applies. For the lower bound, the conditions of Theorem \ref{neighbThm} can be satisfied using the following definitions. The radius of stabilization is described in \cite[Section 6.3]{gaussLim}: Take any collection $\{S_i\}_{i\in I}$ of cones with apex at the origin and angular radius $\pi/12$ whose union covers $\mbb R^d$, where $I = I(d) \in \mbb N$. 
Let $S_i^+$ denote the cone that has the same apex and symmetry hyperplane as $S_i$ and has the larger angular radius $\pi/6$. Then, we define the stabilization radius \begin{align}\label{StabRadVor} R(\varphi) = 2\max_{i\in I}\min_{x\in \varphi\cap S_i^+}|x|, \end{align} as twice the radius at which the origin has a neighbor in every extended cone. In particular, both $R$ and $\xi_k$ are decreasing. Since $C_o(\varphi) \subset B_{R(\varphi)}$, we deduce that $$\xi_k(\varphi) \le v_k(B_{R(\varphi)}) = R(\varphi)^k v_k(B_1).$$ In particular, $\xi_k$ is $R$-bounded for $k < d$. Finally, we define for a suitable constant $L = L(d) \ge 1$ the event \begin{equation} A^M_n=\{X^{+, M}(Q_{M/L}(z)) = 1\text{ for all }z\in (M/L)\mbb Z^d\cap Q_{2n}\} \end{equation} that $X^{+, M}$ has precisely one point in each sub-box from an $M/L$-partition of the box $Q_{2n}$. It follows from the definition of $R$ that the event $E_n^{M, +}$ occurs whenever $A^M_n$ occurs, provided that $L$ is chosen sufficiently large. Moreover, setting $K_0 = (2L)^d$, we deduce that $X^{+, M}(Q_n) \le K_0 (n/M)^d$ under $A^M_n$. Hence, it remains to establish the asserted lower bound on the probability $\mbb P(A^M_n)$. Fixing $\delta > 0$ and invoking the independence property of the Poisson point process yields that $$\mbb P(A^M_n) = \mbb P(X^{+, M}(Q_{M/L}) = 1)^{(2nL/M)^d} = {\rm e}^{-(2n/M)^d}L^{-(2nL/M)^d} \ge {\rm e}^{-\delta n^d},$$ provided that $M = M(\delta)$ is sufficiently large. Summarizing the above findings, we deduce that Theorem \ref{neighbThm} can be applied to get the lower bound on the rate function. \subsection{Power-weighted edge counts in $k$-nearest neighbor graphs and relative neighborhood graphs}\label{knnSec} Finally, we elucidate how to apply Theorem \ref{neighbThm} to the power-weighted edge count of two central graphs in computational geometry, namely the $k$-nearest neighbor graph and the relative neighborhood graph. As we shall see, in contrast to the Voronoi example presented in Section \ref{vorSec}, we encounter here score functions that are weakly decreasing but not decreasing. A full large deviation principle for the total edge length of the $k$-nearest neighbor graph is described in \cite[Section 2.3]{yukLDP}, and we believe that the proof should extend to power-weighted edge lengths with a power strictly less than $d$. Nevertheless, we apply here our approach towards the large-deviation lower tails as it can be directly adapted to the bidirectional $k$-nearest neighbor graph, the relative neighborhood graph and possibly further graphs. In the \emph{undirected $k$-nearest neighbor graph}, $\xi$ expresses the powers of distances between any point and the origin, such that at least one of them belongs to the set of $k$ nearest neighbors of the other one. To be more precise, \begin{align}\label{k-radius} \mathfrak R_k(\varphi) =\inf \{ r>0 \colon \varphi(B_r) \geq k + 1 \} \end{align} defines the \emph{$k$-nearest neighbor radius} of $o$ in $\varphi \in \mathbf N_o$. Then, for some $\alpha \ge 0$, the score function corresponding to the sum of power-weighted edge lengths of the $k$-nearest neighbor graph is defined via \begin{align*} \xi_{k, \alpha}(\varphi) = \frac12 \sum_{\substack{x \in \varphi\colon |x| \le \mathfrak R_k(\varphi) \vee \mathfrak R_k(\varphi-x)}}|x|^\alpha. \end{align*} In particular, we recover the number of edges by setting $\alpha = 0$. 
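As a concrete illustration of the $k$-nearest neighbor radius $\mathfrak R_k$ and the score $\xi_{k, \alpha}$ just defined, here is a short Python sketch (our own; it operates on a finite configuration standing in for the Poisson process, and the function names are ours). Replacing max by min in the last comparison gives the bidirectional variant mentioned below.

import numpy as np

def knn_radius(points, i, k):
    # distance at which the ball around points[i] contains k+1 configuration points,
    # i.e. the distance from points[i] to its k-th nearest neighbor
    dists = np.sort(np.linalg.norm(points - points[i], axis=1))
    return dists[k]  # dists[0] = 0 is the point itself

def knn_score(points, i, k, alpha):
    # xi_{k,alpha} at points[i]: half the sum of |x|^alpha over edges of the
    # undirected k-nearest neighbor graph incident to points[i]
    r_i = knn_radius(points, i, k)
    total = 0.0
    for j in range(len(points)):
        if j == i:
            continue
        d = float(np.linalg.norm(points[j] - points[i]))
        if d <= max(r_i, knn_radius(points, j, k)):
            total += d ** alpha
    return 0.5 * total

pts = np.random.default_rng(1).uniform(0.0, 5.0, size=(30, 2))
print(knn_score(pts, 0, k=2, alpha=1.0))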
As noted in \cite[Section 6.3]{gaussLim}, to construct a radius of stabilization we can proceed as in \eqref{StabRadVor} except for replacing $\min_{x\in \varphi\cap S^+_i}|x|$ by the distance of the $k$th closest point from the origin in $\varphi \cap S_i^+$. Hence, $\xi_{k, \alpha}$ becomes stabilizing with a decreasing stabilization radius. In the same vein, a minor adaptation of the arguments in Section \ref{vorSec} yield the regularity and $R$-boundedness for $\alpha < d$. In order to apply Theorem~\ref{neighbThm} for the lower bound, it remains to verify the following. \begin{lemma} \label{lemma-wd_undirknng} $\xi_{k, \alpha}$ is weakly decreasing. \end{lemma} \begin{proof} Let us call $\varphi\in\mathbf N$ \emph{nonequidistant} if for all $y,z,v,w \in \varphi$, $|y-z| = |v-w| > 0$ implies $\{ y,z \}=\{ v, w \}$. First note that for any $x\in\mbb R^d$, under $\mbb P$, almost all configurations $\varphi\cup\{x\}$ are nonequidistant. We claim that for any nonequidistant configuration $\varphi\cup\{x\}$, we have for all but at most $k$ points $y \in \varphi$ that \[ \xi_k(\varphi \cup \{ x \} -y) \leq \xi_k(\varphi - y). \numberthis\label{NNdecrease} \] Indeed, for $y \in \varphi$, let us define the set of $k$ nearest neighbors of $y$ in $\varphi$ as follows \[ k\mathrm{NN}(\varphi,y)=\big(B_{\mathfrak R_k(\varphi-y)}(y) \cap \varphi\big) \setminus \{ y \}. \] Now, if $y \in k\mathrm{NN}(\varphi\cup \{ x \},x)$, then possibly $\xi_k(\varphi \cup \{ x \} -y) > \xi_k(\varphi - y)$. We claim that else \eqref{NNdecrease} holds. Indeed, if $y \notin k\mathrm{NN}(\varphi\cup \{ x \},x)$, then there are two possibilities. If $x \in k\mathrm{NN}(\varphi\cup \{ x \},y)$, then $x$ replaced precisely one neighbor $z$ of $y$ and is closer to $y$ than $z$. More precisely, note that $|x-y|\leq \mathfrak R_k(\varphi\cup\{x\}-y) \leq \mathfrak R_k(\varphi-y)$. Hence, there exists $z \in k\mathrm{NN}(\varphi,y)$ such that $|z-y|=\mathfrak R_k(\varphi-y)$ and $z \notin k\mathrm{NN}(\varphi \cup \{ x \},y)$, the neighbor of $y$ that is replaced by $x$. Additionally, for any $w \in k\mathrm{NN}(\varphi,y) \setminus \{ z \}$ also $w \in k\mathrm{NN}(\varphi \cup \{ x \},y)$. Further, also for any $v\in\varphi$ such that $y\ink\mathrm{NN}(\varphi\cup\{x\},v)$ we have $y\ink\mathrm{NN}(\varphi,v)$. Hence, \[ \xi_k(\varphi \cup \{ x \} -y ) - \xi_k(\varphi-y)\le |x-y|^{\alpha} - |z-y|^{\alpha} \leq 0, \] which is \eqref{NNdecrease}. The other possibility is that $x \notin k\mathrm{NN}(\varphi\cup \{ x \},y)$. Then the addition of $x$ can only remove edges that were present due to the fact that some other point had $y$ as a neighbor. In this case, $\xi(\varphi \cup \{ x \}-y)=\xi(\varphi-y)$ unless there exists $z \in \varphi$ such that $y \in k\mathrm{NN}(\varphi,z)$ but $y \notin k\mathrm{NN}(\varphi \cup \{ x \},z)$, which must be due to the property that $x \in k\mathrm{NN}(\varphi \cup \{x\},z)$. So again, the addition of $x$ can only remove such an edge and hence again~\eqref{NNdecrease} holds for $y$. \end{proof} Note that the approach presented above also applies to further graphs studied in computational geometry. The most immediate adaptation concerns the \emph{bidirectional $k$-nearest neighbor graph}, see~\cite{BB08}, where in the definition of the score function, we replace $\mathfrak R_k(\varphi) \vee \mathfrak R_k(\varphi - x)$ by $\mathfrak R_k(\varphi) \wedge \mathfrak R_k(\varphi - x)$. 
Not only can we take the same radius of stabilization, but also Lemma~\ref{lemma-wd_undirknng} remains valid. As a third example, we showcase the \emph{relative neighborhood graph}. Here, for $\alpha\ge 0$ and $\varphi\in\mathbf N_o$ the score function is given by \begin{align*} \xi_{\rm RN}(\varphi) = \frac12 \sum_{\substack{x \in \varphi\colon \varphi\cap B_{|x|}(o)\cap B_{|x|}(x)=\emptyset}}|x|^{\alpha}. \end{align*} The {\em relative neighborhood graph} is a sub-graph of the Delaunay tessellation, and in fact we can reuse the radius of stabilization from Section \ref{vorSec}. Finally, proving the analog of Lemma \ref{lemma-wd_undirknng} reduces to the observation that the degree of every node in the relative neighborhood graph is bounded by a constant $K=K(d)$, see \cite[Section IV]{RNG}. What remains to be verified is that $\xi_{\rm RN}$ is weakly decreasing. \begin{lemma}\label{lemma-almostincreasing_RN} $\xi_{\rm RN}$ is weakly decreasing. \end{lemma} \begin{proof} We claim that for any nonequidistant configuration $\varphi\cup\{x\}$ with $\varphi\in\mathbf N$, for all but at most $K$ points $y \in \varphi$, \[ \xi_{\rm RN}(\varphi \cup \{ x \} -y) \leq \xi_{\rm RN}(\varphi - y)\numberthis\label{RNdecrease} \] holds. Indeed, for $y \in \varphi$, let us define the set of relative neighbors of $y$ in $\varphi$ as follows \[ \mathrm{RN}(\varphi,y):=\{z\in \varphi\setminus\{y\}\colon \varphi\cap B_{|z-y|}(y)\cap B_{|z-y|}(z)=\emptyset\}, \] and note that $z\in \mathrm{RN}(\varphi,y)$ if and only if $y\in\mathrm{RN}(\varphi,z)$. In particular, $\#\mathrm{RN}(\varphi,y)\le K$ for any $y\in \varphi$. So, if $y \in \mathrm{RN}(\varphi\cup \{ x \},x)$, then possibly $\xi_{\rm RN}(\varphi \cup \{ x \} -y) > \xi_{\rm RN}(\varphi - y)$. But if $y \notin \mathrm{RN}(\varphi\cup \{ x \},x)$, then \begin{align*} &\xi_{\rm RN}(\varphi \cup \{ x \} -y ) - \xi_{\rm RN}(\varphi-y)\\ &=\tfrac{1}{2}\sum_{z\in \varphi-y}|z-y|^\alpha\Big(\mathbbmss{1}\{z\in \mathrm{RN}(\varphi\cup\{x\},y)\}-\mathbbmss{1}\{z\in \mathrm{RN}(\varphi,y)\}\Big)\le0, \end{align*} as asserted. \end{proof} \section{Proofs}\label{sec-proofs} In this section we provide the proofs of the main theorems. \subsection{Proof of Theorem \ref{upThm}} \label{upSec} The proof of the upper bound relies on the level-3 large deviation principle for the Poisson point process from \cite[Theorem 3.1]{georgii2}. \begin{proof}[Proof of Theorem \ref{upThm}] Replacing $\xi^ r$ by $\xi^ r \wedge r$ if necessary, we may assume that $\xi^ r$ is bounded above by $r$. Then, $\xi^ r$ is a bounded local observable, so that by the contraction principle \cite[Theorem 4.2.10]{dz98} and \cite[Theorem 3.1]{georgii2}, $$\limsup_{n \uparrow \infty} \frac1{n^d} \log \mbb P(H_n \le a) \le \limsup_{n \uparrow \infty} \frac1{n^d} \log \mbb P(H_n^{\xi^ r} \le a) \le -\inf_{\mbb Q:\, \mbb Q^o[\xi^ r] \le a}h(\mbb Q).$$ Hence, it suffices to show that $$-\lim_{r \uparrow \infty}\inf_{\mbb Q:\, \mbb Q^o[\xi^ r] \le a}h(\mbb Q) \le -\inf_{\mbb Q:\, \mbb Q^o[\xi] \le a}h(\mbb Q).$$ Let $\{\mbb Q_k\}_{k \ge 1}$ be a family of stationary point processes such that $\mbb Q_k^o[\xi^k] \le a$ and $$\lim_{k \uparrow \infty}h(\mbb Q_k) = \lim_{r \uparrow \infty}\inf_{\mbb Q:\, \mbb Q^o[\xi^ r] \le a}h(\mbb Q).$$ Let $\mbb Q_*$ be a subsequential limit of $\{\mbb Q_k\}_{k \ge 1}$. To simplify the presentation, we may assume $\mbb Q_*$ to be the limit of $\{\mbb Q_k\}_{k \ge 1}$. 
Then, by monotone convergence, $$\mbb Q_*^o[\xi] \le \lim_{r \uparrow \infty}\mbb Q_*^o[\xi^ r] = \lim_{r \uparrow \infty}\lim_{k \uparrow \infty}\mbb Q_k^o[\xi^r]\le \limsup_{k \uparrow \infty}\mbb Q_k^o[\xi^k] \le a.$$ Since the specific relative entropy $h$ is lower semicontinuous, we arrive at $$\liminf_{k \uparrow \infty} h(\mbb Q_k) \ge h(\mbb Q_*) \ge \inf_{\mbb Q:\, \mbb Q^o[\xi] \le a}h(\mbb Q),$$ as asserted. \end{proof} \subsection{Proof of Theorem~\ref{gilbThm}} To prove Theorem \ref{gilbThm}, we consider the truncation $\xi^M=\xi \wedge M$ of the original increasing and $r$-dependent score function $\xi$ at a large threshold $M > 1$ and write $H_n^M=H_n^{\xi^M}$. In comparison to the arguments in Section \ref{upSec}, the proof of the lower bound is more involved, since we can no longer replace $\mbb P(H_n \le a)$ by $\mbb P(H_n^M \le a)$. Instead, we rely on a sprinkling approach. For this method to work, we need that the total number of points in pathological areas is small with high probability. More precisely, we say that a point $X_i \in X$ is \emph{$b$-dense} if $X(Q_r(X_i)) > b$ and write $$N_{b, n}=N_{b,n}(X) = \#\{X_i \in X\cap Q_n:\, \text{$X_i$ is $b$-dense}\}$$ for the total number of $b$-dense points in $Q_n$. Then, $b$-dense points are indeed rare. \begin{lemma}[Rareness of $b$-dense points] \label{rareDenseLem} Let $\delta > 0$. Then, $$ \limsup_{b \uparrow \infty}\limsup_{n \uparrow \infty}\frac1{n^d} \log \mbb P(N_{b, n} > \delta n^d) = -\infty.$$ \end{lemma} In the second step, we remove all $b$-dense points through the coupling. That is, we let $X^{-, \e}$ be an independent thinning of $X$ with survival probability $1 - \varepsilon$. Furthermore, we let $X^{+, \e}$ be an independent Poisson point process with intensity $\varepsilon > 0$. Then, the coupled process $$X^\e = X^{-, \e} \cup X^{+, \e}$$ is again a Poisson point process with intensity 1. Now, let $$E_{b, n} = \{X^{+, \e} \cap Q_n = \emptyset\} \cap \{X^{-, \e} \cap Q_n \text{ has no $b$-dense points}\}$$ be the event that $X^{+, \e}$ has no points in $Q_n$ and that $X^{-, \e}$ does not contain any $b$-dense points in $Q_n$. \begin{lemma}[Removal of $b$-dense points] \label{remDenseLem} Let $b, n, \varepsilon > 0$. Then, $\mbb P$-almost surely, $$\mbb P(E_{b,n}|X) \ge \exp(-\varepsilon n^d + {N_{b, n}}\log(\varepsilon)).$$ \end{lemma} Before showing Lemmas \ref{rareDenseLem} and \ref{remDenseLem}, we illustrate how they enter the proof of \eqref{gilbLowEq}. \begin{proof}[Proof of Theorem \ref{gilbThm}] Let $M > 0$. Then, by \cite[Theorem 3.1]{georgii2}, $$ \liminf_{n \uparrow \infty} \frac1{n^d} \log \mbb P(H_n^M< a) \ge -\inf_{\mbb Q:\, \mbb Q^o[\xi^M] < a }h(\mbb Q) \ge -\inf_{\mbb Q:\, \mbb Q^o[\xi] < a }h(\mbb Q). $$ Hence, it remains to show that \begin{align} \label{gilbLowEq2} \liminf_{n \uparrow \infty} \frac1{n^d} \log \mbb P(H_n < a) \ge \liminf_{M \uparrow \infty}\liminf_{n \uparrow \infty} \frac1{n^d} \log \mbb P(H_n^M < a). \end{align} Let $b, \delta, \varepsilon > 0$ be arbitrary. Now, since $\xi$ is increasing, $$\mbb P(H_n < a) = \mbb P(H_n(X^\e) < a) \ge \mbb P(\{H_n^{M(b)} < a\} \cap E_{b, n}) = \mbb E\big[\mathbbmss{1}\{H_n^{M(b)} < a \}\mbb P[E_{b,n}\,|\,X]\big].$$ Thus, by Lemma \ref{remDenseLem}, \begin{align*} \mbb P(H_n < a) &\ge \exp(-\varepsilon n^d)\mbb E\big[\mathbbmss{1}\{H_n^{M(b)} < a \}\varepsilon^{N_{b, n}}\big]\\ &\ge \exp\big((\delta \log(\varepsilon) - \varepsilon) n^d\big)\mbb P(H_n^{M(b)} < a) - \mbb P(N_{b, n} > \delta n^d). 
\end{align*} Since $X$ and $X^\e$ share the same distribution, Lemma \ref{rareDenseLem} allows us to choose $b = b(\delta) > 0$ sufficiently large such that $$\liminf_{n \uparrow \infty} \frac1{n^d}\log\mbb P(H_n < a) \ge \delta \log(\varepsilon) - \varepsilon + \liminf_{n \uparrow \infty}\frac1{n^d} \log\mbb P(H_n^{M(b)} < a).$$ Hence, sending $\varepsilon\da0$, $\delta \downarrow 0$, and $b \uparrow \infty$ concludes the proof of \eqref{gilbLowEq2}. \end{proof} \begin{proof}[Proof of Lemma \ref{rareDenseLem}] Consider a subdivision of $Q_n$, for sufficiently large $n \ge 1$, into sub-boxes $Q_a(z_i)=z_i+Q_a$ of side length $a>r$ where $z_i\in a\mbb Z^d$. Let $N_i = X(Q_a(z_i))$ be the number of points in the $i$th sub-box and $N'_i =X(Q_{3a}(z_i))$ be the number of points the $i$th sub-box plus its adjacent sub-boxes. Then, $N_{b,n}\le N_{b,n}''$, where $$N_{b,n}'' = \sum_{i\in a\mbb Z^d\cap Q_n}N_i\mathbbmss{1}\{N'_i>b\},$$ so that by the exponential Markov inequality, for all $t> 0$, \begin{align*} \log \mbb P(N_{b, n} > \delta n^d)&\le \log \mbb P(N_{b, n}'' > \delta n^d)\le -\delta t n^d+\log \mbb E[\exp(tN_{b, n}'')]. \end{align*} Since the random variables $N_i\mathbbmss{1}\{N'_i>b\}$ and $N_j\mathbbmss{1}\{N'_j>b\}$ are independent whenever $\Vert z_i - z_j\Vert_\infty\ge 3$, we have $3^d$ regular sub-grids of $a\mbb Z^d$ containing independent random variables $N_i\mathbbmss{1}\{N'_i>b\}$. Thus, using H\" older's inequality, independence and the dominated convergence theorem, we arrive at \begin{align*} \limsup_{b \uparrow \infty}\limsup_{n \uparrow \infty}\frac{1}{n^d}\log \mbb E[\exp(tN_{b,n}'')]\le\frac{1}{(3a)^{d}}\limsup_{b \uparrow \infty} \log \mbb E\big[\exp(3^dtN_o\mathbbmss{1}\{N'_o>b\})\big] =\frac{1}{(3a)^{d}}. \end{align*} Since $t > 0$ was arbitrary, we conclude the proof. \end{proof} \begin{proof}[Proof of Lemma \ref{remDenseLem}] First, since $X^{+, \e}$ and $X^{-, \e}$ are independent, it suffices to compute $$\mbb P(X^{+, \e} \cap Q_n = \emptyset\,|\,X)\qquad \text{ and }\qquad \mbb P(X^{-, \e} \cap Q_n \text{ has no $b$-dense points}\,|\,X)$$ separately. The void probabilities for a Poisson point process give that $$\mbb P(X^{+, \e} \cap Q_n = \emptyset\,|\,X) = \exp(- \varepsilon n^d).$$ Next, since $X^{-, \e}$ is an independent thinning of $X$ with probability $\varepsilon$, we arrive at $$\mbb P(X^{-, \e} \cap Q_n \text{ has no $b$-dense points}\,|\,X) \ge \varepsilon^{N_{b, n}},$$ as asserted. \end{proof} \subsection{Proof of Theorem \ref{neighbThm}}\label{sec-stabproof} In order to prove the lower bound for stabilizing score functions, we use sprinkling to regularize sub-regions that are not sufficiently stabilized. Let us define the approximation $$\xi^{\de, M}(\varphi) = \xi(\varphi \cap Q_M) \wedge \delta M^d$$ and write $H^{\de, M}_n = H_n^{\xi^{\de, M}}$. Similarly as before, we consider a coupling construction. Now, we let $X^{-, M}$ denote an independent thinning of $X$ with survival probability $1 - M^{-d}$ and $X^{+, M}$ an independent Poisson point process with intensity $M^{-d}$. Then, $$X^M = X^{-, M} \cup X^{+, M}$$ defines a unit-intensity Poisson point process. In this coupling, we consider events in which the sprinkling $X^{+, M}$ adds points wherever necessary to reduce the stabilization radius. More precisely, let $$E^M_n = \{X^{-, M}\cap Q_n = X\cap Q_n\} \cap \big\{X^{+, M}(Q_n) \le K_0 (n/M)^d\big\} \cap E^{M, +}_n.$$ As we shall prove below, the events $E^M_n$ occur with a high probability. 
\begin{lemma}[Sprinkling regularizes with high probability] \label{remreachinglem} Let $\delta > 0$ and $n\ge M\ge 1$ sufficiently large. Then, under the assumptions of Theorem \ref{neighbThm}, $\mbb P$-almost surely, \[\mbb P(E^M_n| X) \ge \exp\big(X(Q_n)\log(1-M^{-d}) - \delta n^d\big). \] \end{lemma} \begin{proof} Indeed, for given $X$, the event $\{X^{-, M}\cap Q_n=X\cap Q_n\}$ has probability $(1-M^{-d})^{X(Q_n)}$ and is independent of the event $\big\{X^{+, M}(Q_n) \le K_0 (n/M)^d\big\} \cap E^{M, +}_n$, which has probability at least $\exp(-\delta n^d)$. \end{proof} Now, we conclude the proof of Theorem~\ref{neighbThm}. \begin{proof}[Proof of Theorem~\ref{neighbThm}] Let $\delta>0$ and $M=M(\delta) > 1$ sufficiently large. Then, by $R$-boundedness, $$\mbb P(H_n < a) = \mbb P(H_n(X^M) < a) \ge \mbb P\big(\{H^{\de, M}_n(X^M) < a\} \cap E^M_n\big).$$ Moreover, under the event $E^M_n$, \begin{align*} H^{\de, M}_n(X^M) &=\frac1{n^d}\sum_{X_i \in X^{+, M}\cap Q_n} \xi^{\de, M}(X^M - X_i) +\frac1{n^d}\sum_{X_i \in X\cap Q_n} \xi^{\de, M}(X^M - X_i) \\ &\le K_0\delta +H^{\de, M}_n(X) + \frac1{n^d}\sum_{X_i \in X \cap Q_n} \big(\xi^{\de, M}(X^M - X_i) -\xi^{\de, M}(X - X_i)\big). \end{align*} Let us write $X^{M,0} = X$ and $X^{M,j+1} = X^{M,j}\cup\{X^{+, M}_j\}$ where $\{X^{+, M}_j\}_{1 \le j \le N(M)}$ is an arbitrary ordering of $X^{+, M}$. Then, since $\xi$ is weakly decreasing, \begin{align*} & \sum_{X_i \in X \cap Q_n} \big(\xi^{\de, M}(X^M - X_i) -\xi^{\de, M}(X-X_i)\big) \\ &\quad= \sum_{X_i \in X \cap Q_n}\sum_{j \le N(M)} (\xi^{\de, M}(X^{M,j} - X_i) -\xi^{\de, M}(X^{M,j-1}-X_i))\\ &\quad\le\delta M^d\sum_{j \le N(M)}\sum_{X_i \in X \cap Q_n}\mathbbmss{1}\big\{\xi^{\de, M}(X^{M,j} - X_i) >\xi^{\de, M}(X^{M,j-1}-X_i)\big\}\\ &\quad\le k \delta M^dN(M). \end{align*} Further note that $N(M) \le K_0(n/M)^d$, and thus we arrive at $$\mbb P(H_n(X^M) < a) \ge \mbb P\big(\{H^{\de, M}_n(X^M) < a\} \cap E^M_n\big) \ge \mbb P\big(\{H^{\de, M}_n(X) < a - 2kK_0 \delta\} \cap E^M_n\big).$$ Now, by conditioning on $X$ and applying Lemma~\ref{remreachinglem} for sufficiently large $n\ge M\ge 1$, \begin{align*} &\mbb P(H_n(X^M) < a) \ge \mbb E\big[\mathbbmss{1}\{H^{\de, M}_n(X) < a - 2kK_0 \delta \} \mbb P(E^M_n\, |\, X)\big]\\ &\quad\ge\mbb E\big[\mathbbmss{1}\{H^{\de, M}_n(X) < a - 2kK_0 \delta\}\exp\big(X(Q_n)\log(1-M^{-d})\big)\big] \exp(-\delta n^d). \end{align*} Moreover, for any $c > 0$, \begin{align*} &\mbb E\big[\mathbbmss{1}\{H^{\de, M}_n(X) < a - 2kK_0 \delta\}\exp\big(X(Q_n)\log(1-M^{-d})\big)\big]\\ &\quad\ge\exp\big(cn^d\log(1-M^{-d})\big)\mbb P(\{H^{\de, M}_n(X) < a - 2kK_0 \delta\}\cap\{X(Q_n)<cn^d\}), \end{align*} where for the first factor, \begin{align*} \liminf_{M\uparrow\infty}\frac1{n^d}\log\big(\exp\big(cn^d\log(1-M^{-d})\big)\big) = \liminf_{M\uparrow\infty}c\log(1-M^{-d}) = 0. \end{align*} Now, for the second factor, \begin{align*} \mbb P\big(\{H^{\de, M}_n(X) < a - 2kK_0 \delta\}\cap\{X(Q_n)<cn^d\}\big) &\ge \mbb P(H^{\de, M}_n(X) < a - 2kK_0 \delta) \\ &\phantom= -\mbb P(X(Q_n)\ge cn^d), \end{align*} where for large $c$ the second summand plays no role in the large deviations. Applying \cite[Theorem 3.1]{georgii2} on the local bounded observable $\xi^{\de, M}$ yields that \begin{align*} \liminf_{n \uparrow \infty}\frac1{n^d}\log\mbb P\big(H^{\de, M}_n(X) < a - 2kK_0 \delta\big) \ge -\inf_{\mbb Q:\, \mbb Q^o[\xi^{\de, M}] < a - 2kK_0 \delta} h(\mbb Q). 
\end{align*} Finally, if $\mbb Q^o[\xi] < a$, then $\limsup_{M \uparrow \infty}\mbb Q^o[\xi^{\delta,M}] < a - 2kK_0 \delta$ for a sufficiently small $\delta>0$, so that $$\liminf_{M \uparrow \infty} \Big( -\inf_{\mbb Q:\, \mbb Q^o[\xi^{\delta,M}] < a - 2kK_0 \delta} h(\mbb Q) \Big) \ge -\inf_{\mbb Q:\, \mbb Q^o[\xi] < a}h(\mbb Q),$$ as asserted. \end{proof} \normalsize \input{LowerLDPNetworks_ArXiv.bbl} \thanks{This work was co-funded by the German Research Foundation under Germany's Excellence Strategy MATH+: The Berlin Mathematics Research Center, EXC-2046/1 project ID: 390685689.} \end{document}
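The lower-bound proofs above both rest on a thinning/superposition coupling (the processes $X^{\varepsilon}$ and $X^{M}$). A minimal Python sketch of that coupling (our own illustration; the box size and the value of eps are arbitrary) is the following; conditioning on the first component is what the conditional probability estimates in Lemmas \ref{remDenseLem} and \ref{remreachinglem} refer to.

import numpy as np

rng = np.random.default_rng(1)

def coupled_poisson(n=10.0, eps=0.05):
    # sample a unit-intensity Poisson point process X on Q_n = [-n/2, n/2]^2,
    # keep each point independently with probability 1 - eps (this is X^{-,eps}),
    # and superpose an independent Poisson process X^{+,eps} of intensity eps;
    # the union X^{eps} is again a unit-intensity Poisson point process
    area = n * n
    x = rng.uniform(-n / 2, n / 2, size=(rng.poisson(area), 2))
    x_minus = x[rng.random(len(x)) < 1 - eps]
    x_plus = rng.uniform(-n / 2, n / 2, size=(rng.poisson(eps * area), 2))
    return x, np.vstack([x_minus, x_plus])

x, x_eps = coupled_poisson()
print(len(x), len(x_eps))  # both counts have mean n^2 = 100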
arXiv
Typing an 11 x 11 (or larger) Matrix [duplicate] How to use more than 10 tab stops in bmatrix or other amsmath matrix environments? 2 answers I would like to explicitly write out a n x n matrices in my paper, but once n \geq 11, I get compiling errors and it refuses to print the matrix. Here is what I have: \documentclass{standalone} \usepackage{amsmath,mathtools,amsthm} \usepackage{array} $$\begin{bmatrix*}[r] 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ -\fract{1}{2} & -\fract{1}{2} & \fract{1}{2}\sqrt{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & -\fract{1}{2}\sqrt{2} & \fract{1}{2}\sqrt{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & -\fract{1}{2}\sqrt{2} & \fract{1}{2}\sqrt{2} & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & -\fract{1}{2}\sqrt{2} & \fract{1}{2}\sqrt{2}& 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & -\fract{1}{2}\sqrt{2} & \fract{1}{2}\sqrt{2} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & -\fract{1}{2}\sqrt{2} & \fract{1}{2}\sqrt{2} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\fract{1}{2}\sqrt{2} & \fract{1}{2}\sqrt{2} & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\fract{1}{2}\sqrt{2} & \fract{1}{2} & \fract{-1}{2}\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix*} $$ I do not see any errors for this 11 x 11 matrix. However, when I type this in a blank standalone document, I get an error that says: "Missing $ inserted. $$\begin{bmatrix*}[r]" Is there another way to write an 11 x 11 (or larger) matrix so that it compiles? Or is it impossible to type matrices this large into LaTeX? (If it helps, I am using writelatex.com to compile my thesis.) Student4LifeStudent4Life marked as duplicate by Werner, Svend Tveskæg, user13907, Guido, user31729 Nov 18 '14 at 20:52 You need to reset the counter variable MaxMatrixCols. Its default value is 10. If you need, say, 15 cols, issue the instruction \setcounter{MaxMatrixCols}{15}. (See also footnote 2 on page 10 of the user guide of the amsmath package.) – Mico Nov 18 '14 at 19:08 Some additional observations. There is no \fract command; I assume you meant to write \frac. Similarly, there's no bmatrix* environment; try bmatrix instead. The use of $$ to enter and exit displaymath mode in a LaTeX document is deprecated; see Why is \[ ... \] preferable to $$ ... $$? for more information on this subject. Finally, you're getting the error message about the missing $ because you're trying to use the standalone document class without setting the option preview. – Mico Nov 18 '14 at 19:15 Thank you!! I didn't know there's a max amount of columns allowed for matrices. This is incredibly helpful! Yes, I did mean \frac, and not \fract. I'm glad you pointed it out. bmatrix* allows me to align right [r] or left [l]. If I use bmatrix, then some how it does not understand that I want a specific alignment. It will think [r] is in the first row first column and it will do a default center alignment. – Student4Life Nov 18 '14 at 19:31 As for the preview option you mentioned: for future reference, how might I add that? – Student4Life Nov 18 '14 at 19:32 \documentclass[preview]{standalone}. About the bmatrix* environment: You're absulutely correct, it does exist (as it's defined in the the mathtools package). – Mico Nov 18 '14 at 19:35 Summarizing some of my earlier comments: Use the standalone package with the option preview in order to avoid getting an error message about a missing $ symbol. Don't use $$ in a LaTeX document to start and end displaymath mode, as it's quite deprecated. See Why is \[ ... \] preferable to $$ ... $$? for more information on this subject. 
The matrix environments of the amsmath and mathtools packages work with the counter variable MaxMatrixCols. Its default value is 10; if you have a matrix with, say, 15 columns, issue the instruction \setcounter{MaxMatrixCols}{15}. There is no \fract command; use \frac instead. For the matrix at hand, I actually wouldn't use any \frac instructions. Instead, I'd use inline-style math notation, i.e., I'd write \sqrt{2}/2, etc. That way, the fractional expressions won't become too tiny to read with ease. \documentclass[preview]{standalone} \usepackage{geometry,mathtools} \setcounter{MaxMatrixCols}{11} \begin{document} \[ \begin{bmatrix*}[r] 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ -1/2 & -1/2 & \sqrt{2}/2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & -\sqrt{2}/2 & \sqrt{2}/2 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & -\sqrt{2}/2 & \sqrt{2}/2 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & -\sqrt{2}/2 & \sqrt{2}/2& 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & -\sqrt{2}/2 & \sqrt{2}/2 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & -\sqrt{2}/2 & \sqrt{2}/2 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\sqrt{2}/2 & \sqrt{2}/2 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\sqrt{2}/2 & 1/2 & -1/2\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix*} \] \end{document} Mico maybe use \sfrac{}{} of the xfrac package for the fractions ;) looks quite nice. – MaxNoe Nov 18 '14 at 20:19
CommonCrawl
\begin{definition}[Definition:Colatitude (Terrestrial)] Let $J$ be a point on Earth's surface that is not one of the two poles $N$ and $S$. Let $\phi$ denote the latitude of $J$. The '''colatitude''' of $J$ is the (spherical) angle $90 \degrees - \phi$, that is: :if $J$ is in the northern hemisphere of Earth, the '''colatitude''' is the (spherical) angle $\sphericalangle NOJ$ :if $J$ is in the southern hemisphere of Earth, the '''colatitude''' is the (spherical) angle $\sphericalangle SOJ$. \end{definition}
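For example, a point at latitude $\phi = 52 \degrees$ north of the equator has colatitude $90 \degrees - 52 \degrees = 38 \degrees$; a point at latitude $52 \degrees$ south likewise has colatitude $38 \degrees$, measured from $S$ rather than from $N$.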
ProofWiki
How many whole numbers between 99 and 999 contain exactly one 0? Numbers with exactly one zero have the form $\_ 0 \_$ or $\_ \_ 0$, where the blanks are not zeros. There are $(9\cdot1\cdot9)+(9\cdot9\cdot1) = 81+81 = \boxed{162}$ such numbers.
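A quick brute-force check of this count in Python (an illustration only; it reads "between 99 and 999" as the integers from 100 to 998):

print(sum(1 for n in range(100, 999) if str(n).count('0') == 1))  # prints 162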
Math Dataset
\begin{document} \title{ {Quasiisometries of negatively curved homogeneous manifolds associated with Heisenberg groups}} \author{Xiangdong Xie\footnote{Partially supported by NSF grant DMS--1265735.}} \date{ } \maketitle \begin{abstract} We study quasiisometries between negatively curved homogeneous manifolds associated with diagonalizable derivations on Heisenberg algebras. We classify these manifolds up to quasiisometry, and show that all quasiisometries between such manifolds (except when they are complex hyperbolic spaces) are almost similarities. We prove these results by studying the quasisymmetric maps on the ideal boundary of these manifolds. \end{abstract} {\bf{Keywords.}} quasiisometry, quasisymmetric map, negatively curved homogeneous manifolds, Heisenberg groups. {\small {\bf{Mathematics Subject Classification (2010).}} 22E25, 30L10, 20F65. \setcounter{section}{0} \setcounter{subsection}{0} \section{Introduction}\label{s0} In this paper, we study quasiisometries between negatively curved homogeneous manifolds associated with Heisenberg groups. We establish quasiisometric rigidity and quasiisometric classification results for those manifolds associated with diagonalizable derivations. Let $H_n$ be the $n$-th Heisenberg group and $\mathcal{H}_n$ its Lie algebra. We shall identify $H_n$ and $\mathcal{H}_n$ via the exponential map $\exp: {\mathcal H}_n\rightarrow H_n$. Let $A: \mathcal{H}_n \rightarrow \mathcal{H}_n$ be a derivation, that is, $A$ is a linear map satisfying $A[X,Y]=[AX, Y]+[X, AY]$ for all $X, Y\in {\mathcal H}_n$. Define an action of $\mathbb R$ on $H_n$ by: $$t\cdot x =e^{tA} x\;\; \text{for}\;\; x\in {\mathcal H}_n={ H}_n, \; t\in \mathbb R.$$ Then one can form the semi-direct product $G_A=H_n\rtimes \mathbb R$. When the eigenvalues of $A$ have positive real parts, the group $G_A$ admits a left invariant Riemannian metric with negative sectional curvature \cite{H}. In the case when $A$ is the standard derivation with eigenvalues $1$ and $2$, the manifold $G_A$ is isometric to the complex hyperbolic space. Assume $A: \mathcal{H}_n \rightarrow \mathcal{H}_n$ is a diagonalizable derivation. Suppose $A$ has positive eigenvalues $0<\alpha_1<\cdots <\alpha_k<\alpha_{k+1}$. Let $U_i$ be the eigenspace associated with $\alpha_i$. Then we have ${\mathcal H}_n=U_1\oplus \cdots \oplus U_k\oplus U_{k+1}$. Every element $x\in {\mathcal H}_n$ can be written as $x=x_1+\cdots + x_k+x_{k+1}$ with $x_i\in U_i$. By the above discussion, the group $G_A$ has a left invariant Riemannian metric with negative sectional curvature. The ideal boundary $\partial G_A$ can be naturally identified with (the one point compactification of ) the Heisenberg group $H_n$. Fix a norm $|\cdot|$ on each $U_i$. The parabolic visual quasimetric $d_A$ on $H_n={\mathcal H}_n$ can be described as follows: $d_A(p, q)=||(-p)* q||_A$ for $p, q\in {\mathcal H}_n$, where the norm $||\cdot||_A$ on ${\mathcal H}_n$ is given by: $$||x_1+\cdots + x_k+x_{k+1}||_A= \sum_{i=1}^{k+1} |x_i|^{\frac{1}{\alpha_i}}.$$ Similarly let $B: \mathcal{H}_n \rightarrow \mathcal{H}_n$ be a diagonalizable derivation with positive eigenvalues $0<\beta_1<\cdots< \beta_l<\beta_{l+1}$. Let $W_i$ be the eigenspace of $\beta_i$. 
The parabolic visual quasimetric $d_B$ on $H_n={\mathcal H}_n$ is similarly defined: $d_B(p, q)=||(-p)* q||_B$ for $p, q\in {\mathcal H}_n$, where the norm $||\cdot||_B$ on ${\mathcal H}_n$ is given by: $$||y_1+\cdots + y_l+y_{l+1}||_B= \sum_{i=1}^{l+1} |y_i|^{\frac{1}{\beta_i}}.$$ A map $f: X\rightarrow Y$ between two quasimetric spaces is called an \emph{almost similarity} if there are constants $L\ge 1$ and $C\ge 0$ such that $L\cdot d(x_1, x_2)-C\le d(f(x_1), f(x_2))\le L\cdot d(x_1, x_2)+C$ for all $x_1, x_2\in X$ and $d(y, f(X))\le C$ for all $y\in Y$. \begin{Th}\label{main1} Let $A, B$ be diagonalizable derivations with positive eigenvalues, and $G_A$ and $G_B$ the associated groups. If $k\ge 2$, then every quasiisometry $f: G_A \rightarrow G_B$ is an almost similarity. \end{Th} \begin{Th}\label{main2} Let $A, B$ be diagonalizable derivations with positive eigenvalues, and $G_A$ and $G_B$ the associated groups. Then $ G_A$ and $ G_B$ are quasiisometric if and only if $k=l$, $\text{dim}(U_i)=\text{dim}(W_i)$, and there is some $\lambda>0$ such that $\alpha_i=\lambda \beta_i$ for $1\le i\le k$. \end{Th} Theorems \ref{main1} and \ref{main2} generalize the main results in \cite{SX} from the Euclidean group case to the Heisenberg group case. The general case for Euclidean groups were solved in \cite{X}. The general case for the Heisenberg groups remains open. The strategy of the proof is the same as in \cite{SX}, that is, we study quasisymmetric maps on the ideal boundary. In fact, we shall prove the following results for quasisymmetric maps. \begin{Th}\label{main3} Let $A, B$ be diagonalizable derivations with positive eigenvalues. If $k\ge 2$ and $F: (H_n, d_A) \rightarrow (H_n, d_B) $ is a quasisymmetry, then $F$ is biLipschitz from $(H_n, d_A) $ to $(H_n, d_B^{\frac{\beta_1}{\alpha_1}}) $. \end{Th} \begin{Th}\label{main4} Let $A, B$ be diagonalizable derivations with positive eigenvalues. Then $ (H_n, d_A)$ and $(H_n, d_B)$ are quasisymmetric if and only if $k=l$, $\text{dim}(U_i)=\text{dim}(W_i)$, and there is some $\lambda>0$ such that $\alpha_i=\lambda \beta_i$. \end{Th} The claims in Theorem \ref{main1} and Theorem \ref{main3} do not hold if $k=1$. In this case, the manifold $G_A$ is the complex hyperbolic space and $(H_n, d_A)$ is biLipschitz to the Heisenberg group with the Carnot metric. It is known that there are non-biLipschitz quasiconformal maps on the Heisenberg groups \cite{B}. Furthermore, the claims in Theorem \ref{main1} and Theorem \ref{main3} are equivalent, and so there are quasiisometries of the complex hyperbolic space that are not almost similarities. Our results concern the quasiisometric rigidity and quasiisometric classification of negatively curved solvable Lie groups $N\rtimes \mathbb R$. The first result in this area is Pansu's rigidity theorem \cite{P} for the quarternionic hyperbolic spaces and Cayley plane. The case $N=\mathbb R^n$ was solved in \cite{X}. In this paper we treat the case $N=H_n$, but only for diagonalizable derivations. In \cite{X1}, \cite{X2} and \cite{X3}, we proved the quasiisometric rigidity theorem for $N\rtimes \mathbb R$ for many Carnot groups $N$, where $\mathbb R$ acts on $N$ by the standard dilations on Carnot groups. All these results belong to the larger project of quasiisometric rigidity and quasiisometric classification of focal hyperbolic groups \cite{C}. In this context, Dymarz \cite{D} recently obtained results similar to our Theorem \ref{main1} and Theorem \ref{main3} for mixed type focal hyperbolic groups. 
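To make the quasimetric $d_A$ from the introduction concrete, the following Python sketch (our own illustration, not part of the paper) works on the first Heisenberg group $H_1=\mathbb R^3$ with the illustrative choice of exponents $\alpha_1=1$, $\alpha_2=2$, so that the center carries the weight $\alpha_1+\alpha_2=3$ and $k=2\ge 2$. It implements the group product $x*y=x+y+\frac12[x,y]$, the homogeneous norm $\|\cdot\|_A$ and $d_A(p,q)=\|(-p)*q\|_A$, and numerically checks left invariance, $d_A(g*p,g*q)=d_A(p,q)$, and the scaling property $d_A(e^{tA}p,e^{tA}q)=e^t\, d_A(p,q)$.

import numpy as np

ALPHA = (1.0, 2.0)  # assumed eigenvalues alpha_1, alpha_2; the center gets alpha_1 + alpha_2

def bch(x, y):
    # group product on H_1 = R^3: x*y = x + y + [x, y]/2, with [e1, e2] = e3
    bracket = np.array([0.0, 0.0, x[0] * y[1] - x[1] * y[0]])
    return np.asarray(x, dtype=float) + np.asarray(y, dtype=float) + 0.5 * bracket

def norm_A(x, alpha=ALPHA):
    a1, a2 = alpha
    return abs(x[0]) ** (1 / a1) + abs(x[1]) ** (1 / a2) + abs(x[2]) ** (1 / (a1 + a2))

def d_A(p, q, alpha=ALPHA):
    # parabolic visual quasimetric d_A(p, q) = ||(-p) * q||_A
    return norm_A(bch(-np.asarray(p, dtype=float), q), alpha)

def dilate(x, t, alpha=ALPHA):
    # the automorphism e^{tA} in these coordinates
    a1, a2 = alpha
    return np.array([np.exp(a1 * t) * x[0], np.exp(a2 * t) * x[1], np.exp((a1 + a2) * t) * x[2]])

p, q, g, t = np.array([1.0, 0.5, 2.0]), np.array([-0.3, 1.0, 0.0]), np.array([2.0, -1.0, 3.0]), 0.7
print(np.isclose(d_A(bch(g, p), bch(g, q)), d_A(p, q)))                    # left invariance
print(np.isclose(d_A(dilate(p, t), dilate(q, t)), np.exp(t) * d_A(p, q)))  # scaling by e^t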
In Section \ref{prelimi} we recall the definitions of various maps. In Section \ref{derivation} we study the structure of diagonalizable derivations on $\mathcal H_n$. In Section \ref{metriconb} we define the homogeneous manifolds associated with the Heisenberg groups, and then study the visual quasimetric on their ideal boundary. In Section \ref{foliation} we show that every quasisymmetric map preserves a foliation. In Section \ref{leaf} we show the restriction of a quasisymmetric map to a leaf is biLipschitz. Finally in Section \ref{proofs} we finish the proofs of the main theorems. \noindent {\bf{Acknowledgment}}. {This work was initiated while the author was attending the workshop \lq\lq Interactions between analysis and geometry" at IPAM, University of California at Los Angeles from March to June 2013. I would like to thank IPAM for financial support, excellent working conditions and conducive atmosphere.} \section{Some basic definitions}\label{prelimi} In this section we recall some basic definitions. Let $K\ge 1$ and $C>0$. A bijection $F:X\rightarrow Y$ between two quasimetric spaces is called a $(K,C)$-\emph{quasisimilarity} if \[ \frac{C}{K}\, d(x,y)\le d(F(x), F(y))\le C\,K\, d(x,y) \] for all $x,y \in X$. When $K=1$, we say $F$ is a \emph{similarity}. It is clear that a map is a quasisimilarity if and only if it is a biLipschitz map. The point of using the notion of quasisimilarity is that sometimes there is control on $K$ but not on $C$. Let $L\ge 1$ and $A\ge 0$. A map $f: X\rightarrow Y$ between two metric spaces is a $(L, A)$-quasiisometry if \newline (1) for all $ x_1, x_2\in X$: $$d(x_1, x_2)/L -A\le d(f(x_1), f(x_2))\le L\cdot d(x_1, x_2)+A;$$ (2) $d(y, f(X))\le A$ for all $y\in Y$. \newline The map $f$ is called a quasiisometry if it is a $(L, A)$-quasiisometry for some $L\ge 1$, $A\ge 0$. Let $\eta: [0,\infty)\rightarrow [0,\infty)$ be a homeomorphism. A bijection $F:X\to Y$ between two quasimetric spaces is \emph{$\eta$-quasisymmetric} if for all distinct triples $x,y,z\in X$, we have \[ \frac{d(F(x), F(y))}{d(F(x), F(z))}\le \eta\left(\frac{d(x,y)}{d(x,z)}\right). \] If $F: X\rightarrow Y$ is an $\eta$-quasisymmetry, then $F^{-1}: Y\rightarrow X$ is an $\eta_1$-quasisymmetry, where $\eta_1(t)=(\eta^{-1}(t^{-1}))^{-1}$. See \cite{V}, Theorem 6.3. A map $F: X\to Y$ is quasisymmetric if it is $\eta$-quasisymmetric for some $\eta$. Let $g: X_1\rightarrow X_2$ be a bijection between two quasimetric spaces such that for any $p\in X_1$, $d(x,p)\rightarrow 0$ if and only if $d(g(x), g(p))\rightarrow 0$. We define for every $x\in X_1$ and $r>0$, \begin{align*} L_g(x,r)&=\sup\{d(g(x), g(x')): d(x,x')\le r\},\\ l_g(x,r)&=\inf\{d(g(x), g(x')): d(x,x')\ge r\}, \end{align*} and set \[ L_g(x)=\limsup_{r\rightarrow 0}\frac{L_g(x,r)}{r}, \ \ l_g(x)=\liminf_{r\rightarrow 0}\frac{l_g(x,r)}{r}. \] Then \begin{equation}\label{e5} L_{g^{-1}}(g(x))=\frac{1}{l_g(x)} \ \text{ and }\ l_{g^{-1}}(g(x))=\frac{1}{L_g(x)} \end{equation} for any $x\in X_1$. If $g$ is an $\eta$-quasisymmetry, then $L_g(x,r)\le \eta(1)l_g(x, r)$ for all $x\in X_1$ and $r>0$. Hence if in addition \[ \lim_{r\rightarrow 0}\frac{L_g(x,r)}{r}\ \ {\text{or}} \ \ \lim_{r\rightarrow 0}\frac{l_g(x,r)}{r} \] exists, then \[ 0\le l_g(x)\le L_g(x)\le \eta(1) l_g(x)\le \infty. \] \section{Diagonalizable derivations on $\mathcal H_n$}\label{derivation} In this section we study the structure of diagonalizable derivations on $\mathcal H_n$. Let $A: \mathcal{H}_n\rightarrow \mathcal{H}_n$ be a diagonalizable derivation. 
Suppose $A$ has positive eigenvalues $0<\alpha_1< \cdots < \alpha_k<\alpha_{k+1}$. Let $U_i$ be the eigenspace of $\alpha_i$. Then we have ${\mathcal H}_n=U_1\oplus \cdots \oplus U_k\oplus U_{k+1}$ and $A(v)=\alpha_i v$ for $v\in U_i$. For $x\in {\mathcal H}_n$, we write $x=x_1+\cdots + x_k+x_{k+1}$ with $x_i\in U_i$. Fix a non-zero $e\in U_{k+1}$, and denote $m_i=\text{dim}(U_i)$. \begin{Le}\label{structure} The following hold:\newline (1) $m_{k+1}=1$ and $U_{k+1}=\mathcal Z(\mathcal H_n)$ is the center of ${\mathcal H}_n$;\newline (2) $[U_i, U_j]=0$ if $i+j\not=k+1$;\newline (3) $[U_i, U_{k+1-i}]\not=0$;\newline (4) $\alpha_i+\alpha_{k+1-i}=\alpha_{k+1}$ for all $1\le i\le k$;\newline (5) $m_i=m_{k+1-i}$ for $1\le i\le k$; \newline (6) for $i<(k+1)/2$, there exist a basis $e_1, \cdots, e_{m_i}$ for $U_i$ and a basis $\eta_1, \cdots, \eta_{m_i}$ for $U_{k+1-i}$ such that $[e_s, \eta_t]=\delta_{st} e$;\newline (7) if $i=k+1-i$, then $m_i=2 k_i$ is even, and there is a basis $e_1, \eta_1, \cdots, e_{k_i}, \eta_{k_i}$ of $U_i$ such that $[e_s, \eta_t]=\delta_{st} e$, $[e_s, e_t]=[\eta_s, \eta_t]=0$ for all $s,t$. \end{Le} \begin{proof} (1). Let $X\in U_{k+1}$ be arbitrary. Since $A$ is a derivation, for any $ i$ and any $Y\in U_i$, we have $A[X,Y]=[AX, Y]+[X, AY]=\alpha_{k+1}[X, Y]+\alpha_i[X, Y]=(\alpha_{k+1}+\alpha_i)[X, Y]$. As $\alpha_{k+1}$ is the largest eigenvalue of $A$ and $\alpha_i+\alpha_{k+1}>\alpha_{k+1}$, we must have $[X, Y]=0$. This implies that $U_{k+1}\subset \mathcal Z(\mathcal H_n)$. Since $U_{k+1}$ is non-trivial and $\mathcal Z(\mathcal H_n)$ has dimension $1$, they must agree. (2), (3) and (4). We claim that for each $1\le i\le k$, there exists a unique $j$ such that $[U_i, U_j]\not=0$. First of all, there exists at least one such $j$ since otherwise $[U_i, {\mathcal H}_n]=0$ and so $U_i\subset U_{k+1}$. Now suppose there exist $j_1\not=j_2$ such that $[U_i, U_{j_1}]\not=0$, $[U_i, U_{j_2}]\not=0$. Then there are $X_1, X_2\in U_i$ and $Y_1\in U_{j_1}$, $Y_2\in U_{j_2}$ such that $[X_1, Y_1]\not=0$ and $[X_2, Y_2]\not=0$. Since $[{\mathcal H}_n, {\mathcal H}_n]=U_{k+1}$, we have $[X_1, Y_1]=a_1 e$ and $[X_2, Y_2]=a_2 e$ for some $a_1, a_2\not=0$. Since $A$ is a derivation, we have $$A[X_1,Y_1]=[AX_1, Y_1]+[X_1, AY_1]=\alpha_i [X_1, Y_1]+\alpha_{j_1} [X_1, Y_1]=(\alpha_i+\alpha_{j_1})[X_1, Y_1],$$ which implies that $\alpha_i+\alpha_{j_1}=\alpha_{k+1}$. Similarly by considering $A[X_2, Y_2]$ we obtain $\alpha_i+\alpha_{j_2}=\alpha_{k+1}$. It follows that $\alpha_{j_1}=\alpha_{j_2}$, a contradiction. We shall denote by $j_i$ the unique $j$ such that $[U_i, U_j]\not=0$. The preceding paragraph shows that $\alpha_s+\alpha_{j_s}=\alpha_{k+1}$ for all $s$. Since $\alpha_1<\alpha_2<\cdots <\alpha_k$, we see that $s<t$ implies $j_s>j_t$. It is now easy to see that $j_i=k+1-i$. Hence (2), (3) and (4) hold. (5). Suppose $m_i>m_{k+1-i}$ for some $i$. Let $L(U_{k+1-i}, U_{k+1})$ be the vector space of linear maps from $U_{k+1-i}$ to $U_{k+1}$. Define a linear map $g: U_i\rightarrow L(U_{k+1-i}, U_{k+1})$ by $g(X)=\text{ad}(X)|_{U_{k+1-i}}$, where $ad(X): {\mathcal H}_n\rightarrow {\mathcal H}_n$ is given by $ad(X)Y=[X, Y]$ for $Y\in {\mathcal H}_n$. Since $U_{k+1}$ is $1$-dimensional and $\text{dim}(U_i)>\text{dim}(U_{k+1-i})$, the kernel of $g$ is non-trivial. Hence there is some $X\in U_i\backslash\{0\}$ such that $[X, U_{k+1-i}]=0$. Now it follows from (2) that $[X, {\mathcal H}_n]=0$, which is impossible in ${\mathcal H}_n$. Therefore $m_i=m_{k+1-i}$ for all $i$. (6). 
Let $e_1\in U_i$ be a nonzero vector. Then there is some $\eta_1\in U_{k+1-i}$ such that $[e_1, \eta_1]\not=0$. After multiplying $\eta_1$ by a nonzero constant, we may assume $[e_1, \eta_1]=e$. Now let $e_2\in U_i\cap \text{ker}(\text{ad} (\eta_1))$ be a nonzero vector and as above pick $\eta_2\in U_{k+1-i}\cap \text{ker}(\text{ad} (e_1))$ such that $[e_2, \eta_2]=e$. Inductively we pick $$e_s\in U_i\cap \text{ker}(\text{ad} (\eta_1))\cap\cdots \cap \text{ker}(\text{ad} (\eta_{s-1}))$$ and $$\eta_s\in U_{k+1-i}\cap \text{ker}(\text{ad} (e_1))\cap\cdots \cap \text{ker}(\text{ad} (e_{s-1}))$$ such that $[e_s, \eta_s]=e$. By the choice we have $[e_s, \eta_t]=\delta_{st} \, e$ for all $s, t$. (7). The proof is similar to that of (6). First pick $e_1, \eta_1\in U_i$ such that $[e_1, \eta_1]=e$. Then inductively pick $$e_s, \eta_s\in U_i\cap \bigcap_{t=1}^{s-1} \left(\text{ker}(\text{ad} (e_t))\cap \text{ker}(\text{ad} (\eta_{t}))\right)$$ such that $[e_s, \eta_s]=e$. In this way we get a basis satisfying all the conditions in (7). \end{proof} \section{Quasimetric on the ideal boundary}\label{metriconb} The goal of this Section is to show that the quasimetric $d_A$ defined in the Introduction is biLipschitz equivalent with a metric when the smallest eigenvalue of $A$ is at least $1$, see Lemma \ref{bilip2d}. This result will be needed in Section \ref{foliation} for the application of Tyson's theorem (Theorem 1.4, \cite{T}): Tyson's theorem does not apply to general quasimetric spaces. Let $H_n$ be the $n$-th Heisenberg group and $\mathcal{H}_n$ its Lie algebra. If we identify $\mathcal{H}_n$ with $\mathbb R^{2n}\times \mathbb R=\mathbb R ^{2n+1}$, and if $e_i$, $1\le i\le 2n+1$ denote the standard basis of $\mathbb R^{2n+1}$, then the only non-trivial Lie bracket relations are $[e_{2i-1}, e_{2i}]=e_{2n+1}$, $1\le i\le n$. We shall identify $H_n$ and $\mathcal{H}_n$ via the exponential map and the group operation on $H_n$ shall be given by the BCH formula: $$X*Y=X+Y+\frac{1}{2}[X, Y]\;\;\; {\text for}\; \;X, Y\in {\mathcal H}_n.$$ Let $A: \mathcal{H}_n\rightarrow \mathcal{H}_n$ be a derivation, i.e., a linear map such that $$A[X, Y]=[AX, Y]+[X, AY]$$ for all $X, Y\in \mathcal{H}_n$. Then one can define an action of $\mathbb R$ on $H_n$: $$\mathbb R\times H_n \rightarrow H_n$$ $$(t, x)\rightarrow e^{tA} x.$$ We denote the corresponding semi-direct product by $G_A=H_n\rtimes_A \mathbb R$. Then $G_A$ is a solvable Lie group. Recall that the group operation in $G_A$ is given by: $$(g, t_1)\cdot (h, t_2)=(g* e^{t_1 A} h, \; t_1 + t_2).$$ By Heintze's result (\cite{H}), if the eigenvalues of $A$ have positive real parts, then there is a left invariant Riemannian metric on $G_A$ with negative sectional curvature. Since any two left invariant Riemannian metrics are biLipschitz equivalent, $G_A$ is Gromov hyperbolic with any left invariant Riemannian metric. In this paper we only consider the case when $A$ is diagonalizable. Let $A: \mathcal{H}_n \rightarrow \mathcal{H}_n$ be a diagonalizable derivation with positive eigenvalues. Denote by $0<\alpha_1< \cdots < \alpha_k<\alpha_{k+1}$ the eigenvalues of $A$, and $U_i$ the eigenspace of $\alpha_i$. Then $U_{k+1}=\mathcal Z(\mathcal H_n)$ and ${\mathcal H}_n=U_1\oplus \cdots \oplus U_k\oplus U_{k+1}$. Fix a nonzero $e\in U_{k+1}$. We choose a vector space basis $\mathcal B_i$ of $U_i$ for $1\le i\le n$ supplied by Lemma \ref{structure} (6), (7). Let $\mathcal B=\cup_{i=1}^k \mathcal B_i\cup \{e\}$. 
On $T_e G_A$ we choose the inner product such that $\mathcal B\cup \{\frac{\partial}{\partial t}\}$ is orthonormal. Let $g$ be the associated left invariant Riemannian metric on $G_A$. For each $g\in H_n$, the map $\gamma_g: \mathbb R\rightarrow G_A$, $\gamma_g(t)=(g,t)$ is a geodesic. We call such a geodesic a vertical geodesic. It can be checked that all vertical geodesics are asymptotic as $t\rightarrow +\infty$. Hence they define a point $\xi_0$ in the ideal boundary $\partial G_A$. The sets $H_n\times\{t\}$ ($t\in \mathbb R$) are horospheres centered at $\xi_0$, and $b: G_A\rightarrow \mathbb R$, $b(x,t)=t$ is a Busemann function associated to $\xi_0$. Each geodesic ray in $G_A$ is asymptotic to either an upward oriented vertical geodesic or a downward oriented vertical geodesic. The upward oriented vertical geodesics are asymptotic to $\xi_0$ and the downward oriented vertical geodesics are in 1-to-1 correspondence with $H_n$. Hence $\partial G_A\backslash\{\xi_0\}$ can be naturally identified with $H_n$. On $T_e H_n=\mathcal H_n$ we fix the inner product such that $\mathcal B$ is orthonormal. Let $|\cdot|$ be the norm on $\mathcal H_n=(U_1\oplus \cdots \oplus U_k)\oplus U_{k+1}$ induced by this inner product. For $x=x_1+\cdots +x_k+x_{k+1}$ with $x_i\in U_i$, define $$||x||_A=\sum_{i=1}^{k+1}|x_i|^{\frac{1}{\alpha_i}}$$ and $$||x||=|x_{k+1}|^{\frac{1}{2}}+\sum_{i=1}^{k}|x_i|.$$ For any $p,q \in \mathcal H_n$, let $d_A(p,q)=||(-p)*q||_A$ and $d_0(p,q)=||(-p)*q||$. Notice that $f(x)=a^x$ is a non-increasing function if $0\le a\le 1$. Suppose the smallest eigenvalue $\alpha_1$ of $A$ satisfies $\alpha_1\ge 1$. Then $||x||_A\ge ||x||$ whenever $||x||_A\le 1$. Hence $d_A(p,q)\ge d_0(p,q)$ whenever $d_A(p,q)\le 1$. Consider the grading $\mathcal H_n=V_A\oplus \mathcal Z(\mathcal H_n)$, where $V_A=U_1\oplus \cdots\oplus U_k$. For $t\in \mathbb R$, let $\delta_t: \mathcal H_n \rightarrow \mathcal H_n$ be the usual dilation on Heisenberg groups given by $\delta_t(v_1)=e^t v_1$ for $v_1\in V_A$ and $\delta_t(v_2)=e^{2t} v_2$ for $v_2\in \mathcal Z(\mathcal H_n)$. It is well known that $d_0$ is biLipschitz equivalent with every Carnot metric on $H_n$ associated with the above grading. We fix such a Carnot metric $d_C$ on $H_n$. Then there is a constant $L\ge 1$ such that $$\frac{1}{L}\cdot d_0(p,q)\le d_C(p,q)\le L\cdot d_0(p,q)$$ for any $p,q\in H_n$. It follows from the definition of $||\cdot||_A$ and $d_A$ that \begin{equation}\label{e1} d_A(g*x, g*y)=d_A(x,y)\;\;\; \text{for all}\;\;\; x,y, g\in H_n \end{equation} and $||e^{tA}x||_A=e^t\cdot ||x||_A$ for all $x\in H_n$ and all $t\in \mathbb R$. Since $e^{tA}$ is an automorphism of $H_n$, we have \begin{equation}\label{e2} d_A(e^{tA}x, e^{tA}y)=e^t\cdot d_A(x,y)\;\;\; \text{for all}\;\;\; x,y\in H_n \;\;\;\text{and all} \;\;\;t\in \mathbb R. \end{equation} \begin{Le}\label{bilip2d} If the smallest eigenvalue $\alpha_1$ of $A$ satisfies $\alpha_1\ge 1$, then the quasimetric $d_A$ is biLipschitz equivalent with a metric on $H_n$. \end{Le} \begin{proof} For any two points $p,q\in H_n$, define $$\tilde d_A(p,q)=\inf\{\sum_{i=1}^m d_A(p_{i-1}, p_i): m\ge 1 \;\;\text{and}\;\; p_i\in H_n\;\; \text{with}\;\; p_0=p, \; p_m=q\}.$$ We observe that $\tilde d_A$ also satisfies (\ref{e1}) and (\ref{e2}). Let $S=\{q\in H_n: d_A(0, q)=1\}$ be the unit sphere with respect to $d_A$. 
\newline {\bf{Claim:}} if $\alpha_1\ge 1$, then there is some $c$ satisfying $0<c<\frac{1}{2}$ such that for all $q\in S$: $$c \le \tilde d_A(0,q)\le 1.$$ Since both $d_A$ and $\tilde d_A$ satisfy (\ref{e1}) and (\ref{e2}), the claim implies $c\cdot d_A(p,q) \le \tilde d_A(p,q)\le d_A(p,q)$ for all $p,q\in H_n$. It follows that $\tilde d_A$ is a metric on $H_n$ and that $d_A$ is biLipschitz equivalent with the metric $\tilde d_A$. We next prove the claim. Clearly we have $\tilde d_A(p,q)\le d_A(p,q)$ for all $p,q\in H_n$. So we only need to prove the first inequality. If $\tilde d_A(0,q)\ge \frac{1}{2}$, then we are done. Now assume $\tilde d_A(0,q)<\frac{1}{2}$. Let $p_0, p_1, \cdots, p_m$ be a finite sequence of points in $H_n$ such that $p_0=0$, $p_m=q$ and $\sum_{i=1}^m d_A(p_{i-1}, p_i)<\frac{1}{2}$. Then $d_A(p_{i-1}, p_i)<\frac{1}{2}$ for each $i$. Since $\alpha_1\ge 1$, we have $d_A(p_{i-1}, p_i)\ge d_0(p_{i-1}, p_i)$. It follows that $$\sum_{i=1}^m d_A(p_{i-1}, p_i)\ge \sum_{i=1}^m d_0(p_{i-1}, p_i)\ge \frac{1}{L} \sum_{i=1}^m d_C(p_{i-1}, p_i)\ge \frac{1}{L} d_C(0, q)\ge \frac{a}{L}, $$ where $a=\min\{d_C(0, q): q\in S\}$. Since $d_C(0, \cdot)$ is continuous and $S$ is a compact subset in $H_n$ disjoint from $0$, we must have $a>0$. Now it is clear that the claim holds for $c=\min\{1/2, a/L\}$. \end{proof} \section{Quasisymmetric maps preserve a foliation}\label{foliation} In this Section we show that quasisymmetric maps $F: (H_n, d_A)\rightarrow (H_n, d_B)$ send a foliation on $ (H_n, d_A)$ to a foliation on $ (H_n, d_B)$. Let $A: \mathcal{H}_n \rightarrow \mathcal{H}_n$ be a diagonalizable derivation with positive eigenvalues $0<\alpha_1< \cdots < \alpha_k<\alpha_{k+1}$. We will use the notation from the previous section. In particular, $\mathcal B$ is the basis of $\mathcal H_n$ constructed in the last section. Let $m$ be the Lebesgue measure on ${\mathcal H}_n$ with respect to this basis. Then $m$ is invariant under left translations, as the Jacobian matrices of the left translations with respect to the basis have determinant $1$. Furthermore, the automorphism $e^{At}$ has matrix representation given by a block diagonal matrix $[e^{\alpha_1 t}I_{m_1}, \cdots, e^{\alpha_k t}I_{m_k}, e^{(\alpha_1+\alpha_k)t}I_1]$, where $I_m$ is the $m\times m$ identity matrix. Lemma \ref{structure} (4), (5) imply that the determinant of $e^{At}$ equals $e^{t(n+1)(\alpha_1+\alpha_k)}$. Hence, for any metric ball $B(x, r)$ in $(H_n, d_A)$ with radius $r=e^t$, we have $m(B(x, r))=m(B(o, e^t))=m(e^{At}B(o, 1))=r^{(n+1)(\alpha_1+\alpha_k)}\cdot m(B(o,1))$. In particular, $m$ is Ahlfors $Q$-regular with $Q=(n+1)(\alpha_1+\alpha_k)$. Set $V_A=U_1\oplus \cdots \oplus U_k$. Now define a quasimetric $D_A$ on $V_A$ by: $$D_A(x_1+\cdots +x_k, \, y_1+\cdots +y_k)=\sum_{i=1}^k |x_i-y_i|^{\frac{1}{\alpha_i}}.$$ Let $\pi: (\mathcal{H}_n, d_A) \rightarrow (V_A, D_A) $ be the natural projection given by $\pi(x+ z)=x$ for $x\in V_A$, $z\in \mathcal Z(\mathcal H_n)$. We observe that $\pi$ is a $1$-Lipschitz map. When $k\ge 2$, Lemma \ref{structure} (2) implies $[U_1, U_1]=0$. So $U_1$ is a Lie subalgebra of ${\mathcal H}_n$. We will abuse notation and also use $U_1$ to denote the connected Lie subgroup of $H_n$ with Lie algebra $U_1$. \begin{Le}\label{rec} Suppose $k\ge 2$ and $\alpha_1=1$. Then every rectifiable curve in $(H_n, d_A)$ is contained in some left coset of $U_1$. \end{Le} \begin{proof} Let $\gamma: [0,1]\rightarrow (H_n, d_A)$ be a rectifiable curve.
Since $\pi: (H_n, d_A) \rightarrow (V_A, D_A)$ is $1$-Lipschitz, the curve $\pi\circ \gamma$ is a rectifiable curve in $(V_A, D_A)$. Since $\alpha_i>1$ for each $i\ge 2$, there is no non-trivial rectifiable curve in $(U_i,\, |\cdot|^{\frac{1}{\alpha_i}})$ for $i\ge 2$. Hence there are $x_i\in U_i$ for each $i\ge 2$ such that $\pi\circ \gamma$ lies in $U_1\times \{x_2\}\times \cdots \times \{x_k\}$ and so $\gamma$ lies in $U_1\times \{x_2\}\times \cdots \times \{x_k\}\oplus \mathcal Z({\mathcal H}_n)$. Define a metric $D'$ on $U_1\times \mathcal Z({\mathcal H}_n)$ as follows: $$D'((x_1, z), (x'_1, z'))=|x'_1-x_1|+|z'-z|^{\frac{1}{1+\alpha_k}}.$$ Let $$f: (U_1\times \{x_2\}\times \cdots \times \{x_k\}\oplus \mathcal Z({\mathcal H}_n), d_A) \rightarrow (U_1\times \mathcal Z({\mathcal H}_n), D') $$ be defined by $f(x_1+ x_2+ \cdots+ x_k +x_{k+1})=(x_1, x_{k+1}+\frac{1}{2}[x_1, x_k])$. Then it is easy to check that $f$ is an isometry. In $ (U_1\times \mathcal Z({\mathcal H}_n), D') $ the rectifiable curves lie in subsets of the form $U_1\times \{p\}$ with $p\in \mathcal Z({\mathcal H}_n)$. It follows that the only rectifiable curves in $(H_n, d_A)$ lie in subsets of the form $$\left\{(x_1+ x_2+ \cdots+ x_k+ (p-\frac{1}{2}[x_1, x_k]))| x_1\in U_1\right\}=(x_2+\cdots +x_k+ p)* U_1, $$ where $x_i\in U_i$, $2\le i\le k$ and $p\in \mathcal Z({\mathcal H}_n)$ are fixed. These subsets are exactly the left cosets of $U_1$. \end{proof} Now let $B: \mathcal{H}_n\rightarrow \mathcal{H}_n$ be another diagonalizable derivation with positive eigenvalues $0<\beta_1< \beta_2<\cdots < \beta_l<\beta_{l+1}$. Let $W_j$ be the eigenspace of $\beta_j$. Then we have $W_{l+1}=\mathcal Z({\mathcal H}_n)$ and ${\mathcal H}_n=V_B\oplus W_{l+1}$, where $V_B= W_1\oplus \cdots \oplus W_l$. As in the case of $||\cdot||_A$ and $d_A$, we fix a basis for $\mathcal H_n$ supplied by Lemma \ref{structure} and define norm $||\cdot||_B$ and quasimetric $d_B$ on $\mathcal H_n$. For $y\in {\mathcal H}_n$, we write $y=y_1+\cdots + y_l+y_{l+1}$ with $y_j\in W_j$. Then $$||y_1+\cdots +y_l+y_{l+1}||_B= \sum_{j=1}^{l+1} |y_j|^{\frac{1}{\beta_j}}.$$ The visual quasimetric $d_B$ on $H_n={\mathcal H}_n$ is given by: $d_B(p, q)=||(-p)*q||_B$. \begin{Prop}\label{prefo} Let $F: (H_n, d_A)\rightarrow (H_n, d_B)$ be an $\eta$-quasisymmetric map for some $\eta$. Suppose $k\ge 2$. Then $l\ge 2$, $\text{dim}(U_1)=\text{dim}(W_1)$ and $F$ maps left cosets of $U_1$ to left cosets of $W_1$. \end{Prop} \begin{proof} By replacing $d_A$ and $d_B$ with suitable powers, we may assume $\alpha_1+\alpha_k=\beta_1+\beta_l$ and $\min\{\alpha_1, \beta_1\}=1$. Then $(H_n, d_A)$ and $(H_n, d_B)$ have the same Hausdorff dimension $Q=(n+1)(\alpha_1+\alpha_k)$. By considering $F^{-1}$ instead of $F$ if necessary we may assume $\alpha_1=1$. So $\beta_1\ge 1$. By Lemma \ref{bilip2d}, $(H_n, d_A)$ and $(H_n, d_B)$ are biLipschitz equivalent to metric spaces. We claim that $\beta_1=1$. Suppose $\beta_1>1$. Then Lemma \ref{rec} and its proof show that there is no non-trivial rectifiable curve in $(H_n, d_B)$. In particular, every curve family in $(H_n, d_B)$ has $Q$-modulus $0$. On the other hand, fix a nonzero vector $v\in U_1$. Since $\alpha_1=1$, the definition of $d_A$ implies that the left translates of the segment $\sigma:=\{tv: t\in [0,1]\}$ are rectifiable. Let $U\subset {\mathcal H}_n$ be a hyperplane transversal to the direction $v$. By a classical calculation, the family of curves $\Gamma:=\{g\cdot \sigma: g\in U\}$ has positive $Q$-modulus. 
Since $F$ is quasisymmetric and $(H_n, d_A)$, $(H_n, d_B)$ have the same Hausdorff dimension $Q>1$, by Tyson's theorem (Theorem 1.4 in \cite{T}), $F(\Gamma)$ also has positive $Q$-modulus, contradicting the above observation. Hence $\beta_1=1$. Then we also have $\alpha_k=\beta_l$. We remark that Tyson's theorem holds only for metric spaces. Since the quasimetric spaces $(H_n, d_A)$ and $(H_n, d_B)$ are biLipschitz equivalent with metric spaces, we can still apply Tyson's theorem. We next claim that for any left translate $g\cdot \sigma$ of the segment $\sigma$ as above, the image $F(g\cdot \sigma)$ lies in a left coset of $W_1$. Since any two points in a left coset of $U_1$ can be joined by a segment of the form $g\cdot \sigma$, the claim implies that $F$ maps every left coset of $U_1$ into a left coset of $W_1$. The same argument applied to $F^{-1}$ shows $F^{-1}$ maps left cosets of $W_1$ into left cosets of $U_1$. Hence the image of a left coset of $U_1$ under $F$ is a left coset of $W_1$. Next we prove the claim. Suppose $F(g* \sigma)$ is not contained in any left coset of $W_1$. By continuity of $F$, there is an open subset $U$ containing $g$ such that for any $g'\in U$, the image $F(g'* \sigma)$ also does not lie in any left coset of $W_1$. By Lemma \ref{rec}, $F(g'* \sigma)$ is not rectifiable. So the $Q$-modulus of the curve family $F(\Gamma)$ is $0$, where $\Gamma=\{g'* \sigma: g'\in U\}$. On the other hand, as indicated above, the $Q$-modulus of $\Gamma$ is positive, contradicting Tyson's theorem. Hence the claim holds. \end{proof} \section{Restriction to a leaf}\label{leaf} In this Section we show that the restriction of a quasisymmetric map $F: (H_n, d_A)\rightarrow (H_n, d_B)$ to a left coset of $U_1$ is a quasisimilarity. For the rest of this Section, let $A, B: \mathcal{H}_n \rightarrow \mathcal{H}_n$ be diagonalizable derivations with positive eigenvalues. Denote by $0<\alpha_1< \cdots < \alpha_k<\alpha_{k+1}$ the eigenvalues of $A$, and $U_i$ the eigenspace of $\alpha_i$. Then we have $U_{k+1}=\mathcal Z({\mathcal H}_n)$ and ${\mathcal H}_n=U_1\oplus \cdots \oplus U_k\oplus \mathcal Z({\mathcal H}_n)$. Similarly let $0<\beta_1< \beta_2<\cdots < \beta_l<\beta_{l+1}$ be the eigenvalues of $B$, $W_j$ be the eigenspace of $\beta_j$. Then we have $W_{l+1}=\mathcal Z({\mathcal H}_n)$ and ${\mathcal H}_n=W_1\oplus \cdots \oplus W_l\oplus \mathcal Z({\mathcal H}_n)$. Without loss of generality, we may assume $\alpha_1=\beta_1=1$. Fix a non-zero $e\in \mathcal Z({\mathcal H}_n)$. We choose norms on $\mathcal H_n$ and define quasimetrics $d_A$ and $d_B$ on $H_n$ as in Section \ref{metriconb}. In particular, we have $|e|=1$. Note that the Cauchy-Schwarz inequality implies $|[x_1, x_k]|\le |x_1|\cdot |x_k|$ for any $x_1\in U_1$, $x_k\in U_k$. Now let $F: (H_n, d_A)\rightarrow (H_n, d_B)$ be an $\eta$-quasisymmetric map for some $\eta$. By Proposition \ref{prefo}, $F$ maps left cosets of $U_1$ to left cosets of $W_1$. Recall that $F^{-1}$ is $\eta_1$-quasisymmetric, where $\eta_1(t)=(\eta^{-1}(t^{-1}))^{-1}$. Without loss of generality we may assume $\eta(1)\ge 1$. Then we also have $\eta_1(1)\ge 1$. The proof of the following Lemma is similar to that of Lemma 5.1 in \cite{X1}, but the calculations are different. \begin{Le}\label{key} Let $L$ be a left coset of $U_1$ and denote $L'=F(L)$. Suppose $p,q\in L$ are such that $l_F(p)> C_1\cdot L_F(q)$ with $C_1=102\cdot \eta_1(1)$.
Let $c:\mathbb R\rightarrow L$ be the parametrization of the line through $p$, $q$ such that $c(0)=p$ and $c(1)=q$. Then $$ \text{L}_F(c(\lambda))\le 2{(\eta_1(1))^2}\left(\frac{2}{|\lambda|}\right)^{\frac{1}{1+\alpha_k}}\cdot \text{L}_F(q)$$ for all $|\lambda|\ge 1$; in particular, $L_F(c(\lambda))\le 3{(\eta_1(1))^2} L_F(q)$. \end{Le} \begin{proof} Denote $p'=F(p)$ and $q'=F(q)$. The assumption implies $l_{F^{-1}}(q')> C_1\cdot L_{F^{-1}}(p')$. Let $\{r_j\}$ be an arbitrary sequence of positive reals such that $r_j\rightarrow 0$. Then $$\liminf_{j\rightarrow \infty}\frac{l_{F^{-1}}(q', r_j)}{r_j}> C_1 \cdot \limsup_{j\rightarrow \infty}\frac{L_{F^{-1}}(p', r_j)}{r_j}.$$ We shall look at the image of the left coset $r^{1+\beta_l}_j e+L'$ of $W_1$ under $F^{-1}$. Recall $e\in \mathcal Z({\mathcal H}_n)$ is a fixed element with $|e|=1$. By Proposition \ref{prefo}, $L_j:=F^{-1}(r^{1+\beta_l}_j e+L')$ is a left coset of $U_1$. Denote $p'_j=r^{1+\beta_l}_j e+p'$ and $q'_j=r^{1+\beta_l}_j e+q'$. Notice that $d_B(p', p'_j)=r_j$ and $d_B(q', q'_j)=r_j$. So we have $$\frac{d_A(q, F^{-1}(q'_j))}{r_j}\ge \frac{l_{F^{-1}}(q', r_j)}{r_j}$$ and $$\frac{d_A(p, F^{-1}(p'_j))}{r_j}\le \frac{L_{F^{-1}}(p', r_j)}{r_j}.$$ Let $p_j, q_j\in L_j$ be points on $L_j$ nearest to $p$ and $q$, respectively. Notice that $p'_j$ is the point on $r^{1+\beta_l}_j e+L'$ nearest to $p'$. Hence $d_B(p', p'_j)\le d_B(p', F(p_j))$. Since $F^{-1}$ is $\eta_1$-quasisymmetric, we have $d_A(p, F^{-1}(p'_j))\le \eta_1(1) d_A(p, p_j)$. Similarly we have $d_A(q, F^{-1}(q'_j))\le \eta_1(1) d_A(q, q_j)$. It follows that \begin{equation}\label{e9} \frac{d_A(q, q_j)}{r_j}\ge \frac{1}{\eta_1(1)}\cdot \frac{d_A(q, F^{-1}(q'_j))}{r_j}\ge \frac{1}{\eta_1(1)}\cdot \frac{l_{F^{-1}}(q', r_j)}{r_j} \end{equation} and $$\frac{d_A(p, p_j)}{r_j}\le \frac{d_A(p, F^{-1}(p'_j))}{r_j}\le \frac{L_{F^{-1}}(p', r_j)}{r_j}.$$ Therefore \begin{align*} \liminf_{j\rightarrow \infty} \frac{d_A(q, q_j)}{r_j}\ge \liminf_{j\rightarrow \infty} \frac{1}{\eta_1(1)}\cdot \frac{l_{F^{-1}}(q', r_j)}{r_j} &\ge \frac{C_1}{\eta_1(1)}\cdot \limsup_{j\rightarrow \infty} \frac{L_{F^{-1}}(p', r_j)}{r_j}\\ & \ge {102}\cdot \limsup_{j\rightarrow \infty} \frac{d_A(p, p_j)}{r_j}. \end{align*} Hence $$\frac{d_A(p, p_j)}{d_A(q, q_j)}\le \frac{1}{101}$$ for all sufficiently large $j$. Next we shall look at $d_A(p, p_j)$ and $d_A(q, q_j)$. Notice that $L=q*U_1=q*\{t: t\in U_1\}$. Write $q_j=q*(\tilde{x}_1+\tilde{x}_2+\cdots +\tilde{x}_k+ \tilde{z})$ with $\tilde{x}_i\in U_i$ and $\tilde{z}\in \mathcal Z({\mathcal H}_n)$. Although the $\tilde{x}_i$'s and $\tilde{z}$ depend on $r_j$, we shall suppress the dependence to simplify the notation. Then $L_j=q_j*U_1=q_j*\{t: t\in U_1\}$. An arbitrary point on $L_j$ has the form $$q_j*t'=q*(\tilde{x}_1+\tilde{x}_2+\cdots +\tilde{x}_k+ \tilde{z})*t' =q*\left((t'+\tilde{x}_1)+\tilde{x}_2+\cdots +\tilde{x}_k+(\tilde{z}+\frac{1}{2}[\tilde{x}_k, t'])\right).$$ Since $q_j$ is a point on $L_j$ nearest to $q$, we see that \begin{align*} d_A(q, \, q_j*t') &=||(-q)*q_j*t'||_A \\ & =\left|\left|(t'+\tilde{x}_1)+\tilde{x}_2+\cdots +\tilde{x}_k+ (\tilde{z}+\frac{1}{2}[\tilde{x}_k, t'])\right|\right|_A\\ & =|t'+\tilde{x}_1|+ |\tilde{x}_2|^{\frac{1}{\alpha_2}}+\cdots + |\tilde{x}_k|^{\frac{1}{\alpha_k}}+ \left|\tilde{z}+\frac{1}{2}[\tilde{x}_k, t']\right|^{\frac{1}{1+\alpha_k}} \end{align*} achieves its minimum when $t'=0$. Now write $p=q*t_0$ and $p_j=q_j*t_j\in L_j$ for some $t_0, t_j\in U_1$.
Then we have \begin{align*} (-p)*p_j & =(-t_0)*(\tilde{x}_1+\tilde{x}_2+\cdots +\tilde{x}_k+ \tilde{z})*t_j\\ & =(\tilde{x}_1+t_j-t_0)+\tilde{x}_2+\cdots +\tilde{x}_k+ (\tilde{z}+\frac{1}{2}[\tilde{x}_k, t_0+t_j]). \end{align*} Hence $$d_A(p, p_j)=|\tilde{x}_1+t_j-t_0|+|\tilde{x}_2|^{\frac{1}{\alpha_2}}+\cdots + |{\tilde x}_k|^{\frac{1}{\alpha_k}}+ \left|\tilde{z}+\frac{1}{2}[\tilde{x}_k, t_0+t_j]\right|^{\frac{1}{1+\alpha_k}}.$$ To simplify notation set $a=|\tilde{x}_2|^{\frac{1}{\alpha_2}}+\cdots + |\tilde{x}_k|^{\frac{1}{\alpha_k}}$. We have $$d_A(q, q_j)=|\tilde{x}_1|+a+|\tilde{z}|^{\frac{1}{1+\alpha_k}}$$ and $$d_A(p, p_j)=|\tilde{x}_1+t_j-t_0|+a+\left|\tilde{z}+\frac{1}{2}[\tilde{x}_k, t_0+t_j]\right|^{\frac{1}{1+\alpha_k}}.$$ Since $\frac{d_A(p, p_j)}{d_A(q, q_j)}\le \frac{1}{101}$, we have $100a\le |\tilde{x}_1|+|\tilde{z}|^{\frac{1}{1+\alpha_k}}$. \noindent {\bf{Claim.}} $5|\tilde{x}_1|\le |\tilde{z}|^{\frac{1}{1+\alpha_k}}$. Suppose the contrary. Then $|\tilde{z}|^{\frac{1}{1+\alpha_k}}< 5 |\tilde{x}_1|$. Now $$|\tilde{x}_k|^{\frac{1}{\alpha_k}}\le a\le \frac{|\tilde{x}_1|+|\tilde{z}|^{\frac{1}{1+\alpha_k}}}{100}\le \frac{(5+1)|\tilde{x}_1|}{100}\le \frac{1}{10}|\tilde{x}_1|, $$ so $$|\tilde{x}_k|\le \frac{1}{10^{\alpha_k}}|\tilde{x}_1|^{\alpha_k}.$$ It follows that $|[\tilde{x}_k, \tilde{x}_1]|\le \frac{1}{10^{\alpha_k}}|\tilde{x}_1|^{1+\alpha_k}.$ Now \begin{align*} d_A(q,\, q_j*(-\tilde{x}_1))& =a+\left|\tilde{z}-\frac{1}{2}[\tilde{x}_k, \tilde{x}_1]\right|^{\frac{1}{1+\alpha_k}}\\ & \le a+|\tilde{z}|^{\frac{1}{1+\alpha_k}} + \left(\frac{1}{2}|[\tilde{x}_k, \tilde{x}_1]|\right)^{\frac{1}{1+\alpha_k}}\\ & \le a+|\tilde{z}|^{\frac{1}{1+\alpha_k}} +\left(\frac{1}{2\cdot 10^{\alpha_k}}\right)^{\frac{1}{1+\alpha_k}} |\tilde{x}_1|\\ & < d_A(q, q_j), \end{align*} contradicting the fact that $q_j$ is a point on $L_j$ nearest to $q$. Hence the claim holds. The above claim together with the estimate on $a$ implies \begin{equation}\label{e10} d_A(q, q_j)\le2 |\tilde{z}|^{\frac{1}{1+\alpha_k}}. \end{equation} So \begin{equation}\label{e11} |\tilde{x}_k|^{\frac{1}{\alpha_k}}\le a \le d_A(p, p_j)\le \frac{1}{101 } d_A(q, q_j) \le \frac{2}{101 } |\tilde{z}|^{\frac{1}{1+\alpha_k}}. \end{equation} Now let $u=\tilde{x}_1+t_j-t_0$. Then $|u|\le d_A(p, p_j)\le \frac{2}{101 } |\tilde{z}|^{\frac{1}{1+\alpha_k}}$. It follows that $|[\tilde{x}_k, u]|\le \left(\frac{2}{101}\right)^{1+\alpha_k} |\tilde{z}| $. Similarly \begin{equation}\label{e12} |[\tilde{x}_k, \tilde{x}_1]|\le \frac{1}{5}\cdot \left(\frac{2}{101}\right)^{\alpha_k} |\tilde{z}|. \end{equation} On the other hand, $$\left|\tilde{z}+[\tilde{x}_k, t_0]+\frac{1}{2}[\tilde{x}_k, u]- \frac{1}{2}[\tilde{x}_k, \tilde{x}_1]\right|^{\frac{1}{1+\alpha_k}} =\left|\tilde{z}+\frac{1}{2}[\tilde{x}_k, t_0+t_j]\right|^{\frac{1}{1+\alpha_k}}\le d_A(p, p_j)\le \frac{2}{101 } |\tilde{z}|^{\frac{1}{1+\alpha_k}} .$$ Now the triangle inequality implies \begin{equation}\label{e13} |\tilde{z}+[\tilde{x}_k, t_0]|\le \frac{1}{25} |\tilde{z}|\;\;\text{and}\;\; \frac{24}{25} |\tilde{z}| \le |[\tilde{x}_k, t_0]|\le \frac{26}{25} |\tilde{z}|. \end{equation} For $\lambda\in \mathbb R$, denote $w_\lambda=q*(\lambda t_0)\in L$. Let $w_{\lambda, j}\in L_j$ be a point on $L_j$ nearest to $w_\lambda$. Then $w_{\lambda, j}=q*(\tilde{x}_1+\tilde{x}_2+\cdots +\tilde{x}_k+ \tilde{z})*t_\lambda$ for some $t_\lambda\in U_1$. 
By a calculation similar to that of $d_A(p, p_j)$, we obtain $$d_A(w_\lambda, w_{\lambda,j})=|\tilde{x}_1+t_\lambda-\lambda t_0|+a+ \left|\tilde{z}+\frac{1}{2}[\tilde{x}_k, \lambda t_0+t_\lambda]\right|^{\frac{1}{1+\alpha_k}}.$$ Let $w=\tilde{x}_1+t_\lambda-\lambda t_0$. Then $\lambda t_0+t_\lambda=2 \lambda t_0+w-\tilde{x}_1$. Now suppose $|\lambda-1|\ge 1$. If $|w|\ge \sqrt{|\lambda-1|}\cdot |\tilde{z}|^{\frac{1}{1+\alpha_k}}$, then by (\ref{e10}) $$d_A(w_\lambda, w_{\lambda,j})\ge |w|\ge \frac{\sqrt{|\lambda-1|}}{2} d_A(q, q_j).$$ Assume now that $|w|\le \sqrt{|\lambda-1|}\cdot |\tilde{z}|^{\frac{1}{1+\alpha_k}}$. Then by (\ref{e11}), $|[\tilde{x}_k, w]|\le (\frac{2}{101})^{\alpha_k} \sqrt{|\lambda-1|} \cdot |\tilde z|$. It now follows from (\ref{e13}), (\ref{e12}), (\ref{e10}) and the triangle inequality that \begin{align*} d_A(w_\lambda, w_{\lambda,j}) &\ge \left|\tilde{z}+\frac{1}{2}[\tilde{x}_k, \lambda t_0+t_\lambda]\right|^{\frac{1}{1+\alpha_k}}\\ & = \left|\tilde{z}+\lambda [\tilde{x}_k, t_0]+\frac{1}{2}[\tilde{x}_k, w]-\frac{1}{2}[\tilde{x}_k, \tilde{x}_1] \right|^{\frac{1}{1+\alpha_k}}\\ & = \left|(\lambda-1) [\tilde{x}_k, t_0]+ (\tilde{z}+ [\tilde{x}_k, t_0])+\frac{1}{2}[\tilde{x}_k, w]-\frac{1}{2}[\tilde{x}_k, \tilde{x}_1] \right|^{\frac{1}{1+\alpha_k}}\\ &\ge \left(|\lambda-1|\cdot \frac{24}{25} |\tilde z| -\frac{1}{25}|\tilde z|-\frac{1}{2}\left(\frac{2}{101}\right)^{\alpha_k}\sqrt{|\lambda-1|}\cdot |\tilde z| -\frac{1}{10} \left(\frac{2}{101}\right)^{\alpha_k} \cdot |\tilde z| \right)^{\frac{1}{1+\alpha_k}}\\ & \ge \left(\frac{|\lambda-1|}{2}\right)^{\frac{1}{1+\alpha_k}}\cdot |\tilde{z}|^{\frac{1}{1+\alpha_k}}\\ & \ge \frac{1}{2} \left(\frac{|\lambda-1|}{2}\right)^{\frac{1}{1+\alpha_k}}\cdot d_A(q, q_j). \end{align*} In any case, if $|\lambda-1|\ge 1$, then \begin{equation}\label{e14} d_A(w_\lambda, w_{\lambda,j})\ge \frac{1}{2} \left(\frac{|\lambda-1|}{2}\right)^{\frac{1}{1+\alpha_k}}\cdot d_A(q, q_j). \end{equation} Let $b>0$ be a constant such that for every sequence $r_j\rightarrow 0$, the inequality $d_A(w_\lambda, w_{\lambda, j})\ge b \cdot d_A(q, q_j)$ holds for all sufficiently large $j$. Let $\tilde w_{\lambda, j}= r^{1+\beta_l}_j e + F(w_\lambda)\in r^{1+\beta_l}_j e+ L'$. By the quasisymmetric condition and (\ref{e9}) \begin{align*} \eta_1(1) \cdot \frac{l_{F^{-1}}( F(w_\lambda), r_j)}{r_j} & \ge \frac{d_A(w_\lambda,\, F^{-1}(\tilde w_{\lambda, j}))}{r_j} \\ & \ge \frac{d_A(w_\lambda, \, w_{\lambda, j})}{r_j}\\ &\ge b \cdot \frac{d_A(q, q_j)}{r_j}\ge \frac{b}{\eta_1(1)} \cdot \frac{l_{F^{-1}}(q', r_j)}{r_j}. \end{align*} Hence $$\liminf_{j\rightarrow\infty} \frac{l_{F^{-1}}( F(w_\lambda), r_j) }{r_j} \ge \frac{b}{(\eta_1(1))^2} \cdot \liminf_{j\rightarrow\infty} \frac{l_{F^{-1}}(q', r_j)}{r_j}\ge \frac{b}{(\eta_1(1))^2} \cdot l_{F^{-1}}(q').$$ Since this holds for every sequence $r_j\rightarrow 0$, we have $l_{F^{-1}}(F(w_\lambda))\ge \frac{b}{(\eta_1(1))^2} \cdot l_{F^{-1}}(q')$. Therefore, \begin{equation}\label{e15} \text{L}_F(w_\lambda)\le \frac{(\eta_1(1))^2}{b}\cdot \text{L}_F(q). \end{equation} Combining (\ref{e14}) and (\ref{e15}) we see that the following holds for all $|\lambda-1|\ge 1$: $$ \text{L}_F(w_\lambda)\le 2{(\eta_1(1))^2}\left(\frac{2}{|\lambda-1|}\right)^{\frac{1}{1+\alpha_k}}\cdot \text{L}_F(q).$$ \end{proof} Recall the grading ${\mathcal H}_n=V_A\oplus \mathcal Z({\mathcal H}_n)$, where $V_A=U_1\oplus \cdots \oplus U_k$. Let $\pi_1: {\mathcal H}_n\rightarrow V_A$ be the projection with respect to the above grading. 
Let $L$ be a left coset of $U_1$. Notice that the restriction $\pi_1|_L$ is injective and $\pi_1(L)$ is an affine subspace of $V_A$. A subset $H\subset L$ is called a hyperplane of $L$ if $\pi_1(H)$ is a hyperplane of $\pi_1(L)$. Similarly, a subset $A\subset L$ is called a line in $L$ if $\pi_1(A)$ is a line in $\pi_1(L)$. \begin{Le}\label{halfspace} Let $L$ be a left coset of $U_1$. Suppose $p,q\in L$ are such that $l_F(p)> (C_2)^{2m} \text{L}_F(q)$, where $$C_2=\max\{102\eta_1(1)\eta(1), 3\eta(1)(\eta_1(1))^3\}$$ and $m=\dim(U_1)$. Then there is a hyperplane $H$ of $L$ passing through $q$ and one component $H_-$ of $L\backslash H$ such that $l_F(x)\le (C_2)^{2m} \text{L}_F(q)$ for all $x\in H_-$. \end{Le} \begin{proof} Let $S$ denote the space of directions of $L$ at $q$. We shall define two subsets $G$, $B$ of $S$. A point $s\in S$ lies in $G$ if $\text{L}_F(x)\le C_2^m \text{L}_F(q)$ for every $x\not=q$ in the direction of $s$. A point $s\in S$ lies in $B$ if $l_F(x)> C_2^{2m} \text{L}_F(q)$ for some $x\not=q$ in the direction of $s$. Clearly $G\cap B=\emptyset$. Let $s_1\in S$ be the direction of $p$, and $s_2\in S$ the point in $S$ opposite to $s_1$. Then $s_1\in B$ since $l_F(p)> C_2^{2m} \text{L}_F(q)$. Lemma \ref{key} implies $\text{L}_F(x)\le 3(\eta_1(1))^2 \text{L}_F(q)$ for any point $x\not=q$ such that $q\in xp$. Hence $s_2\in G$. Let $H(B)\subset S$ be the convex hull of $B$ in the sphere $S$. Then for any $y\in H(B)$, there are $m$ points $x_1, \cdots, x_m\in B$ such that $y$ lies in the spherical simplex $\Delta_1$ spanned by $x_1, \cdots, x_m$. Let $\Delta_i$ be the spherical simplex spanned by $x_i, \cdots, x_m$. Then there are $y_i\in \Delta_{i}$ with $y_1=y$ such that $y_i\in x_i y_{i+1}$. Since $x_i\in B$, there exists a point $p_i\not=q$ in the direction of $x_i$ such that $l_F(p_i)> (C_2)^{2m} \text{L}_F(q)$. Let $q_{m-1}$ be the unique point in the direction of $y_{m-1}$ such that $q_{m-1}\in p_{m-1}p_m$. Inductively, let $q_i$ be the unique point in the direction of $y_i$ such that $q_i\in p_iq_{i+1}$. We claim $\text{L}_F(x)> C_2^{2m-1} \text{L}_F(q)$ for every $x\in p_{m-1}p_m$; in particular, $\text{L}_F(q_{m-1})> C_2^{2m-1} \text{L}_F(q)$. Suppose not. Then $\text{L}_F(x)\le C_2^{2m-1} \text{L}_F(q)$ for some $x\in p_{m-1}p_m$. Since $l_F(p_{m-1})> C_2^{2m} \text{L}_F(q)$, we have $ l_F(p_{m-1})> C_2 \text{L}_F(x)$. Now Lemma \ref{key} implies $\text{L}_F(p_m)\le 3 (\eta_1(1))^2 \text{L}_F(x) \le C_2^{2m} \text{L}_F(q)$, contradicting $l_F(p_m)> C_2^{2m} \text{L}_F(q)$. By considering $q_i\in q_{i+1}p_i$ and using Lemma \ref{key} one inductively proves that $\text{L}_F(q_{i})> C_2^{m+i} \text{L}_F(q)$. In particular, $\text{L}_F(q_{1})> C_2^{m+1} \text{L}_F(q)$. Since $q_1$ is in the direction of $y_1=y$, we see that $H(B)\cap G=\emptyset$. Now $H(B)$ is a non-empty convex subset of the sphere $S$ and its complement is non-empty. It follows that there is an open hemisphere in its complement. Hence there is an open hemisphere in the complement of $B$. Now the Lemma follows from the definition of $B$. \end{proof} \begin{Le}\label{nonzero} Suppose $\dim(U_1)\ge 2$. Then for any bounded subset $X\subset L$, there exist two positive constants $M_1, M_2$ such that $\text{L}_F(x)\ge M_1$ and $l_F(x)\le M_2$ for all $x\in X$. \end{Le} \begin{proof} Let $X$ be a bounded subset of $L$. We first show that there is some $M_1>0$ such that $\text{L}_F(x)\ge M_1$ for all $x\in X$. Suppose there is a sequence of points $x_i\in X$ such that $\text{L}_F(x_i)\rightarrow 0$. 
Fix a point $p\in L$ such that $F|_L$ has non-singular differential at $p$. Such a point $p$ always exists: $(L, d_A)$ and $(F(L), d_B)$ are isometric to $\mathbb R^m$; since $\dim(U_1)\ge 2$ and $F|_L$ is quasisymmetric, $F|_L$ is a.e. differentiable and its differential is a.e. non-singular. The quasisymmetry condition implies $l_F(p)>0$. Then for all sufficiently large $i$ we have $l_F(p)>C^{2m}_2 \text{L}_F(x_i)$. By Lemma \ref{halfspace}, there is a hyperplane $H_i$ passing through $x_i$ and a component $H_{i,-}$ of $L\backslash H_i$ such that $l_F(x)\le C_2^{2m} \text{L}_F(x_i)$ for all $x\in H_{i,-}$. Since the sequence $x_i$ is bounded, a subsequence $H_{i_j,-}$ of the half spaces $H_{i,-}$ converges to an open half space $H_-$. Since every $x\in H_-$ lies in $H_{i_j,-}$ for all sufficiently large $j$ and $\text{L}_F(x_i)\rightarrow 0$, it follows that $l_F(x)=0$ for all $x\in H_-$. Since $F|_L: L\rightarrow F(L)$ is a.e. differentiable, we see that $F|_L$ has zero differential a.e. on the open set $H_-$ of $L$, which is impossible. As a quasisymmetric map, $F|_L: L\rightarrow F(L)$ maps bounded sets to bounded sets. So $F(X)$ is bounded. Now the first claim applied to $F^{-1}$ yields that there is a positive lower bound for $\text{L}_{F^{-1}}$ on $F(X)$. Now (\ref{e5}) implies that there is a positive upper bound for $l_F$ on $X$. \end{proof} It is clear from the definition of $d_A$ and $d_B$ that lines in the left cosets of $U_1$ and $W_1$ are rectifiable (recall we first normalized so that $\alpha_1=\beta_1=1$). \begin{Le}\label{quasisimi} For each left coset $L$ of $U_1$, there is some constant $C>0$ such that $F|_L $ is a $(C^{2m+2}_2, C)$-quasisimilarity, where $m=\dim(U_1)$ and $C_2$ is the constant in Lemma \ref{halfspace}. \end{Le} \begin{proof} First consider the case when $m=1$. Lemma \ref{rec} and the comment before Lemma \ref{quasisimi} imply that the left cosets of $U_1$ are the only rectifiable curves in $(H_n, d_A)$. Similarly the left cosets of $W_1$ are the only rectifiable curves in $(H_n, d_B)$. By the main result of \cite{BKR}, $F$ is absolutely continuous on a.e. left coset of $U_1$. Let $L$ be such a left coset. Since $F|_L: (L, d_A)\rightarrow (F(L), d_B) $ is a homeomorphism between lines (with the Euclidean metric), it is differentiable a.e. As $F|_L$ is absolutely continuous, it suffices to bound the differential in order to show that $F|_L $ is a quasisimilarity. We shall show that $l_F(p)\le C_1\cdot \text{L}_F(q)$ for any $p,q\in L$, where $C_1=102\eta_1(1)$. Suppose there are two points $p, q\in L$ such that $l_F(p)> C_1\cdot \text{L}_F(q)$. By Lemma \ref{key}, $\text{L}_F(x)\rightarrow 0$ as $d_A(p, x)\rightarrow \infty$ ($x\in L$). This implies $\text{L}_{F^{-1}}(y)\ge l_{F^{-1}}(y) \rightarrow \infty$ as $d_B(y, F(p))\rightarrow \infty$ ($y\in F(L)$). However, $l_F(p)> C_1 \text{L}_F(q)$ implies $l_{F^{-1}}(F(q))> C_1 \cdot \text{L}_{F^{-1}}(F(p))$. By Lemma \ref{key} again we obtain $\text{L}_{F^{-1}}(y)\rightarrow 0$, which is a contradiction. From now on we assume $m\ge 2$. Denote $L'=F(L)$. In this case, both $F|_L$ and $F^{-1}|_{L'}$ have the following properties: (1) absolutely continuous, (2) differentiable almost everywhere and the differential is almost everywhere nonsingular, (3) absolutely continuous on almost all curves. 
It follows that to show $F|_L$ is a $(C^{2m+2}_2, C)$ quasisimilarity, it suffices to show that there is a set of full measure $E\subset L$ such that $l_F(x)\le 3 (\eta_1(1))^2 C^{2m+1}_2 \text{L}_F(y)$ for all $x, y\in E$. We shall prove this by contradiction. So suppose the above statement is not true. Then in particular there are two points $p, q\in L$ such that $l_F(p)> 3 (\eta_1(1))^2 C^{2m+1}_2 \text{L}_F(q)$. We observe that it suffices to show that there is a constant $b_0>0$ such that $l_F(x)\le b_0$ for all $x$ in a full measure subset of $L$: the condition $l_F(p)> 3 (\eta_1(1))^2 C^{2m+1}_2 \text{L}_F(q)$ implies that $l_{F^{-1}}(F(q))> 3 (\eta_1(1))^2 C^{2m+1}_2 \text{L}_{F^{-1}}(F(p))$. Then Lemma \ref{key} implies $\text{L}_{F^{-1}}(y)\rightarrow 0$ as $y\in L'$ goes to infinity along the line through $F(p)$ and $F(q)$. Fix a point $y_0$ such that $$\text{L}_{F^{-1}}(y_0)<\min\left\{\frac{1}{b_0 \eta_1(1) C_2^{2m}}, \; \frac{l_{F^{-1}}(F(q))}{C_2^{2m}}\right\}.$$ By Lemma \ref{halfspace}, there is a hyperplane $H'$ passing through $y_0$ and a component $H'_-$ of $L'\backslash H'$ such that $ l_{F^{-1}}(y)< \frac{1}{b_0\eta_1(1)}$ for all $y\in H'_-$. Since $F^{-1}$ is differentiable a.e. on $L'$ and is $\eta_1$-quasisymmetric, we have $ \text{L}_{F^{-1}}(y)< \frac{1}{b_0}$ for a.e. $y\in H'_-$. It follows that $l_F(x)> b_0$ for a.e. $x\in F^{-1}(H'_-)$, contradicting the assumption that $l_F(x)\le b_0$ for all $x$ in a full measure subset of $L$. We next show that $l_F(x)$ is essentially bounded on $L$. Since $l_F(p)> 3 (\eta_1(1))^2 C^{2m+1}_2 \text{L}_F(q)$, by Lemma \ref{halfspace}, there is a hyperplane $H_1$ passing through $q$ and one component $H_{1,-}$ of $L\backslash H_1$ such that $$l_F(x)\le (C_2)^{2m} \text{L}_F(q)<\frac{1}{3(\eta_1(1))^2 C_2} l_F(p)$$ for all $x\in H_{1,-}.$ The quasisymmetry condition then implies $$L_F(x)\le \eta(1) l_F(x)\le \frac{\eta(1)}{3(\eta_1(1))^2 C_2} l_F(p)$$ for a.e. $x\in H_{1,-}.$ Let $\tau: L\rightarrow L$ be the geodesic symmetry about $p$, that is, for any $x\in L$, $\tau(x)$ is such that $p$ is the midpoint of $x\tau(x)$. Now Lemma \ref{key} implies that for a.e. $y\in \tau(H_{1,-})$ we have $$\text{L}_F(y)\le 3 (\eta_1(1))^2 \text{L}_F(\tau(y))\le 3 \eta(1)(\eta_1(1))^2 C_2^{2m} \text{L}_F(q). $$ If $p\in H_1$, then we are done since now $\text{L}_F(x)$ (and hence $l_F(x)$) is bounded on a full measure subset of $L\backslash H_1=H_{1,-}\cup \tau(H_{1,-})$. So we assume $p\notin H_1$. Let $B_1$ be the part of a cylinder in $L$ between $H_1$ and $\tau(H_1)$ with center line passing through $p$ and perpendicular to the hyperplane $H_1$. Then $B_1$ is bounded. By Lemma \ref{nonzero} there are positive numbers $M_1, M_2$ such that $\text{L}_F(x)\ge M_1$ and $l_F(x)\le M_2$ for all $x\in B_1$. Since we assume $l_F(x)$ is not essentially bounded, there is some hyperplane $\tilde{H}_1$ parallel to $H_1$ such that (1) $F|_L$ is differentiable at some $q_1\in B_1\cap \tilde{H}_1$; (2) there is some $p_1\in \tilde{H}_1$ with $l_F(p_1)>C^{2m}_2\cdot \eta(1) M_2$. Since $F|_L$ is differentiable at $q_1$, we have $\text{L}_F(q_1)\le \eta(1) l_F(q_1)\le \eta(1) M_2$. So $l_F(p_1)>C^{2m}_2\cdot \text{L}_F(q_1)$. Now Lemma \ref{halfspace} and Lemma \ref{key} imply that there is a hyperplane $H_2$ passing through $q_1$ and a component $H_{2, -}$ of $L\backslash H_2$ such that $\text{L}_F(x)$ is essentially bounded from above on $H_{2, -}$ and $\tau_1(H_{2, -})$, where $\tau_1$ is the geodesic symmetry about $p_1$.
If $p_1\in H_2$, then we are done as indicated above. So we assume $p_1\notin H_2$. In this case, $H_1$ and $H_2$ are not parallel. We proceed inductively and eventually find $m$ hyperplanes $H_1$, $H_2$, $\cdots$, $H_m$, $m$ half spaces $H_{i, -}$ and points $p_0=p, p_1, \cdots, p_{m-1}$ with the following properties: (1) $\text{L}_F(x)$ and hence $l_F(x)$ is essentially bounded from above on the union $Q:=\cup_i H_{i, -}\cup \cup_i \tau_{i-1}(H_{i, -})$, where $\tau_{i-1}$ ($\tau_0=\tau$) is the geodesic symmetry about the point $p_{i-1}$; (2) The complement of $Q$ in $L$ is compact. By Lemma \ref{nonzero}, $l_F(x)$ is uniformly bounded on $L\backslash Q$. It follows that $l_F(x)$ is essentially bounded on $L$, and we are done. \end{proof} \section{Proof of the main Theorems}\label{proofs} In this Section we finish the proofs of the theorems in the Introduction. We use the notation from Section \ref{leaf}, see the paragraphs before Lemma \ref{key}. Notice that $\bigoplus_{i=1}^{k-1}U_i\oplus \mathcal Z({\mathcal H}_n)$ is a Lie subalgebra (actually an ideal) of ${\mathcal H}_n$. Let $H$ denote the corresponding connected Lie subgroup of $H_n$. \begin{Le}\label{para} Suppose $k\ge 2$. Then two left cosets of $U_1$ lie in the same left coset of $H$ if and only if the Hausdorff distance between them is finite. \end{Le} \begin{proof} Let $L_1$, $L_2$ be two left cosets of $U_1$. After applying a left translation, we may assume $L_1=U_1$ and $L_2=g*U_1$ for some $g=x_1+\cdots +x_k+x_{k+1}\in \mathcal H_n$ with $x_i\in U_i$. Note that $L_1$ and $L_2$ lie in the same left coset of $H$ if and only if $x_k=0$. First assume $L_1$ and $L_2$ lie in the same left coset of $H$. Then $x_k=0$. For $t\in U_1$, Lemma \ref{structure} (2) implies $$(-t)*g* t=(-t)*(x_1+\cdots + x_{k-1} +x_{k+1})*t=x_1+\cdots + x_{k-1}+x_{k+1}. $$ We see that $d_A(t, g*t)=||(-t)*g*t||_A$ is independent of $t\in U_1$. Hence the Hausdorff distance between $L_1$ and $L_2$ is finite. Next we assume $L_1$ and $L_2$ lie in distinct left cosets of $H$. Then $x_k\not=0$. There exists $v\in U_1$ such that $[x_k, v]\not=0$. Let $t_1=a v$ with $a\in \mathbb R$. Let $t_2\in U_1$. We consider $d_A(t_1, \; g*t_2)$. Calculate $$(-av)*g*t_2=\left((x_1+t_2-av)+x_2+\cdots +x_k+(x_{k+1}+\frac{1}{2}[x_k, av+t_2])\right). $$ Suppose the Hausdorff distance between $L_1$ and $L_2$ is finite. Then there is some constant $C>0$ such that, for any $a\in \mathbb R$, there is some $t_2\in U_1$ satisfying \begin{align*} C\ge d_A(av, g*t_2) & =||(-av)*g*t_2||_A \\ &=|x_1+t_2-av|+\sum_{i=2}^k |x_i|^{\frac{1}{\alpha_i}}+\left|x_{k+1}+\frac{1}{2}[x_k, av+t_2]\right|^{\frac{1}{1+\alpha_k}}. \end{align*} Let $u=t_2-av$. Then $u$ is uniformly bounded and $t_2=av+u$. It follows that $[x_k, u]$ is uniformly bounded. As $[x_k, v]$ is fixed and nonzero, we see that $[x_k, av+t_2]=[x_k, u]+2a[x_k, v]$ is unbounded as $a\rightarrow \infty$. The contradiction shows that the Hausdorff distance between $L_1$ and $L_2$ is infinite. \end{proof} \begin{Le}\label{housk} For any $x_k, x'_k\in U_k$ and any $h\in H$, we have $$d_A(x_k*H, x'_k*H)=d_A(x_k*h, x'_k*H)=|x'_k-x_k|^{\frac{1}{\alpha_k}}.$$ \end{Le} \begin{proof} Write $h=x_1+\cdots +x_{k-1}+x_{k+1}$. 
For any $h'=x'_1+\cdots+ x'_{k-1}+x'_{k+1}\in H$, we have \begin{align*} d_A(x_k*h, x'_k*h') &=||(-h)*(-x_k)*x'_k*h'||_A\\ &=\left|\left| (x'_1-x_1)+\cdots +(x'_{k-1}-x_{k-1})+(x'_k-x_k)+ (x'_{k+1}-x_{k+1}+E) \right| \right|_A, \end{align*} where $$E=\frac{1}{2}[x'_k-x_k, x_1+x'_1]+\frac{1}{2} \left[-\sum_{i=1}^{k-1} x_i, \sum_{i=1}^{k-1} x'_i\right].$$ Now it is clear that $d_A(x_k*h, x'_k*h')\ge |x'_k-x_k|^{\frac{1}{\alpha_k}}$ for any $h'\in H$. Furthermore, the equality holds for $h'=x_1+\cdots +x_{k-1}+(x_{k+1}-[x'_k-x_k, x_1])$. The Lemma follows. \end{proof} The set of left cosets of $H$ in $H_n$ can be identified with $U_k$ via $x_k\rightarrow x_k*H$. Lemma \ref{housk} implies that this set equipped with the minimal distance is isometric to $(U_k, |\cdot|^{\frac{1}{\alpha_k}})$. Set $K=\bigoplus_{i=2}^{k-1}U_i\oplus \mathcal Z({\mathcal H}_n)$. A similar (and easier) calculation as in the proof of Lemma \ref{housk} yields the following: \begin{Le}\label{hous1} We have $d_A(g*U_1, \, g'*U_1)=d_A(g*x, \, g'*U_1)=d_A(g, g')$ for any $g, g'\in K$ and any $x\in U_1$. \end{Le} Lemma \ref{hous1} implies that the set of left cosets of $U_1$ in $H$ can be identified with $K$, and this set equipped with the minimal distance is isometric to $(K, d_A)$. The proof of the following Lemma is almost the same as that of Lemma 3.9 in \cite{X3}, so we omit it here. The main point is that different left cosets diverge sublinearly. \begin{Le}\label{bilip1} Suppose $\alpha_1=\beta_1=1$. Then there is a constant $C$ such that for any left coset $L$ of $U_1$, $F|_L$ is a $(K_0, C)$ quasisimilarity, where $K_0$ depends only on $\eta$. \end{Le} \noindent {\bf{Proof of Theorem \ref{main3}}.} Let $A, B$ be diagonalizable derivations with positive eigenvalues. Suppose $k\ge 2$. Let $F: (H_n, d_A) \rightarrow (H_n, d_B) $ be a quasisymmetric map. We shall use the notation from Section \ref{leaf}. Then $F: (H_n, d_1) \rightarrow (H_n, d_2) $ is $\eta$-quasisymmetric for some $\eta$, where $d_1=d_A^{{\alpha_1}}$ and $d_2=d_B^{{\beta_1}}$. Lemma \ref{bilip1} implies that there is a constant $C$ such that for any left coset $L$ of $U_1$, $F|_L: (L, d_1) \rightarrow (F(L), d_2) $ is a $(K_0, C)$ quasisimilarity, where $K_0$ depends only on $\eta$. Let $p, q\in H_n$ be arbitrary. If they lie in the same left coset $L$ of $U_1$, then $d_2(F(p), F(q))\le CK_0 d_1(p,q)$. Now suppose $p\in L_1$, $q\in L_2$. Pick $x\in L_1$ such that $d_1(p, x)=d_1(p, q)$. Then $$d_2(F(p), F(q))\le \eta(1)\cdot d_2(F(p), F(x))\le \eta(1) CK_0 d_1(p, x)=\eta(1)CK_0 d_1(p,q).$$ So we have an upper bound for $d_2(F(p), F(q))$. The same argument applied to $F^{-1}$ yields a lower bound for $d_2(F(p), F(q))$. Hence $F$ is biLipschitz. The theorem then follows. \qed \noindent {\bf{Proof of Theorem \ref{main4}}.} First we suppose $A$ and $B$ have the same invariants, that is, $l=k$, $\text{dim}(U_i)=\text{dim}(W_i)$ and there is some $\lambda>0$ such that $\beta_i=\lambda \alpha_i$. We need to show that $(H_n, d_A)$ and $(H_n, d_B)$ are quasisymmetrically equivalent. After replacing $d_A$ with $d_A^{\alpha_1}$ and $d_B$ with $d_B^{\beta_1}$, we may assume $\alpha_1=\beta_1=1$. Then $\beta_i=\alpha_i$. Fix some $e\in \mathcal Z({\mathcal H}_n)\backslash\{0\}$.
By Lemma \ref{structure}, for $1\le i<(k+1)/2$, there is a basis $e_1, \cdots, e_{m_i}$ for $U_i$ and a basis $\eta_1, \cdots, \eta_{m_i}$ for $U_{k+1-i}$ such that $[e_s, \eta_t]=\delta_{st} e$; if $i=(k+1)/2$, then $m_i=2k_i$ is even and there is a basis $e_1, \eta_1, \cdots, e_{k_i}, \eta_{k_i}$ of $U_i$ such that $[e_s, \eta_t]=\delta_{st} e$, $[e_s, e_t]=[\eta_s, \eta_t]=0$. Similarly, for $1\le i< (k+1)/2$, there is a basis $e'_1, \cdots, e'_{m_i}$ for $W_i$ and a basis $\eta'_1, \cdots, \eta'_{m_i}$ for $W_{k+1-i}$ such that $[e'_s, \eta'_t]=\delta_{st} e$; and if $i=(k+1)/2$, then $m_i=2k_i$ is even and there is a basis $e'_1, \eta'_1, \cdots, e'_{k_i}, \eta'_{k_i}$ for $W_i$ such that $[e'_s, \eta'_t]=\delta_{st} e$, $[e'_s, e'_t]=[\eta'_s, \eta'_t]=0$. Now define a map $G: V_A \rightarrow V_B$ as follows. For each $i<(k+1)/2$, $G|_{U_i}$ is given by $G(\sum_s x_s e_s)=\sum_s x_s e'_s$; if $i>(k+1)/2$, define $G(\sum_s x_s \eta_s)=\sum_s x_s \eta'_s$; and if $i=(k+1)/2$, define $$G(\sum_s (x_s e_s+y_s \eta_s))=\sum_s (x_s e'_s+y_s \eta'_s).$$ Define $F:(H_n, d_A)\rightarrow (H_n, d_B)$ by $F(x+x_{k+1})=G(x)+x_{k+1}$, where $x\in V_A$ and $x_{k+1}\in \mathcal Z({\mathcal H}_n)$. It is now easy to check that $F$ is an isometry. Conversely, let $F: (H_n, d_A)\rightarrow (H_n, d_B)$ be a quasisymmetry. By Proposition \ref{prefo}, $k\ge 2$ if and only if $l\ge 2$; furthermore, in this case, $\dim(U_1)=\dim(W_1)$ and $F$ maps left cosets of $U_1$ to left cosets of $W_1$. The conclusion of the theorem clearly holds if $k=l=1$. So from now on we shall assume $k\ge 2$ and $l\ge 2$. After replacing $d_A$ with $d_A^{\alpha_1}$ and $d_B$ with $d_B^{\beta_1}$, we may assume $\alpha_1=\beta_1=1$. By Theorem \ref{main3}, $F$ is biLipschitz. Lemma \ref{para} says two left cosets $L_1$ and $L_2$ of $U_1$ lie in the same left coset of $H:=\bigoplus_{i=1}^{k-1} U_i \oplus \mathcal Z({\mathcal H}_n)$ if and only if the Hausdorff distance between them is finite. The same is true for left cosets of $W_1$ and $\tilde H: =\bigoplus_{i=1}^{l-1} W_i \oplus \mathcal Z({\mathcal H}_n)$. It follows that $F$ maps left cosets of $H$ to left cosets of $\tilde H$. Now Lemma \ref{housk} and the remark after that Lemma imply that $F$ induces a biLipschitz map from $(U_k, |\cdot|^{\frac{1}{\alpha_k}}) $ to $(W_l, |\cdot|^{\frac{1}{\beta_l}}) $. From this we conclude that $\dim(U_k)=\dim(W_l)$ and $\alpha_k=\beta_l$. Now we consider the restriction of $F$ to a left coset of $H$, which is biLipschitz. This can be viewed as a biLipschitz map from $(H, d_A) $ to $(\tilde H, d_B)$. We also know that $F$ maps left cosets of $U_1$ to left cosets of $W_1$. Now Lemma \ref{hous1} and the remark after that Lemma imply that $F$ induces a biLipschitz map from $(K, d_A)$ to $(\tilde K, d_B)$, where $K=\bigoplus_{i=2}^{k-1} U_i \oplus \mathcal Z({\mathcal H}_n)$ and $\tilde K=\bigoplus_{i=2}^{l-1} W_i \oplus \mathcal Z({\mathcal H}_n)$. Now an induction argument finishes the proof of Theorem \ref{main4}. \qed Theorem \ref{main2} follows from Theorem \ref{main4} since $G_A$ and $G_B$ are quasiisometric if and only if $(H_n, d_A)$ and $(H_n, d_B)$ are quasisymmetrically equivalent. Theorem \ref{main1} follows from Theorem \ref{main3} since any quasiisometry $f: G_A\rightarrow G_B$ induces a boundary map $\partial f: \partial G_A\rightarrow \partial G_B$, which is a quasisymmetric map, and $f$ is an almost similarity if and only if $\partial f$ is biLipschitz (after possibly snowflaking the metric $d_B$).
For more details on these implications, the reader is referred to \cite{SX}. \addcontentsline{toc}{subsection}{References} \noindent Address: \noindent Xiangdong Xie: Dept. of Mathematics and Statistics, Bowling Green State University, Bowling Green, OH, U.S.A.\hskip .4cm E-mail: [email protected] \end{document}
\begin{document} \title{Analytical Study of Certain Magnetohydrodynamic-$\alpha$ Models} \begin{center} $^1$\textit{Department of Computer Science and Applied Mathematics \\ Weizmann Institute of Science \\ Rehovot 76100, Israel}\\ [email protected] \\ $^2$\textit{Department of Mathematics and\\ Department of Mechanical and Aerospace Engineering \\ University of California \\ Irvine, CA 92697-3875, USA} \\ [email protected] \textit{and} [email protected] \end{center} \begin{abstract} In this paper we present an analytical study of a subgrid scale turbulence model of the three-dimensional magnetohydrodynamic (MHD) equations, inspired by the Navier-Stokes-$\alpha $ (also known as the viscous Camassa-Holm equations or the Lagrangian-averaged Navier-Stokes-$\alpha $ model). Specifically, we show the global well-posedness and regularity of solutions of a certain MHD-$\alpha$ model (which is a particular case of the Lagrangian averaged magnetohydrodynamic-$\alpha$ model without enhancing the dissipation for the magnetic field). We also introduce other subgrid scale turbulence models, inspired by the Leray-$\alpha $ and the modified Leray-$\alpha $ models of turbulence. Finally, we discuss the relation of the MHD-$\alpha $ model to the MHD equations by proving a convergence theorem, that is, as the length scale $\alpha $ tends to zero, a subsequence of solutions of the MHD-$\alpha $ equations converges to a certain solution (a Leray-Hopf solution) of the three-dimensional MHD equations. \end{abstract} \textbf{Keywords:} subgrid scale models; turbulence models; magnetohydrodynamics; regularizing MHD; magnetohydrodynamic-$ \alpha $ model; Lagrangian-averaged magnetohydrodynamic-$ \alpha $ model; Leray-$\alpha$ model. \textbf{Mathematics Subject Classification:} 76D03, 76F20, 76F55, 76F65, 76W05. \section{Introduction} We consider the three-dimensional magnetohydrodynamic (MHD) equations for a homogeneous incompressible resistive viscous fluid subjected to a Lorentz force due to the presence of a magnetic field. The MHD system involves coupling Maxwell's equations governing the magnetic field and the Navier-Stokes equations (NSE) governing the fluid motion. The system has the form \begin{align}\label{grp:MHD} \begin{split} & \frac{\partial \boldsymbol{v}}{\partial t}+\left( \boldsymbol{v}\cdot \nabla \right) \boldsymbol{v}-\nu \Delta \boldsymbol{v}+\nabla \pi + \frac{1}{2}\nabla \lnorm{ \boldsymbol{B}} ^{2}=\left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{B}, \\ & \frac{\partial \boldsymbol{B}}{\partial t}+\left( \boldsymbol{v}\cdot \nabla \right) \boldsymbol{B}-\left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{v}-\eta \Delta \boldsymbol{B} =0, \\ & \nabla \cdot \boldsymbol{v}=\nabla \cdot \boldsymbol{B}=0, \end{split} \end{align} where $\boldsymbol{v}\left( x,t\right) $, the fluid velocity field, $ \boldsymbol{B}\left( x,t\right) $, the magnetic field and $\pi$, the pressure, are the unknowns; $\nu >0$ is the constant kinematic viscosity and $\eta >0$ is the constant magnetic diffusivity. Current scientific methods and tools are unable to compute the turbulent behavior of three-dimensional (3D) fluids and magnetofluids analytically or via direct numerical simulation due to the large range of scales of motion that need to be resolved when the Reynolds number is high.
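We recall, for later reference, the basic formal energy balance of \eqref{grp:MHD}: under periodic boundary conditions (or for smooth solutions decaying sufficiently fast at infinity), taking the $L^{2}$ inner product of the first equation with $\boldsymbol{v}$ and of the second equation with $\boldsymbol{B}$, the transport and pressure terms integrate to zero and the contributions of the terms $\left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{B}$ and $\left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{v}$ cancel upon addition, so that, at least formally, \begin{equation*} \frac{1}{2}\frac{d}{dt}\int_{\Omega }\left( \lnorm{ \boldsymbol{v}} ^{2}+\lnorm{ \boldsymbol{B}} ^{2}\right) dx +\nu \int_{\Omega }\lnorm{ \nabla \boldsymbol{v}} ^{2}dx +\eta \int_{\Omega }\lnorm{ \nabla \boldsymbol{B}} ^{2}dx =0. \end{equation*} The quadratic quantities appearing here are precisely the ones whose $\alpha$ analogues we will require the subgrid scale models introduced below to retain.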
For many purposes, it might be adequate to compute only certain statistical features of the physical phenomenon of turbulence, and much effort is being made to produce reliable turbulence models that parameterize the average effects of the fluctuations on the averages, without calculating the former explicitly. Motivated by the remarkable performance of the Navier-Stokes-$\alpha$ (NS-$ \alpha $) (also known as the viscous Camassa-Holm equations (VCHE) or the Lagrangian-averaged Navier-Stokes-$\alpha $ (LANS-$\alpha $)) as a closure model of turbulence in infinite channels and pipes, whose solutions give excellent agreement with empirical data for a wide range of large Reynolds numbers \cite{a_CFHOTW99, a_CFHOTW99_ChanPipe,a_CFHOTW98}, the alpha subgrid scale models of turbulence have been extensively studied in recent years (see, e.g., \cite{a_HT05,a_CHOT05,a_ILT05,a_VTC05,a_CTV05,a_FHT02, a_FHT01, a_CFHOTW99, a_CFHOTW99_ChanPipe,a_CFHOTW98, a_MKSM03,a_LL06,a_LL03,a_L06}). A justification of the inviscid NS-$\alpha $ model can be found, for example, in \cite{a_CFHOTW99_ChanPipe,a_HMR98,a_H02_pA,a_MS03,a_C01}. An extension of the NS-$ \alpha $ model to the nondissipative MHD is given, e.g., in \cite{a_H02_ch}. The model was obtained from variational principles by modifying the Hamiltonian associated with the ideal MHD equations subject to the incompressibility constraint. Then the dissipation is introduced in an \textit{ad hoc} fashion in analogy to the NS-$\alpha $, following \cite{a_CFHOTW98,a_CFHOTW99,a_CFHOTW99_ChanPipe,a_FHT02}. Specifically, the flow Lagrangian of the ideal MHD is given by \begin{equation*} \mathcal L[\boldsymbol u,D,\boldsymbol B]= \int \left(\frac{1}{2} D |\boldsymbol u|^2 - \pi (D-1) - \frac{1}{2} |\boldsymbol B|^2\right)dx, \end{equation*} where the pressure $\pi$ appears as a Lagrange multiplier enforcing volume preservation ($D=1$). Here the volume element $D(x,t)=\left(\det \left({\partial {X}}/{\partial a}\right)(a,t)\right)^{-1}$ at \mbox{$x=X(a,t)$}, where \mbox{${X}(a,t)$} is the Lagrangian fluid trajectory, ${\partial {X}}/{\partial t}(a,t)=\boldsymbol{u}(x,t)$ (see \cite{a_H99}). First, the Lagrangian is averaged and approximated using a form of Taylor's hypothesis (see, e.g., \cite{a_H02_fD}) to obtain \begin{equation*} \bar{\mathcal L }= \int \left(\frac{1}{2} D \left(|\boldsymbol u|^2+\alpha^2|\nabla \boldsymbol u|^2\right) - \pi (D-1) - \frac{1}{2} \left(|\boldsymbol B|^2+\alpha_M^2|\nabla \boldsymbol B|^2\right)\right)dx, \end{equation*} then Hamilton's principle is applied (see, e.g., \cite{a_HMR98}) to produce an ideal MHD-$\alpha $ model (eq. \eqref{grp:LAMHD} with $\nu=\eta=0$).
Adding viscosity and diffusivity provides the MHD-$\alpha $ (or the Lagrangian-averaged magnetohydrodynamic-$ \alpha $ (LAMHD-$ \alpha $)) model \begin{align}\label{grp:LAMHD} \begin{split} & \frac{\partial \boldsymbol{v}}{\partial t}+\left( \boldsymbol{u}\cdot \nabla \right) \boldsymbol{v}+\sum_{j=1}^{3}\boldsymbol{v}_{j}\nabla \boldsymbol{u}_{j}-\nu \Delta \boldsymbol{v}+\nabla p+ \sum_{j=1}^{3}(\boldsymbol{B_s})_{j}\nabla \boldsymbol{B}_{j}=\left( \boldsymbol{B_s}\cdot \nabla \right) \boldsymbol{B}, \\ & \frac{\partial \boldsymbol{B_s}}{\partial t}+\left( \boldsymbol{u}\cdot \nabla \right) \boldsymbol{B_s}-\left( \boldsymbol{B_s}\cdot \nabla \right) \boldsymbol{u}-\eta \Delta \boldsymbol{B} =0 , \\ & \boldsymbol{v}=\left( 1-\alpha ^{2}\Delta \right) \boldsymbol{u}, \qquad \boldsymbol{B}=\left( 1-\alpha_M ^{2}\Delta \right) \boldsymbol{B_s}, \\ & \nabla \cdot \boldsymbol{u}=\nabla \cdot \boldsymbol{v}=\nabla \cdot \boldsymbol{B_s}=\nabla \cdot \boldsymbol{B}=0, \end{split} \end{align} where $\boldsymbol{u}$ and $\boldsymbol{B_s}$ represent the unknown `filtered' fluid velocity and magnetic fields, respectively, $p$ is the unknown `filtered' pressure, and $\alpha >0, \, \alpha_M>0$ are lengthscale parameters that represent the width of the filters. At the limit $\alpha =0,\,\alpha_M=0$, we formally obtain the three-dimensional MHD equations. The LAMHD-$ \alpha $ model was investigated numerically in periodic boundary conditions in two \cite{a_MMP05_2D,a_GHMP06} and three \cite{a_MMP05_3D} space dimensions against direct numerical simulations. In \cite{a_GHMP06} the K\'arm\'an-Howarth theorem was extended to LAMHD-$ \alpha $ equations. The LAMHD-$ \alpha $ model was also studied in \cite{a_JR05} in the context of convection-driven plane layer geodynamo models. We tend to think about the $\alpha$ models as a numerical regularization of the underlying equation, which makes the nonlinearity milder, and hence the solutions of the modified equation are smoother. This is contrary to the hyperviscosity regularization \cite{a_L59} and nonlinear viscosity \cite{a_L70,b_L85,a_S63}, which lead to unnecessary extra dissipation of the energy of the system. To emphasize this numerical analysis point of view, we observe that recently a Leray-$\alpha$ model of the inviscid Burgers equation \begin{equation}\label{eq:Burgers} \frac{\partial \boldsymbol{v}}{\partial t} + \boldsymbol{v} \frac{\partial \boldsymbol{v}}{\partial x} =0, \end{equation} which is \begin{align}\label{eq:BurgersAlpha} \begin{cases} &\frac{\partial \boldsymbol{v}^\alpha}{\partial t} + \boldsymbol{u}^\alpha \frac{\partial \boldsymbol{v}^\alpha}{\partial x} =0, \\ & \boldsymbol{v}^\alpha=\boldsymbol{u}^\alpha-\alpha ^{2}\boldsymbol{u}^\alpha_{xx}, \end{cases} \end{align} has been introduced in \cite{A_BF06} and \cite{a_TTZ06}. Regular unique solutions of \eqref{eq:BurgersAlpha} exist globally and it was shown computationally in \cite{A_BF06} that the solutions of \eqref{eq:BurgersAlpha} converge to the unique entropy weak solution (see, e.g., \cite{a_O63,b_S94,a_TTZ06}) of \eqref{eq:Burgers}. Notice that there is no dissipation in \eqref{eq:BurgersAlpha}, and the $L^\infty$ norm of $\boldsymbol{v}^\alpha$ is preserved. 
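For readers who wish to experiment with \eqref{eq:BurgersAlpha}, the following minimal Python sketch integrates the model on the periodic domain $[0, 2\pi)$ by a Fourier pseudospectral discretization with classical fourth-order Runge-Kutta time stepping. It is only an illustration: the resolution, the time step, the value of $\alpha$ and the initial datum $\sin x$ are arbitrary choices and are not taken from \cite{A_BF06}, and no dealiasing is performed.
\begin{verbatim}
import numpy as np

# Leray-alpha regularization of the inviscid Burgers equation:
#   v_t + u v_x = 0,   v = u - alpha^2 u_xx,   x in [0, 2*pi), periodic.
N, alpha, dt, T = 256, 0.05, 1.0e-3, 1.0
x = 2.0 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)        # integer wavenumbers
helm = 1.0 + (alpha * k) ** 2           # symbol of (1 - alpha^2 d_xx)

def rhs(v_hat):
    u_hat = v_hat / helm                # u = (1 - alpha^2 d_xx)^{-1} v
    u = np.fft.ifft(u_hat).real
    v_x = np.fft.ifft(1j * k * v_hat).real
    return -np.fft.fft(u * v_x)         # -(u v_x), back in Fourier space

v_hat = np.fft.fft(np.sin(x))           # initial datum v_0(x) = sin(x)
for _ in range(int(T / dt)):            # classical RK4 time stepping
    k1 = rhs(v_hat)
    k2 = rhs(v_hat + 0.5 * dt * k1)
    k3 = rhs(v_hat + 0.5 * dt * k2)
    k4 = rhs(v_hat + dt * k3)
    v_hat = v_hat + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

v = np.fft.ifft(v_hat).real             # regularized solution at time T
print(np.max(np.abs(v)))                # sup norm, cf. the remark above
\end{verbatim}
One can then vary $\alpha$ and compare the output with the entropy weak solution of \eqref{eq:Burgers}, as is done far more carefully in \cite{A_BF06}.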
On the other hand, the viscous regularizing approach, which is usually taken for the Burgers equation, is achieved by introducing an artificial viscosity term in \eqref{eq:Burgers} and obtaining the viscous Burgers equation \begin{equation}\label{eq:BurgersViscous} \frac{\partial \boldsymbol{v}^\varepsilon}{\partial t}-\varepsilon^2 \frac{\partial^2 \boldsymbol{v}^\varepsilon}{\partial x^2}+ \boldsymbol{v}^\varepsilon \frac{\partial \boldsymbol{v}^\varepsilon}{\partial x} =0. \end{equation} This model gives a smooth solution $\boldsymbol{v}^{\varepsilon}$, which converges in the appropriate norms to the unique entropy weak solution (see, e.g., \cite{a_O63}). However, the energy of $\boldsymbol{v}^{\varepsilon}$ decays in time at a much higher rate than the decay expected for the entropy weak solution. Hence the advantage of introducing the Leray-$\alpha$ model \eqref{eq:BurgersAlpha} for the Burgers equation. This simple example clarifies why, from the numerical analysis point of view, we insist on making the nonlinearity milder instead of adding additional viscous or hyperviscous terms. This approach has been discussed further in \cite{a_CLT06} in the context of Euler and Navier-Stokes equations. Filtering the magnetic field, as is done in \cite{a_MMP05_3D,a_MMP05_2D,a_JR05}, is equivalent to introducing hyperdiffusivity for the filtered magnetic field $\boldsymbol{B_s}$, due to the term $-\eta\alpha_M^2 \Delta^2 \boldsymbol{B_s}$ in \eqref{grp:LAMHD}, which we think is unnecessary. Taking the numerical analysis point of view discussed above, we prove the well-posedness of a certain MHD-$\alpha$ model without introducing extra dissipation for the magnetic field, i.e.~we filter only the velocity field, but not the magnetic field, and obtain the following regularizing system of \eqref{grp:MHD} \begin{align}\label{grp:alphaMHD_intro} \begin{split} & \frac{\partial \boldsymbol{v}}{\partial t}+\left( \boldsymbol{u}\cdot \nabla \right) \boldsymbol{v}+\sum_{j=1}^{3}\boldsymbol{v}_{j}\nabla \boldsymbol{u}_{j}-\nu \Delta \boldsymbol{v}+\nabla p+\frac{1}{2}\nabla \lnorm{ \boldsymbol{B}} ^{2}=\left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{B}, \\ & \frac{\partial \boldsymbol{B}}{\partial t}+\left( \boldsymbol{u}\cdot \nabla \right) \boldsymbol{B}-\left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{u}-\eta \Delta \boldsymbol{B} =0 ,\\ & \boldsymbol{v}=\left( 1-\alpha ^{2}\Delta \right) \boldsymbol{u}, \qquad \alpha>0,\\ & \nabla \cdot \boldsymbol{u}=\nabla \cdot \boldsymbol{v}=\nabla \cdot \boldsymbol{B}=0, \end{split} \end{align} instead of the system \eqref{grp:LAMHD}. As $\alpha$ models are some sort of regularizing numerical schemes, we would like to make sure that they inherit some of the original properties of the 3D MHD equations. Formally, three ideal, i.e.~$\nu=\eta =0$, quadratic invariants of the system \eqref{grp:alphaMHD_intro} could be identified with the invariants of the original ideal 3D MHD equations under suitable boundary conditions, for instance, in rectangular periodic boundary conditions or in the whole space $\Real^3$.
Namely, these are the energy \mbox{$E^{\alpha }=\frac{1}{2}\int_{\Omega }\left( \boldsymbol{v}\left( x\right) \cdot \boldsymbol{u}(x)+\lnorm{ \boldsymbol{B}(x)} ^{2}\right) dx$}, the cross helicity \mbox{$ H_{C}^{\alpha }=\frac{1}{2}\int_{\Omega }\boldsymbol{v}(x)\cdot \boldsymbol{B}(x)dx$}, and the magnetic helicity \mbox{$ H_{M}^{\alpha }=\frac{1}{2}\int_{\Omega }\boldsymbol{A}(x)\cdot \boldsymbol{B}(x)dx$}, where $\boldsymbol{A}$ is the vector potential, so that \mbox{$\boldsymbol{B}= \nabla \times \boldsymbol{A}$}; they reduce, as $\alpha \rightarrow 0$, to the ideal invariants of the MHD equations. There are other possible alpha subgrid scale models for which global existence and uniqueness can be shown. For instance, inspired by the Leray-$\alpha $ \cite{a_CHOT05,a_CTV05,a_VTC05,a_HN03,a_GH03} and modified Leray-$\alpha $ \mbox{(ML-$ \alpha $)} \cite{a_ILT05} models of turbulence, we formulate similar MHD alpha models, which we refer to as the Leray-$\alpha $-MHD and ML-$ \alpha $-MHD models, respectively. The Leray-$\alpha $ and ML-$ \alpha $ models of turbulence reduce to the same closure model for the Reynolds averaged Navier-Stokes equations in turbulent channels and pipes as the NS-$\alpha $ model under the corresponding symmetries \cite{a_CHOT05,a_CTV05,a_ILT05}, which, as we mentioned above, compares successfully with experimental data for a wide range of Reynolds numbers. This comparison means that the Leray-$\alpha $ and the ML-$\alpha $ models, as well as the NS-$\alpha $ equations, could equally be used as subgrid scale models of turbulence. Specifically, we consider the following version of the three-dimensional Leray-$\alpha $-MHD model
\begin{align}\label{grp:Leray_alpha_MHD_intro}
\begin{split}
& \frac{\partial \boldsymbol{v}}{\partial t}+\left( \boldsymbol{u}\cdot \nabla \right) \boldsymbol{v}-\nu \Delta \boldsymbol{v}+\nabla p+\frac{1}{ 2}\nabla \lnorm{ \boldsymbol{B}} ^{2}=\left( \boldsymbol{B} \cdot \nabla \right) \boldsymbol{B}, \\
& \frac{\partial \boldsymbol{B}}{\partial t}+\left( \boldsymbol{u}\cdot \nabla \right) \boldsymbol{B}-\left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{v}-\eta \Delta \boldsymbol{B} =0, \\
& \boldsymbol{v}=\left( 1-\alpha ^{2}\Delta \right) \boldsymbol{u}, \\
& \nabla \cdot \boldsymbol{u}=\nabla \cdot \boldsymbol{v}=\nabla \cdot \boldsymbol{B}=0.
\end{split}
\end{align}
Formally, the term $\left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{v}$ comes from requiring, in the ideal ($\nu=\eta=0$) case, the conservation of the energy \mbox{$ E^{\alpha }=\frac{1}{2}\int_{\Omega }\left( \lnorm{ \boldsymbol{v}(x)} ^{2}+\lnorm{ \boldsymbol{B}(x)} ^{2}\right) dx$} (under suitable boundary conditions), while the requirement that the system have an ideal invariant corresponding to the cross helicity \mbox{$ H_{C}^{\alpha }=\frac{1}{2}\int_{\Omega }\boldsymbol{v}(x)\cdot \boldsymbol{B}(x)dx $} leads to the term $\left( \boldsymbol{u}\cdot \nabla \right) \boldsymbol{B}$. In contrast to the MHD-$\alpha $ model \eqref{grp:alphaMHD_intro}, where we establish the existence and uniqueness, for the 3D Leray-$\alpha $-MHD model \eqref{grp:Leray_alpha_MHD_intro} we are able to establish only the existence of weak solutions, as is the case for the original MHD equations \eqref{grp:MHD}. In this case, the term $\left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{v}$ is problematic, just as in the usual 3D NSE and MHD equations.
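At the formal level these claims follow from integration by parts. For smooth solutions of the ideal ($\nu =\eta =0$) version of \eqref{grp:Leray_alpha_MHD_intro}, under periodic boundary conditions (or in $\Real^3$ with sufficient decay), terms of the form $\left( \left( \boldsymbol{u}\cdot \nabla \right) \boldsymbol{w},\boldsymbol{w}\right) $ and $\left( \left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{w},\boldsymbol{w}\right) $ vanish, as do the pairings of gradients with divergence-free fields. Hence, taking the $L^{2}$ inner product of the momentum equation with $\boldsymbol{v}$ and of the induction equation with $\boldsymbol{B}$, and adding, gives
\begin{equation*}
\frac{d}{dt}E^{\alpha }=\left( \left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{B},\boldsymbol{v}\right) +\left( \left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{v},\boldsymbol{B}\right) =\int_{\Omega }\left( \boldsymbol{B}\cdot \nabla \right) \left( \boldsymbol{v}\cdot \boldsymbol{B}\right) dx=0,
\end{equation*}
while pairing the momentum equation with $\boldsymbol{B}$ and the induction equation with $\boldsymbol{v}$, and adding, gives
\begin{equation*}
\frac{d}{dt}H_{C}^{\alpha }=-\frac{1}{2}\left[ \left( \left( \boldsymbol{u}\cdot \nabla \right) \boldsymbol{v},\boldsymbol{B}\right) +\left( \left( \boldsymbol{u}\cdot \nabla \right) \boldsymbol{B},\boldsymbol{v}\right) \right] =-\frac{1}{2}\int_{\Omega }\left( \boldsymbol{u}\cdot \nabla \right) \left( \boldsymbol{v}\cdot \boldsymbol{B}\right) dx=0.
\end{equation*}
It is exactly the pair of terms $\left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{v}$ and $\left( \boldsymbol{u}\cdot \nabla \right) \boldsymbol{B}$ that produces these cancellations. These formal identities, of course, do not remove the three-dimensional difficulty mentioned above.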
However, in the two-dimensional case the existence and uniqueness of weak solutions can be shown (similarly to the proof given for the model \eqref{grp:alphaMHD_intro} in section 3) for the following 2D-Leray-$\alpha $-MHD model
\begin{align}\label{grp:2D_Leray_alpha_MHD}
\begin{split}
& \frac{\partial \boldsymbol{v}}{\partial t}+\left( \boldsymbol{u}\cdot \nabla \right) \boldsymbol{v}-\nu \Delta \boldsymbol{v}+\nabla p+\frac{1}{ 2}\nabla \lnorm{ \boldsymbol{B}} ^{2}=\left( \boldsymbol{B} \cdot \nabla \right) \boldsymbol{B}, \\
& \frac{\partial \boldsymbol{B}}{\partial t}+\left( \boldsymbol{u}\cdot \nabla \right) \boldsymbol{B}-\left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{u}-\eta \Delta \boldsymbol{B} =0, \\
& \boldsymbol{v}=\left( 1-\alpha ^{2}\Delta \right) \boldsymbol{u}, \\
& \nabla \cdot \boldsymbol{u}=\nabla \cdot \boldsymbol{v}=\nabla \cdot \boldsymbol{B}=0.
\end{split}
\end{align}
For this system, due to the identity \mbox{$\int_{\Omega }\left( \boldsymbol{u}\cdot \nabla \boldsymbol{u}\right) \cdot \Delta \boldsymbol{u}=0 $} (for the periodic 2D case and divergence-free $\boldsymbol{u}$), the ideal invariant corresponding to the energy is \mbox{$ E^{\alpha }=\frac{1}{2}\int_{\Omega }\left( \boldsymbol{v}\left( x\right) \cdot \boldsymbol{u}(x)+\lnorm{ \boldsymbol{B}(x)} ^{2}\right) dx$}. At the moment we are unable to find a conserved quantity in the ideal version of \eqref{grp:2D_Leray_alpha_MHD} that can be identified with a cross helicity. The mean-square magnetic potential, given by $\mathcal{A}=\frac{1}{2}\int_{\Omega } \lnorm{ \psi(x)} ^{2} dx$, where $\boldsymbol{B}=\nabla^{\perp}\psi$, is conserved in the ideal case. We note that there appears to be no conserved quantity that could be identified with the energy for the 3D version of \eqref{grp:2D_Leray_alpha_MHD}. The three-dimensional Modified-Leray-$ \alpha $-MHD model, for which well-posedness can be proved similarly to the model \eqref{grp:alphaMHD_intro}, is given by
\begin{align}
\begin{split}\label{grp:ML_alpha_MHD_intro}
& \frac{\partial \boldsymbol{v}}{\partial t}+\left( \boldsymbol{v}\cdot \nabla \right) \boldsymbol{u}-\nu \Delta \boldsymbol{v}+\nabla p+\frac{1}{ 2}\nabla \lnorm{ \boldsymbol{B}} ^{2}=\left( \boldsymbol{B} \cdot \nabla \right) \boldsymbol{B}, \\
& \frac{\partial \boldsymbol{B}}{\partial t}+\left( \boldsymbol{u}\cdot \nabla \right) \boldsymbol{B}-\left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{u}-\eta \Delta \boldsymbol{B} =0, \\
& \boldsymbol{v}=\left( 1-\alpha ^{2}\Delta \right) \boldsymbol{u}, \\
& \nabla \cdot \boldsymbol{u}=\nabla \cdot \boldsymbol{v}=\nabla \cdot \boldsymbol{B}=0,
\end{split}
\end{align}
where the term $\left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{u}$ comes from requiring the conservation of the energy (in the ideal case, with periodic boundary conditions or in $\Real^3$) \mbox{$ E^{\alpha }=\frac{1}{2}\int_{\Omega }\left( \boldsymbol{v}\left( x\right) \cdot \boldsymbol{u}(x)+\lnorm{ \boldsymbol{B}(x)} ^{2}\right) dx$}. Also, the system conserves the magnetic helicity \mbox{$ H_{M}^{\alpha }=\frac{1}{2}\int_{\Omega }\boldsymbol{A}(x)\cdot \boldsymbol{B}(x)dx$}. At the moment we are unable to find a conserved quantity in the ideal version of \eqref{grp:ML_alpha_MHD_intro} which can be identified with a cross helicity.
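A computation of the same kind identifies the role of the term $\left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{u}$ in \eqref{grp:ML_alpha_MHD_intro}. Since the operator $\left( 1-\alpha ^{2}\Delta \right) $ is symmetric under periodic boundary conditions, one has $\frac{1}{2}\frac{d}{dt}\int_{\Omega }\boldsymbol{v}\cdot \boldsymbol{u}\,dx=\int_{\Omega }\frac{\partial \boldsymbol{v}}{\partial t}\cdot \boldsymbol{u}\,dx$, and therefore, formally, for smooth solutions of the ideal ($\nu =\eta =0$) version of \eqref{grp:ML_alpha_MHD_intro},
\begin{equation*}
\frac{d}{dt}E^{\alpha }=-\left( \left( \boldsymbol{v}\cdot \nabla \right) \boldsymbol{u},\boldsymbol{u}\right) +\left( \left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{B},\boldsymbol{u}\right) +\left( \left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{u},\boldsymbol{B}\right) =0,
\end{equation*}
because $\left( \left( \boldsymbol{v}\cdot \nabla \right) \boldsymbol{u},\boldsymbol{u}\right) =\frac{1}{2}\int_{\Omega }\boldsymbol{v}\cdot \nabla \lnorm{ \boldsymbol{u}} ^{2}dx=0$ (recall $\nabla \cdot \boldsymbol{v}=0$), and the last two terms add up to $\int_{\Omega }\left( \boldsymbol{B}\cdot \nabla \right) \left( \boldsymbol{u}\cdot \boldsymbol{B}\right) dx=0$. Replacing $\left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{u}$ in the induction equation by a different stretching term would, in general, destroy this cancellation. A similar computation, using in addition the two-dimensional identity $\int_{\Omega }\left( \boldsymbol{u}\cdot \nabla \boldsymbol{u}\right) \cdot \Delta \boldsymbol{u}\,dx=0$ quoted above, yields the conservation of the corresponding energy $E^{\alpha }$ for the ideal version of the two-dimensional model \eqref{grp:2D_Leray_alpha_MHD}.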
The main goal of this paper is to establish the global existence, uniqueness and regularity of solutions of the three-dimensional MHD-$\alpha $ equations \eqref{grp:alphaMHD_intro} subject to periodic boundary conditions (similar results also hold in $\Real ^3$). We emphasize again that we consider a version of the MHD alpha models where only the velocity field is filtered, while the magnetic field remains unfiltered. We note that in the case of filtering the magnetic field, as in \eqref{grp:LAMHD}, one has hyperdiffusivity for the filtered magnetic field $\boldsymbol{B_s}$, and the proof of the existence and uniqueness of regular solutions of \eqref{grp:LAMHD} can be deduced in a similar way. We start by introducing some preliminary background and the functional setting in section 2. In section 3 we show the global well-posedness of the MHD-$\alpha $ subgrid scale model of turbulence \eqref{grp:alphaMHD_intro}. We remark that using the Gevrey regularity techniques developed in \cite{a_FT89} (see also \cite{a_FT98}) one can show that the solution of the MHD-$\alpha$ model becomes instantaneously analytic in space and time. As a result of this Gevrey regularity, one deduces the existence of a dissipation range in the energy spectrum in which the energy decays exponentially fast as a function of the wavenumber, for wavenumbers $k$ larger than the inverse of the dissipation length scale (see \cite{a_DT95}). One can also establish, in the forced case, the existence of a finite-dimensional global attractor, a subject of future work. In section 4 we relate the solutions of the MHD-$\alpha $ equations to those of the 3D MHD equations as the length scale $\alpha $ tends to zero. Specifically, we prove that one can extract subsequences of weak solutions of the MHD-$ \alpha $ equations which converge as $\alpha $ $\rightarrow $ $0^{+}$ (in the appropriate sense) to a Leray-Hopf weak solution of the three-dimensional MHD equations \eqref{grp:MHD} on any time interval $[0,T]$, which satisfies the energy inequality
\begin{equation*}
\lnorm{ \boldsymbol{v}\left( t\right) } ^{2}+\lnorm{ \boldsymbol{B}\left( t\right) } ^{2} +2 \int_{t_0}^{t}\left( \nu\vnorm{ \boldsymbol{v}(s)} ^{2}+\eta\vnorm{ \boldsymbol{B}(s)} ^{2}\right)ds \leq \lnorm{ \boldsymbol{v}\left( t_0\right) } ^{2}+\lnorm{ \boldsymbol{B}\left( t_0\right) } ^{2}
\end{equation*}
for almost every $t_0\in[0,T]$ and all $t\in\left[t_0,T\right]$. Also, if the initial data is smooth, a subsequence of solutions converges, on a short interval of time whose length depends on the initial data, $\nu$, $\eta$ and the domain, to the unique strong solution of the MHD equations on this interval. Thus the $\alpha$ models can be viewed as regularizing numerical schemes. Section 5 contains a discussion summarizing our results.
\section{Functional Setting and Preliminaries}
Let $\Omega $ be the $L$-periodic three-dimensional box $\Omega =[0,L]^{3}$.
We consider the following MHD-$\alpha $ subgrid scale turbulence model, which we introduced in \eqref{grp:alphaMHD_intro}, subject to periodic boundary conditions in the basic domain $\Omega $,
\begin{subequations}
\label{grp:alphaMHD}
\begin{align}
& \frac{\partial \boldsymbol{v}}{\partial t}+\left( \boldsymbol{u}\cdot \nabla \right) \boldsymbol{v}+\sum_{j=1}^{3}\boldsymbol{v}_{j}\nabla \boldsymbol{u}_{j}-\nu \Delta \boldsymbol{v}+\nabla p+\frac{1}{2}\nabla \lnorm{ \boldsymbol{B}} ^{2}=\left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{B}, \label{eq:alphaMHD:velocity} \\
& \frac{\partial \boldsymbol{B}}{\partial t}+\left( \boldsymbol{u}\cdot \nabla \right) \boldsymbol{B}-\left( \boldsymbol{B}\cdot \nabla \right) \boldsymbol{u}-\eta \Delta \boldsymbol{B} =0, \label{eq:alphaMHD:magneticField} \\
& \boldsymbol{v}=\left( 1-\alpha ^{2}\Delta \right) \boldsymbol{u}, \\
& \nabla \cdot \boldsymbol{u}=\nabla \cdot \boldsymbol{v}=\nabla \cdot \boldsymbol{B}=0, \label{eq:alphaMHD:divFreeCond} \\
& \boldsymbol{u}(x,0)=\boldsymbol{u}^{in}(x), \\
& \boldsymbol{B}(x,0)=\boldsymbol{B}^{in}(x),
\end{align}
\end{subequations}
where $\boldsymbol{u}$ represents the unknown `filtered' fluid velocity vector, $p$ is the unknown `filtered' pressure, and $\alpha >0$ is a lengthscale parameter which represents the width of the filter. In the limit $\alpha =0$ we formally obtain the three-dimensional MHD equations \eqref{grp:MHD}, where $\boldsymbol{u}$ is the Eulerian velocity field and $p-\frac{1}{2}|\boldsymbol{u}|^{2}$ is the pressure. Notice that we choose to smooth only the velocity field and not the magnetic field; thus we do not introduce hyperdiffusivity for the magnetic field, as is the case for the filtered magnetic field in \eqref{grp:LAMHD}. We consider initial values with zero spatial means, i.e., we assume that
\begin{equation}
\int_{\Omega }\boldsymbol{u}^{in}dx=\int_{\Omega }\boldsymbol{B}^{in}dx=0. \label{eq:zeroSpatialMean}
\end{equation}
Then, from \eqref{eq:alphaMHD:velocity} and \eqref{eq:alphaMHD:magneticField}, after integration by parts, using the spatial periodicity of the solution and the divergence-free condition \eqref{eq:alphaMHD:divFreeCond}, we have \mbox{$\left({d}/{dt}\right)\int_{\Omega }\boldsymbol{v}dx=0$}, \mbox{$\left({d}/{dt}\right) \int_{\Omega }\boldsymbol{B}dx=0$} and \mbox{$\left({d}/{dt}\right)\int_{\Omega }\boldsymbol{u}dx=0$}. Namely, the spatial means of the solutions are constant in time. Hence, by \eqref{eq:zeroSpatialMean}, $\int_{\Omega }\boldsymbol{v} dx=\int_{\Omega }\boldsymbol{u}dx=\int_{\Omega }\boldsymbol{B}dx=0$. Next, we introduce some notation and background following the mathematical theory of the NSE; see, for instance, \cite{b_CF88,b_T95,b_T84,b_L85}. Let $ L^{p}(\Omega )$ and $H^{m}(\Omega )$ denote the $L^{p}$ Lebesgue and $H^{m}$ Sobolev spaces, respectively. We denote by $\lnorm{ \cdot } $ the $L^{2}$-norm, and by $\left( \cdot ,\cdot \right) $ the $L^{2}$-inner product. Given a linear subspace $X$ of integrable functions defined on the domain $\Omega $, we define $\dot{X}:=\{\varphi \in X:\int_{\Omega }\varphi (x)dx=0\}$ and {$\mathcal{V}=\{\varphi :\varphi \text{ is a vector valued trigonometric polynomial defined on }\Omega ,\text{ such that }\nabla \cdot \varphi =0\text{ and }\int_{\Omega }\varphi (x)dx=0\}$}. The spaces $H$ and $V$ are the closures of $\mathcal{V}$ in $L^{2}(\Omega )$ and in $H^{1}(\Omega )$, respectively; observe that $H^{\perp }$, the orthogonal complement of $H$ in $L^{2}(\Omega )$, is \{${\nabla p:p\in H^{1}(\Omega )}$\}.
Let $P_{\sigma }:\dot{L}^{2}\left( \Omega \right) \rightarrow H$ be the Helmholtz-Leray projection, and \mbox{$ A=-P_{\sigma }\Delta $} be the Stokes operator with domain $ D(A)=(H^{2}(\Omega )\cap V)$. In the periodic boundary conditions $A=-\Delta |_{D(A)}$ is a self-adjoint positive operator with compact inverse. Hence the space $H$ has an orthonormal basis $ \{w_{j}\}_{j=1}^{\infty }$ of eigenfunctions of $A$, $Aw_{j}=\lambda _{j}w_{j}$, with $0<\lambda _{1}\leq \lambda _{2}\leq \ldots $, $\lambda _{j}\sim j^{2/d}L^{-2}$, see, e.g., \cite{b_CF88,a_M78}. One can show that $V=D\left( A^{1/2}\right) $. We denote \mbox{$\left( \left( \cdot ,\cdot \right) \right) =\left( A^{1/2}\cdot ,A^{1/2}\cdot \right) $} and $\vnorm{ \cdot } =\lnorm{ A^{1/2}\cdot } $ the inner product and the norm on $V$, respectively. Following the notation of the Navier-Stokes equations and those of \cite {a_FHT02}, we denote \begin{align*} B\left( \boldsymbol{u},\boldsymbol{v}\right) & =P_{\sigma }\left[ \left( \boldsymbol{u}\cdot \nabla \right) \boldsymbol{v}\right] ,\emph{\quad } \boldsymbol{u},\boldsymbol{v}\in \V, \\ \tilde{B}\left( \boldsymbol{u},\boldsymbol{v}\right) & =P_{\sigma }\left[ \left( \nabla \times \boldsymbol{v}\right) \times \boldsymbol{u}\right] , \emph{\quad }\boldsymbol{u},\boldsymbol{v}\in \V. \end{align*} Notice that \begin{equation*} \left( B\left( \boldsymbol{u},\boldsymbol{v}\right) ,\boldsymbol{w}\right) =-\left( B\left( \boldsymbol{u},\boldsymbol{w}\right) ,\boldsymbol{v}\right) ,\emph{\quad }\boldsymbol{u},\boldsymbol{v},\boldsymbol{w}\in \V, \end{equation*} and due to the identity \begin{equation}\label{eq:gen_3D_vector_id} \left( b\cdot \nabla \right) a+\sum_{j=1}^{3}a_{j}\nabla b_{j}=-b\times \left( \nabla \times a\right) +\nabla \left( a\cdot b\right) , \end{equation} \begin{equation*} \left( \tilde{B}\left( \boldsymbol{u},\boldsymbol{v}\right) ,\boldsymbol{w} \right) =\left( B\left( \boldsymbol{u},\boldsymbol{v}\right) ,\boldsymbol{w} \right) -\left( B\left( \boldsymbol{w},\boldsymbol{v}\right) ,\boldsymbol{u} \right) . \end{equation*} The definitions of $B\left( \boldsymbol{u},\boldsymbol{v}\right) $ and $ \tilde{B}\left( \boldsymbol{u},\boldsymbol{v}\right) $ and the above algebraic identities may be extended to larger spaces by the density of $\V $ in the appropriate space each time the corresponding trilinear forms are continuous. The extensions of the bilinear forms $B$ and $\tilde{B}$ (which we also denote $B$ and $\tilde{B}$) have the following properties \begin{lemma} \label{lemma:B_estimates} \mbox{} \begin{enumerate} \item \label{enu:lemma:BandBtilde_estimate_V_V_V}Let $X$ be either $B$ or $ \tilde{B}$. The operator $X$ can be extended continuously from $V\times V$ with values in $V^{\prime }$ (the dual space of $V$). In particular, for every $\boldsymbol{u},\boldsymbol{v},\boldsymbol{w}\in V$, \begin{equation} \abs{ \left\langle X\left( \boldsymbol{u},\boldsymbol{\boldsymbol{v}} \right) ,\boldsymbol{w}\right\rangle _{V^{\prime }}} \leq c\lnorm{ \boldsymbol{u}} ^{1/2}\vnorm{ \boldsymbol{u} } ^{1/2}\vnorm{ \boldsymbol{\boldsymbol{v}}} \vnorm{ \boldsymbol{w}} . 
\label{eq:BandBtilde_estimate_V_V_V} \end{equation} Moreover, \begin{equation} \left( B\left( \boldsymbol{u},\boldsymbol{v}\right) ,\boldsymbol{w}\right) =-\left( B\left( \boldsymbol{u},\boldsymbol{w}\right) ,\boldsymbol{v}\right) ,\emph{\quad }\boldsymbol{u},\boldsymbol{v},\boldsymbol{w}\in V, \label{eq:B_id1} \end{equation} which in turn implies that \begin{equation} \left( B\left( \boldsymbol{u},\boldsymbol{v}\right) ,\boldsymbol{v}\right) =0,\emph{\quad }\boldsymbol{u},\boldsymbol{v}\in V. \label{eq:B_id2} \end{equation} Also \begin{equation} \left( \tilde{B}\left( \boldsymbol{u},\boldsymbol{v}\right) ,\boldsymbol{w} \right) =\left( B\left( \boldsymbol{u},\boldsymbol{v}\right) ,\boldsymbol{w} \right) -\left( B\left( \boldsymbol{w},\boldsymbol{v}\right) ,\boldsymbol{u} \right) ,\emph{\quad }\boldsymbol{u},\boldsymbol{v},\boldsymbol{w}\in V, \label{eq:Btilda_id1} \end{equation} and hence \begin{equation} \left( \tilde{B}\left( \boldsymbol{u},\boldsymbol{v}\right) ,\boldsymbol{u} \right) =0,\emph{\quad }\boldsymbol{u},\boldsymbol{v}\in V. \label{eq:Btilda_id2} \end{equation} \item \label{enu:lemma:BandBtilde_estimate_DA_V_H} Furthermore, let $\boldsymbol{u} \in D(A),\boldsymbol{v}\in V,\boldsymbol{w}\in H$ and let $X$ be either $B$ or $\tilde{B}$ then \begin{equation} \abs{ \left( X\left( \boldsymbol{u},\boldsymbol{v}\right) , \boldsymbol{w}\right) } \leq c\vnorm{ \boldsymbol{u} } ^{1/2}\lnorm{ A\boldsymbol{u}} ^{1/2}\vnorm{ \boldsymbol{v}} \lnorm{ \boldsymbol{w}} . \label{eq:BandBtilde_estimate_DA_V_H} \end{equation} \item \label{enu:lemma:B_estimate_V_DA_H} Let $\boldsymbol{u} \in V,\boldsymbol{v}\in D(A),\boldsymbol{w}\in H$ then \begin{equation} \abs{ \left( B\left( \boldsymbol{u},\boldsymbol{v}\right) , \boldsymbol{w}\right) } \leq c\vnorm{ \boldsymbol{u} } \vnorm{ \boldsymbol{v}} ^{1/2}\lnorm{ A \boldsymbol{v}} ^{1/2}\lnorm{ \boldsymbol{w}} . \label{eq:B_estimate_V_DA_H} \end{equation} \item \label{enu:lemma:B_estimate_DA_H_V} Let $\boldsymbol{u}\in D(A), \boldsymbol{v}\in H$, $\boldsymbol{w}\in V$, then \begin{equation} \abs{ \langle B\left( \boldsymbol{u},\boldsymbol{v}\right) , \boldsymbol{w}\rangle _{V^{\prime }}} \leq c\vnorm{ \boldsymbol{u}} ^{1/2}\lnorm{ A\boldsymbol{u}} ^{1/2}\lnorm{ \boldsymbol{v}} \vnorm{ \boldsymbol{w} } . \label{eq:B_estimate_DA_H_V} \end{equation} \item \label{enu:lemma:Btilde_estimate_V_V_V} Let $\boldsymbol{u}, \boldsymbol{v},\boldsymbol{w}\in V$, then \begin{equation} \abs{ \langle \tilde{B}\left( \boldsymbol{u},\boldsymbol{v}\right) , \boldsymbol{w}\rangle _{V^{\prime }}} \leq c\vnorm{ \boldsymbol{u}} \vnorm{ \boldsymbol{v}} \lnorm{ \boldsymbol{w}} ^{1/2}\vnorm{ \boldsymbol{w}} ^{1/2}. \label{eq:Btilde_estimate_V_V_V} \end{equation} \item \label{enu:lemma:BandBtilde_estimate_H_V_D(A)} Let $\boldsymbol{u}\in H$, $\boldsymbol{\boldsymbol{v}}\in V,$ $\boldsymbol{w}\in D\left( A\right) $ and let $X$ be either $B$ or $\tilde{B}$ then \begin{equation} \abs{ \left\langle X\left( \boldsymbol{u},\boldsymbol{\boldsymbol{v}} \right) ,\boldsymbol{w}\right\rangle _{D\left( A\right) ^{\prime }}} \leq c\lnorm{ \boldsymbol{u}} \vnorm{ \boldsymbol{\boldsymbol{v}}} \vnorm{ \boldsymbol{w}} ^{1/2}\lnorm{ A\boldsymbol{w}} ^{1/2}. 
\label{eq:BandBtilde_estimate_H_V_D(A)} \end{equation} \item \label{enu:lemma:Btilde_estimate_V_H_D(A)}Let $\boldsymbol{u}\in V,$ $ \boldsymbol{\boldsymbol{v}}\in H,$ $\boldsymbol{w}\in D\left( A\right) $ then \begin{equation} \abs{ \left\langle \tilde{B}\left( \boldsymbol{u},\boldsymbol{ \boldsymbol{v}}\right) ,\boldsymbol{w}\right\rangle _{D\left( A\right) ^{\prime }}} \leq c\left( \lnorm{ \boldsymbol{u}} ^{1/2}\vnorm{ \boldsymbol{u}} ^{1/2}\lnorm{ \boldsymbol{ \boldsymbol{v}}} \lnorm{ A\boldsymbol{w}} +\lnorm{ \boldsymbol{\boldsymbol{v}}} \vnorm{ \boldsymbol{u}} \vnorm{ \boldsymbol{w}} ^{1/2}\lnorm{ A\boldsymbol{w} } ^{1/2}\right) , \label{eq:Btilde_estimate_V_H_D(A)} \end{equation} and hence by Poincar\'{e} inequality, \begin{equation} \abs{ \left\langle \tilde{B}\left( \boldsymbol{u},\boldsymbol{ \boldsymbol{v}}\right) ,\boldsymbol{w}\right\rangle _{D\left( A\right) ^{\prime }}} \leq c\left( \lambda _{1}\right) ^{-1/4}\vnorm{ \boldsymbol{u}} \lnorm{ \boldsymbol{\boldsymbol{v}}} \lnorm{ A\boldsymbol{w}} . \label{eq:Btilde_estimate_V_H_D(A)_short} \end{equation} \item \label{enu:lemma:Btilde_estimate_D(A)_H_V}Let $\boldsymbol{u}\in D\left( A\right) ,$ $\boldsymbol{\boldsymbol{v}}\in H,$ $\boldsymbol{w}\in V$ then \begin{equation} \abs{ \left\langle \tilde{B}\left( \boldsymbol{u},\boldsymbol{ \boldsymbol{v}}\right) ,\boldsymbol{w}\right\rangle _{V^{\prime }}} \leq c\left( \vnorm{ \boldsymbol{u}} ^{1/2}\lnorm{ A\boldsymbol{u}} ^{1/2}\lnorm{ \boldsymbol{ \boldsymbol{v}}} \vnorm{ \boldsymbol{w}} +\lnorm{ A \boldsymbol{u}} \lnorm{ \boldsymbol{\boldsymbol{v}}} \lnorm{ \boldsymbol{w}} ^{1/2}\vnorm{ \boldsymbol{w} } ^{1/2}\right) . \label{eq:Btilde_estimate_D(A)_H_V} \end{equation} \end{enumerate} In this lemma and throughout the paper $c$ denotes a generic scale invariant constant. \end{lemma} \begin{proof} The proof of \eqref{enu:lemma:BandBtilde_estimate_V_V_V} can be found, for example, in \cite{b_CF88, b_T84,b_T95} for $B$ and in \cite[Lemma 1\textit{(iii)}]{a_FHT02} for $\tilde{B}$. To prove \eqref{enu:lemma:BandBtilde_estimate_DA_V_H} we first consider the case where $\boldsymbol{u},\boldsymbol{v},\boldsymbol{w}\in \V$ \begin{align*} &\abs{ \left( B\left( \boldsymbol{u},\boldsymbol{v}\right) , \boldsymbol{w}\right) } =\abs{ \int_{\Omega }(\boldsymbol{u}\cdot \nabla )\boldsymbol{v}\cdot \boldsymbol{w} dx}, \\ &| ( \tilde{B}\left( \boldsymbol{u},\boldsymbol{v}\right) , \boldsymbol{w}) | =\abs{ \int_{\Omega }\boldsymbol{u}\times (\nabla \times \boldsymbol{v})\cdot \boldsymbol{w} dx}, \end{align*} hence \begin{align*} \abs{ \left( X\left( \boldsymbol{u},\boldsymbol{v}\right) , \boldsymbol{w}\right) } & \leq c\norm{ \boldsymbol{u}} _{L^{\infty }}\norm{ \nabla \boldsymbol{v}} _{L^{2}}\norm{ \boldsymbol{w}} _{L^{2}}. \end{align*} By Agmon's inequality in three-dimensional space, see, e.g., \cite{b_CF88}, \begin{equation*} \norm{ \phi } _{L^{\infty }}\leq \norm{ \phi } _{H^{1}}^{1/2}\norm{ \phi } _{H^{2}}^{1/2} \end{equation*} we obtain \begin{align*} \abs{ \left( X\left( \boldsymbol{u},\boldsymbol{v}\right) , \boldsymbol{w}\right) } & \leq c\vnorm{ \boldsymbol{u}} ^{1/2}\lnorm{ A\boldsymbol{u} } ^{1/2}\vnorm{ \boldsymbol{v}} \lnorm{ \boldsymbol{w}} . \end{align*} Since $\V$ is dense in $D(A)$, $V$ and $H$ we conclude the proof of \eqref{enu:lemma:BandBtilde_estimate_DA_V_H}. 
To prove \eqref{enu:lemma:B_estimate_V_DA_H} we recall the following Sobolev-Ladyzhenskaya inequalities (see, e.g., \cite{b_CF88,b_L85}) in 3D \begin{align*} \norm{ \phi } _{L^{6}}& \leq c\vnorm{ \phi } , \\ \norm{ \phi } _{L^{3}}& \leq c\lnorm{ \phi } ^{1/2}\vnorm{ \phi } ^{1/2}, \end{align*} for ${\phi }\in \V$. Then we have \begin{align*} \abs{ \left( B\left( \boldsymbol{u},\boldsymbol{v}\right) , \boldsymbol{w}\right) } & =\abs{ \int_{\Omega }( \boldsymbol{u}\cdot \nabla )\boldsymbol{v}\cdot \boldsymbol{w} dx} \\ & \leq c\norm{ \boldsymbol{u}} _{L^{6}}\norm{ \nabla \boldsymbol{v}} _{L^{3}}\norm{ \boldsymbol{w} } _{L^{2}} \\ & \leq c\vnorm{ \boldsymbol{u}} \lnorm{ \nabla \boldsymbol{v} } ^{1/2}\vnorm{ \nabla \boldsymbol{v}} ^{1/2}\lnorm{ \boldsymbol{w}} \\ & \leq c\vnorm{ \boldsymbol{u}} \vnorm{ \boldsymbol{v} } ^{1/2}\lnorm{ A\boldsymbol{v}} ^{1/2}\lnorm{ \boldsymbol{w}} . \end{align*} The proof of \eqref{enu:lemma:B_estimate_DA_H_V} is a direct result of the \eqref{enu:lemma:BandBtilde_estimate_DA_V_H} due to the symmetry \eqref{eq:B_id1}. The proof of \eqref{enu:lemma:Btilde_estimate_V_V_V}, \eqref{enu:lemma:BandBtilde_estimate_H_V_D(A)}, \eqref{enu:lemma:Btilde_estimate_V_H_D(A)}, \eqref{enu:lemma:Btilde_estimate_D(A)_H_V} can be found in \cite[Lemma 1 \textit{(iii,iv,v,vi)}]{a_FHT02}. \end{proof} Using the above notations and the identity \eqref{eq:gen_3D_vector_id} we apply $P_{\sigma }$ to \eqref{grp:alphaMHD} to obtain, as for the case of the NSE, the equivalent system of equations (see, e.g., \cite{b_T84} and \cite{a_DL72}) \begin{subequations} \label{grp:alphaMHD:Projected} \begin{align} & \frac{d\boldsymbol{v}}{dt}+\tilde{B}\left( \boldsymbol{u},\boldsymbol{v} \right) +\nu A\boldsymbol{v}={B}\left( \boldsymbol{B},\boldsymbol{B}\right) , \label{eq:alphaMHD:Projected:velocity} \\ & \frac{d\boldsymbol{B}}{dt}+B\left( \boldsymbol{u},\boldsymbol{B}\right) - B\left( \boldsymbol{B},\boldsymbol{u}\right) +\eta A\boldsymbol{ B}=0, \label{eq:alphaMHD:Projected:magField} \\ & \boldsymbol{u}(0)=\boldsymbol{u}^{in}, \\ & \boldsymbol{B}(0)=\boldsymbol{B}^{in}. \end{align} \end{subequations} \begin{definition} Let $T>0$. 
A weak solution of \eqref{grp:alphaMHD:Projected} in the interval $[0,T]$, given \mbox{$\boldsymbol{u}\left( 0\right) =\boldsymbol{u}^{in} \in V$} (or equivalently $\boldsymbol{v}^{in}\in V^{\prime }$) and $ \boldsymbol{B}\left( 0\right) =\boldsymbol{B}^{in} \in H, $ is a pair of functions $\boldsymbol{u},\ \boldsymbol{B}$, such that \begin{equation*} \boldsymbol{u}\in C\left( \left[ 0,T\right] ;V\right) \cap L^{2}\left( \left[ 0,T\right] ;D\left( A\right) \right) \, \text{with} \, \frac{d\boldsymbol{u}}{dt}\in L^{2}\left( \left[ 0,T\right] ;H\right) \end{equation*} (or equivalently \mbox{$\boldsymbol{v}\in C\left( \left[ 0,T\right] ;V^{\prime }\right) \cap L^{2}\left( \left[ 0,T\right] ;H\right) $} with \mbox{$\frac{d \boldsymbol{v}}{dt}\in L^{2}\left( \left[ 0,T\right] ;D\left( A\right) ^{\prime }\right) $}) and \begin{equation*} \boldsymbol{B}\in C\left( \left[ 0,T\right] ;H\right) \cap L^{2}\left( \left[ 0,T\right] ;V\right) \, \text{with} \, \frac{d\boldsymbol{B}}{dt}\in L^{2}\left( \left[ 0,T\right] ;V^{\prime }\right) , \end{equation*} satisfying \begin{subequations} \label{grp:alphaMHD:weakSol} \begin{align} & \left\langle \frac{d}{dt}\boldsymbol{\boldsymbol{v}},\boldsymbol{w} \right\rangle _{D\left( A\right) ^{\prime} }+\left\langle \tilde{B}\left( \boldsymbol{u},\boldsymbol{\boldsymbol{v}}\right) ,\boldsymbol{w} \right\rangle _{D\left( A\right) ^{\prime} }+\nu \left( \boldsymbol{v},A \boldsymbol{w}\right) =\left\langle B\left( \boldsymbol{B},\boldsymbol{B} \right) ,\boldsymbol{w}\right\rangle _{V ^{\prime} }, \label{eq:alphaMHD:weakSol_vel} \\ & \left\langle \frac{d}{dt}\boldsymbol{B},\boldsymbol{\xi }\right\rangle _{V^{\prime }}+\left( B\left( \boldsymbol{u},\boldsymbol{B}\right) , \boldsymbol{\xi }\right) -\left( B\left( \boldsymbol{B},\boldsymbol{u} \right) ,\boldsymbol{\xi }\right) +\eta \left( \left( \boldsymbol{B}, \boldsymbol{\xi }\right) \right) =0 \label{eq:alphaMHD:weakSol_magfld} \end{align} \end{subequations} for every $\boldsymbol{w}\in D\left( A\right) ,\ \boldsymbol{\xi }\in V$ and for almost every $t\in \left[ 0,T\right] $. The equation \eqref{grp:alphaMHD:weakSol} is understood in the following sense: for almost every \mbox{$t_{0},t\in \left[ 0,T\right] $} \begin{subequations} \label{grp:alphaMHD:weakSol_integralFormulation} \begin{align} & \left( \boldsymbol{\boldsymbol{v}}\left( t\right) ,\boldsymbol{w}\right) -\left( \boldsymbol{\boldsymbol{v}}\left( t_{0}\right) ,\boldsymbol{w} \right) +\int_{t_{0}}^{t}\left\langle \tilde{B}\left( \boldsymbol{u}\left( s\right) ,\boldsymbol{\boldsymbol{v}}\left( s\right) \right) ,\boldsymbol{w} \right\rangle _{D\left( A\right) ^{\prime }}ds+\nu \int_{t_{0}}^{t}\left( \boldsymbol{v}\left( s\right) ,A\boldsymbol{w}\right) ds \label{eq:alphaMHD:weakSol_integralFormulation_vel} \\ & \qquad \qquad =\int_{t_{0}}^{t}\left\langle B\left( \boldsymbol{B}\left( s\right) ,\boldsymbol{B}\left( s\right) \right) ,\boldsymbol{w}\right\rangle _{V^{\prime }}ds, \notag \\ & \left( \boldsymbol{B}\left( t\right) ,\boldsymbol{\xi }\right) -\left( \boldsymbol{B}\left( t_{0}\right) ,\boldsymbol{\xi }\right) +\int_{t_{0}}^{t}\left( B\left( \boldsymbol{u}\left( s\right) ,\boldsymbol{B} \left( s\right) \right) ,\boldsymbol{\xi }\right) ds-\int_{t_{0}}^{t}\left( B\left( \boldsymbol{B}\left( s\right) ,\boldsymbol{u}\left( s\right) \right) ,\boldsymbol{\xi }\right) ds \label{eq:alphaMHD:xeakSol_integralFormulation_magfld} \\ & \qquad \qquad +\eta \int_{t_{0}}^{t}\left( \left( \boldsymbol{B}\left( s\right) ,\boldsymbol{\xi }\right) \right) ds=0. 
\notag \end{align} \end{subequations} When $\boldsymbol{u}^{in}\in D(A)$ (or equivalently $\boldsymbol{v}^{in}\in H$) and $\boldsymbol{B}^{in}\in V$ we say that the solution is a strong solution of \eqref{grp:alphaMHD:Projected} on the interval $[0,T]$ if it satisfies
\begin{align*}
\boldsymbol{B}\in C\left( \left[ 0,T\right] ;V\right) \cap L^{2}\left( \left[ 0,T\right] ;D(A)\right), \,\, \boldsymbol{u}\in C\left( \left[ 0,T\right] ;D(A)\right) \cap L^{2}( \left[ 0,T\right] ;D(A^{3/2}))
\end{align*}
(or equivalently $\boldsymbol{v}\in C\left( \left[ 0,T\right] ;H\right) \cap L^{2}\left( \left[ 0,T\right] ;V\right)$).
\end{definition}
\section{Global existence and uniqueness}
In this section we show the global well-posedness of the MHD-$\alpha $ model \eqref{grp:alphaMHD}, or equivalently \eqref{grp:alphaMHD:Projected}.
\begin{theorem}
\label{thm:alphaMHD:weakSol} Let $\boldsymbol{u}^{in}\in V,\,\boldsymbol{B}^{in}\in H$. Then for any $T>0$ there exists a unique weak solution $\boldsymbol{u},\boldsymbol{B}$ of \eqref{grp:alphaMHD:Projected} on $\left[ 0,T\right] $. Moreover, this solution satisfies
\begin{equation*}
\boldsymbol{u}\in L_{loc}^{\infty }\left( \left( 0,T\right] ;H^{3}\left( \Omega \right) \right),
\end{equation*}
as well as the energy equality
\begin{multline}\label{eq:alphaMHD:weakSol_energyEquality}
\lnorm{ \boldsymbol{u}\left( t\right) } ^{2}+\alpha ^{2}\vnorm{ \boldsymbol{u}\left( t\right) } ^{2}+\lnorm{ \boldsymbol{B}\left( t\right) } ^{2} +2 \int_{t_0}^{t}\left( \nu(\vnorm{ \boldsymbol{u}(s)} ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}(s)} ^{2})+\eta\vnorm{ \boldsymbol{B}(s)} ^{2}\right)ds \\
= \lnorm{ \boldsymbol{u}\left( t_0\right) } ^{2}+\alpha ^{2}\vnorm{ \boldsymbol{u}\left( t_0\right) } ^{2}+\lnorm{ \boldsymbol{B}\left( t_0\right) } ^{2}, \qquad 0\leq t_0\leq t \leq T.
\end{multline}
\end{theorem}
We use the Galerkin approximation scheme to prove the global existence and to establish the necessary \textit{a priori} estimates. Let $ \{w_{j}\}_{j=1}^{\infty }$ be an orthonormal basis of $H$ consisting of eigenfunctions of the operator $A$. Denote $H_m=\operatorname{span}\{w_1 ,\ldots, w_m\}$ and let $P_{m}$ be the $L^{2}$-orthogonal projection from $H$ onto $H_{m}$. The Galerkin approximation of \eqref{grp:alphaMHD:Projected} is the ordinary differential system
\begin{subequations}
\label{grp:alphaMHD:Galerkin}
\begin{align}
& \frac{d\boldsymbol{v}_{m}}{dt}+P_{m}\tilde{B}\left( \boldsymbol{u}_{m}, \boldsymbol{v}_{m}\right) +\nu A\boldsymbol{v}_{m}=P_{m}B\left( \boldsymbol{B}_{m}, \boldsymbol{B}_{m}\right) \label{eq:alphaMHD:Galerkin:velocity} \\
& \frac{d\boldsymbol{B}_{m}}{dt}+P_{m}B\left( \boldsymbol{u}_{m},\boldsymbol{B}_{m}\right) -P_{m}B\left( \boldsymbol{B}_{m},\boldsymbol{u}_{m}\right) +\eta A\boldsymbol{B}_{m}=0 \label{eq:alphaMHD:Galerkin:magField} \\
& \boldsymbol{v}_{m}=\boldsymbol{u}_{m}+\alpha ^{2}A\boldsymbol{u}_{m} \\
& \boldsymbol{u}_{m}\left( 0\right) =P_{m}\boldsymbol{u}^{in} \\
& \boldsymbol{B}_{m}\left( 0\right) =P_{m}\boldsymbol{B}^{in}.
\end{align}
\end{subequations}
Since the nonlinear terms are quadratic, and hence locally Lipschitz, the classical theory of ordinary differential equations implies that system \eqref{grp:alphaMHD:Galerkin} has a unique solution on a short interval of time $(-\tau _{m},T_{m})$. Our goal is to show that the solutions of \eqref{grp:alphaMHD:Galerkin} remain finite for all positive times, which implies that $T_{m}=\infty $.
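Before turning to the estimates we remark that, in the periodic setting, the Galerkin system \eqref{grp:alphaMHD:Galerkin} is nothing but a spectral (Fourier) truncation of \eqref{grp:alphaMHD:Projected}, since the eigenfunctions of $A$ are divergence-free trigonometric polynomials. The following sketch is meant only to make the structure of this truncated system concrete; it is our own illustration and not part of the proof, and the truncation by a square grid of modes, the explicit Euler step, the omission of dealiasing, and all parameter values and function names are assumptions made for this example.
\begin{verbatim}
# A minimal pseudo-spectral sketch of the truncated MHD-alpha system
# (grp:alphaMHD:Galerkin) on the periodic box [0, 2*pi]^3.  Illustrative
# only: parameter values, names, the forward-Euler step and the absence of
# dealiasing are assumptions made for this example.
import numpy as np

N, alpha, nu, eta, dt = 16, 0.1, 1e-2, 1e-2, 1e-3
k1d = np.fft.fftfreq(N, 1.0 / N)                      # integer wavenumbers
KX, KY, KZ = np.meshgrid(k1d, k1d, k1d, indexing="ij")
K = np.stack([KX, KY, KZ])                            # shape (3, N, N, N)
K2 = (K ** 2).sum(axis=0)
K2_safe = np.where(K2 == 0, 1.0, K2)                  # avoid division by zero

def leray_project(F_hat):
    """Helmholtz-Leray projection P_sigma: remove from each Fourier mode
    the component parallel to k (i.e. the gradient part)."""
    div = (K * F_hat).sum(axis=0) / K2_safe
    return F_hat - K * div

def to_phys(F_hat):
    return np.stack([np.fft.ifftn(F_hat[i]).real for i in range(3)])

def advect(a, F_hat):
    """Fourier coefficients of (a . nabla) F, with a given in physical space."""
    out = []
    for i in range(3):
        dF = [np.fft.ifftn(1j * K[d] * F_hat[i]).real for d in range(3)]
        out.append(np.fft.fftn(a[0] * dF[0] + a[1] * dF[1] + a[2] * dF[2]))
    return np.stack(out)

def rhs(u_hat, B_hat):
    """Right-hand side of the projected MHD-alpha equations for (u_hat, B_hat)."""
    v_hat = (1.0 + alpha ** 2 * K2) * u_hat           # v = (1 - alpha^2 Laplacian) u
    u, B = to_phys(u_hat), to_phys(B_hat)
    # Btilde(u, v) = P_sigma[(curl v) x u] = P_sigma[(u.nabla)v + sum_j v_j grad u_j]
    term = advect(u, v_hat)
    for j in range(3):
        vj = np.fft.ifftn(v_hat[j]).real
        term += np.stack([np.fft.fftn(vj * np.fft.ifftn(1j * K[i] * u_hat[j]).real)
                          for i in range(3)])
    dv_hat = -leray_project(term) - nu * K2 * v_hat + leray_project(advect(B, B_hat))
    dB_hat = (-leray_project(advect(u, B_hat)) + leray_project(advect(B, u_hat))
              - eta * K2 * B_hat)
    du_hat = dv_hat / (1.0 + alpha ** 2 * K2)         # invert the Helmholtz filter
    return du_hat, dB_hat

# Example: divergence-free, zero-mean random initial data and one Euler step.
rng = np.random.default_rng(0)
u_hat = leray_project(np.fft.fftn(rng.standard_normal((3, N, N, N)), axes=(1, 2, 3)))
B_hat = leray_project(np.fft.fftn(rng.standard_normal((3, N, N, N)), axes=(1, 2, 3)))
u_hat[:, 0, 0, 0] = 0.0
B_hat[:, 0, 0, 0] = 0.0
du_hat, dB_hat = rhs(u_hat, B_hat)
u_hat, B_hat = u_hat + dt * du_hat, B_hat + dt * dB_hat
\end{verbatim}
In this notation, the estimate \eqref{eq:alphaMHD:H1_estimate} below states that the quantity $\lnorm{ \boldsymbol{u}_{m}} ^{2}+\alpha ^{2}\vnorm{ \boldsymbol{u}_{m}} ^{2}+\lnorm{ \boldsymbol{B}_{m}} ^{2}$ remains bounded uniformly in the number of retained modes.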
\subsection{ $H^{1}$-Estimate of $\boldsymbol{u}_{m}$, $L^{2}$-Estimate of $ \boldsymbol{B}_{m}$}
We take the inner product of \eqref{eq:alphaMHD:Galerkin:velocity} with $\boldsymbol{u}_{m}$ and the inner product of \eqref{eq:alphaMHD:Galerkin:magField} with $\boldsymbol{B}_{m}$ and use \eqref{eq:B_id2}, \eqref{eq:Btilda_id2} and \eqref{eq:B_id1} to obtain
\begin{subequations}
\begin{align}
& \frac{1}{2}\frac{d}{dt}\left( \lnorm{ \boldsymbol{u}_{m}} ^{2}+\alpha ^{2}\vnorm{ \boldsymbol{u}_{m}} ^{2}\right) +\nu \left( \vnorm{ \boldsymbol{u}_{m}} ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}_{m}} ^{2}\right) =\left( B\left( \boldsymbol{B}_{m}, \boldsymbol{B}_{m}\right) ,\boldsymbol{u}_{m}\right) , \label{eq:alphaMHD:velocity:inner_product_u_m} \\
& \frac{1}{2}\frac{d}{dt}\lnorm{ \boldsymbol{B}_{m}} ^{2}+\eta \vnorm{ \boldsymbol{B}_{m}} ^{2}=-\left( B\left( \boldsymbol{B}_{m},\boldsymbol{B}_{m}\right) ,\boldsymbol{u}_{m}\right) . \label{eq:alphaMHD:magField:inner_product_B_m}
\end{align}
\end{subequations}
Now, by summing up \eqref{eq:alphaMHD:velocity:inner_product_u_m} and \eqref{eq:alphaMHD:magField:inner_product_B_m}, we have
\begin{equation}\label{eq:alphaMHD:u+B}
\frac{1}{2}\frac{d}{dt}\left( \lnorm{ \boldsymbol{u}_{m}} ^{2}+\alpha ^{2}\vnorm{ \boldsymbol{u}_{m}} ^{2}+\lnorm{ \boldsymbol{B}_{m}} ^{2}\right) +\nu \left( \vnorm{ \boldsymbol{u}_{m}} ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}_{m}} ^{2}\right) +\eta \vnorm{ \boldsymbol{B}_{m}} ^{2}=0.
\end{equation}
We denote $\mu =\min \left\{ \nu ,\eta \right\} $ and obtain
\begin{equation}
\frac{1}{2}\frac{d}{dt}\left( \lnorm{ \boldsymbol{u}_{m}} ^{2}+\alpha ^{2}\vnorm{ \boldsymbol{u}_{m}} ^{2}+\lnorm{ \boldsymbol{B}_{m}} ^{2}\right) +\mu \left( \vnorm{ \boldsymbol{u}_{m}} ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}_{m}} ^{2}+\vnorm{ \boldsymbol{B}_{m}} ^{2}\right) \leq 0. \label{eq:alphaMHD:u+B inequality}
\end{equation}
Using Poincar\'{e}'s inequality we get
\begin{equation*}
\frac{d}{dt}\left( \lnorm{ \boldsymbol{u}_{m}} ^{2}+\alpha ^{2}\vnorm{ \boldsymbol{u}_{m}} ^{2}+\lnorm{ \boldsymbol{B}_{m}} ^{2}\right) +2\mu \lambda _{1}\left( \lnorm{ \boldsymbol{u}_{m}} ^{2}+\alpha ^{2}\vnorm{ \boldsymbol{u}_{m}} ^{2}+\lnorm{ \boldsymbol{B}_{m}} ^{2}\right) \leq 0,
\end{equation*}
and then by Gronwall's inequality we obtain
\begin{equation*}
\lnorm{ \boldsymbol{u}_{m}\left( t\right) } ^{2}+\alpha ^{2}\vnorm{ \boldsymbol{u}_{m}\left( t\right) } ^{2}+\lnorm{ \boldsymbol{B}_{m}\left( t\right) } ^{2}\leq e^{-2\mu \lambda _{1}t}\left( \lnorm{ \boldsymbol{u}_{m}\left( 0\right) } ^{2}+\alpha ^{2}\vnorm{ \boldsymbol{u}_{m}\left( 0\right) } ^{2}+\lnorm{ \boldsymbol{B}_{m}\left( 0\right) } ^{2}\right) .
\end{equation*}
Hence
\begin{equation}\label{eq:alphaMHD:H1_estimate}
\lnorm{ \boldsymbol{u}_{m}\left( t\right) } ^{2}+\alpha ^{2}\vnorm{ \boldsymbol{u}_{m}\left( t\right) } ^{2}+\lnorm{ \boldsymbol{B}_{m}\left( t\right) } ^{2}\leq k_{1}:=\lnorm{ \boldsymbol{u}^{in}} ^{2}+\alpha ^{2}\vnorm{ \boldsymbol{u}^{in}} ^{2}+\lnorm{ \boldsymbol{B}^{in}} ^{2},
\end{equation}
for all $t\geq 0$. This implies that $T_m=\infty$. Indeed, consider $[0,T_m^{max})$, the maximal interval of existence. Either $T_m^{max}=\infty$ and we are done, or $T_m^{max}<\infty$ and we have $\lim \sup_{t\rightarrow \left(T_m^{max}\right)^{-} } \left(\lnorm{ \boldsymbol{u}_{m}\left( t\right) } ^{2}+\lnorm{ \boldsymbol{B}_{m}\left( t\right) } ^{2}\right)=\infty$, a contradiction to \eqref{eq:alphaMHD:H1_estimate}.
Hence we have global existence of $\boldsymbol u_m,\,\boldsymbol B_m$, and hereafter we take an arbitrary interval $[0,T]$. Integrating \eqref{eq:alphaMHD:u+B} over the interval $\left( s,t\right) $ and using the estimate \eqref{eq:alphaMHD:H1_estimate} we obtain that, for all $0\leq s\leq t$, \begin{equation} 2 \int_{s}^{t}\left( \nu(\vnorm{ \boldsymbol{u}_{m}(\tau)} ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}_{m}(\tau)} ^{2})+\eta\vnorm{ \boldsymbol{B} _{m}(\tau)} ^{2}\right)d\tau \leq k_{1}. \label{eq:alphaMHD:integral_H2_estimate} \end{equation} \subsection{ $H^{2}$-Estimate of $\boldsymbol{u}_{m}$, $H^{1}$-Estimate of $ \boldsymbol{B}_{m}$} By taking the inner product of \eqref{eq:alphaMHD:Galerkin:velocity} with $A \boldsymbol{u}_{m}$ and the inner product of \eqref{eq:alphaMHD:Galerkin:magField} with $A\boldsymbol{B}_{m}$ we have \begin{subequations} \begin{align} & \frac{1}{2}\frac{d}{dt}\left( \vnorm{ \boldsymbol{u}_{m}} ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}_{m}} ^{2}\right) +\nu \left( \lnorm{ A\boldsymbol{u}_{m}} ^{2}+\alpha ^{2}\lnorm{ A^{3/2}\boldsymbol{u}_{m}} ^{2}\right) =\left( B\left( \boldsymbol{B}_{m},\boldsymbol{B}_{m}\right) ,A\boldsymbol{u}_{m}\right) -\left( \tilde{B}\left( \boldsymbol{u}_{m},\boldsymbol{v}_{m}\right) ,A\boldsymbol{u}_{m}\right) , \label{eq:alphaMHD:velocity:inner_product_Au_m} \\ & \frac{1}{2}\frac{d}{dt}\vnorm{ \boldsymbol{B}_{m}} ^{2}+\eta \lnorm{ A\boldsymbol{B}_{m}} ^{2}=\left( B\left( \boldsymbol{B}_{m},\boldsymbol{u}_{m}\right) ,A\boldsymbol{B}_{m}\right) -\left( B\left( \boldsymbol{u}_{m},\boldsymbol{B}_{m}\right) ,A\boldsymbol{B}_{m}\right) . \label{eq:alphaMHD:magField:inner_product_AB_m} \end{align} \end{subequations} First, we estimate the nonlinear terms. By \eqref{eq:Btilde_estimate_V_V_V} we have \begin{equation} \abs{ \left( \tilde{B}\left( \boldsymbol{u}_{m},\boldsymbol{v} _{m}\right) ,A\boldsymbol{u}_{m}\right) } \leq c\left( \lambda _{1}^{-1}+\alpha ^{2}\right) \vnorm{ \boldsymbol{u}_{m}} \lnorm{ A\boldsymbol{u}_{m}} ^{1/2}\lnorm{ A^{3/2} \boldsymbol{u}_{m}} ^{3/2}. \label{eq:Btilde_estimate_um_vm_Aum} \end{equation} To bound the term $\abs{ \left( B\left( \boldsymbol{B}_{m}, \boldsymbol{B}_{m}\right) ,A\boldsymbol{u}_{m}\right) } $ we use \eqref{eq:B_estimate_DA_H_V} \begin{equation} \abs{ \left( B\left( \boldsymbol{B}_{m},\boldsymbol{B}_{m}\right) ,A \boldsymbol{u}_{m}\right) } \leq c\vnorm{ \boldsymbol{B} _{m}} ^{1/2}\lnorm{ A\boldsymbol{B}_{m}} ^{1/2}\lnorm{ \boldsymbol{B}_{m}} \lnorm{ A^{3/2}\boldsymbol{ u}_{m}} . \label{eq:B_estimate_Bm_Bm_Aum} \end{equation} By \eqref{eq:BandBtilde_estimate_DA_V_H} we have \begin{equation} \abs{ \left( B\left( \boldsymbol{B}_{m},\boldsymbol{u}_{m}\right) ,A \boldsymbol{B}_{m}\right) } \leq c\vnorm{ \boldsymbol{B} _{m}} ^{1/2}\vnorm{ \boldsymbol{u}_{m}} \lnorm{ A \boldsymbol{B}_{m}} ^{3/2} \label{eq:B_estimate_Bm_um_ABm} \end{equation} and by \eqref{eq:B_estimate_V_DA_H} \begin{equation} \label{eq:B_estimate_um_Bm_ABm} \abs{ \left( B\left( \boldsymbol{u}_{m},\boldsymbol{B}_{m}\right) ,A \boldsymbol{B}_{m}\right) } \leq c\vnorm{ \boldsymbol{B} _{m}} ^{1/2}\vnorm{ \boldsymbol{u}_{m}} \lnorm{ A \boldsymbol{B}_{m}} ^{3/2}. 
\end{equation} Now, summing up \eqref{eq:alphaMHD:velocity:inner_product_Au_m} and \eqref{eq:alphaMHD:magField:inner_product_AB_m}, we obtain \begin{multline} \label{eq:alphaMHD:sum:inner_product_A} \frac{1}{2}\frac{d}{dt}\left( \vnorm{ \boldsymbol{u}_{m}} ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}_{m}} ^{2}+\vnorm{ \boldsymbol{B}_{m}} ^{2}\right) +\nu \left( |{A\boldsymbol{u}_{m}|} ^{2}+\alpha ^{2}|{A^{3/2}\boldsymbol{u}_{m}|}^{2}\right) +\eta |{A \boldsymbol{B}_{m}|}^{2} \\ =\left( B\left( \boldsymbol{B}_{m},\boldsymbol{B}_{m}\right) ,A\boldsymbol{u} _{m}\right) -\left( \tilde{B}\left( \boldsymbol{u}_{m},\boldsymbol{v} _{m}\right) ,A\boldsymbol{u}_{m}\right) +\left( B\left( \boldsymbol{B}_{m}, \boldsymbol{u}_{m}\right) ,A\boldsymbol{B}_{m}\right) -\left( B\left( \boldsymbol{u}_{m},\boldsymbol{B}_{m}\right) ,A\boldsymbol{B}_{m}\right) . \end{multline} By \eqref{eq:Btilde_estimate_um_vm_Aum}, \eqref{eq:B_estimate_Bm_Bm_Aum}, \eqref{eq:B_estimate_Bm_um_ABm} and \eqref{eq:B_estimate_um_Bm_ABm} and several applications of Young's inequality we reach \begin{multline}\label{eq:alphaMHD:u_m_inequality:H2} \frac{d}{dt}\left( \vnorm{ \boldsymbol{u}_{m}} ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}_{m}} ^{2}+\vnorm{ \boldsymbol{B} _{m}} ^{2}\right) +\nu \left( |{A\boldsymbol{u}_{m}|}^{2}+\alpha ^{2}|{A^{3/2}\boldsymbol{u}_{m}|}^{2}\right) +\eta |{A\boldsymbol{B}_{m}|} ^{2} \\ \leq c(\alpha ^{2}\nu )^{-3}\left( \lambda _{1}^{-1}+\alpha ^{2}\right) ^{4}\vnorm{ \boldsymbol{u}_{m}} ^{4}|{A\boldsymbol{u}_{m}|} ^{2}+c(\alpha ^{2}\nu )^{-2}\eta ^{-1}\vnorm{ \boldsymbol{B} _{m}} ^{2}\lnorm{ \boldsymbol{B}_{m}} ^{4}+c\eta ^{-3}\vnorm{ \boldsymbol{B}_{m}} ^{2}\vnorm{ \boldsymbol{u} _{m}} ^{4}, \end{multline} Integrating over $(s,t)$ and using \eqref{eq:alphaMHD:H1_estimate}, \eqref{eq:alphaMHD:integral_H2_estimate} we obtain \begin{multline}\label{eq:alphaMHD:H2_estimate_of_u, with_u(s)} \vnorm{ \boldsymbol{u}_{m}\left( t\right) } ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}_{m}\left( t\right) } ^{2}+\vnorm{ \boldsymbol{B}_{m}\left( t\right) } ^{2} + {\int_{s}^{t}}\left( {\nu \left( |{A\boldsymbol{u}_{m}}\left( \tau \right) {| }^{2}+\alpha ^{2}|{A^{3/2}\boldsymbol{u}_{m}\left( \tau \right) |} ^{2}\right) +\eta |{A\boldsymbol{B}_{m}\left( \tau \right) |}^{2}}\right) d\tau \\ \leq \vnorm{ \boldsymbol{u}_{m}\left( s\right) } ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}_{m}\left( s\right) } ^{2}+\vnorm{ \boldsymbol{B}_{m}\left( s\right) } ^{2}+{K}_{1}, \end{multline} where we denote \begin{equation*} {K}_{1} := c\left( \left( \lambda _{1}^{-1}+\alpha ^{2}\right) ^{4}\nu ^{-4}\alpha ^{-12}+\eta ^{-2}\alpha ^{-4}\left( \nu ^{-2}+\eta ^{-2}\right) \right) k_{1}^{3}. \end{equation*} \begin{enumerate} \item Now, if $\boldsymbol{u}^{in}\in D\left( A\right) $, $\boldsymbol{B} ^{in}\in V$, we have \begin{multline} \vnorm{ \boldsymbol{u}_{m}\left( t\right) } ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}_{m}\left( t\right) } ^{2}+\vnorm{ \boldsymbol{B}_{m}\left( t\right) } ^{2}\\ +{\int_{0}^{t}}\left( {\nu \left( |{A\boldsymbol{u}_{m}}\left( \tau \right) {| }^{2}+\alpha ^{2}|{A^{3/2}\boldsymbol{u}_{m}\left( \tau \right) |} ^{2}\right) +\eta |{A\boldsymbol{B}_{m}\left( \tau \right) |}^{2}}\right) d\tau \\ \leq \vnorm{ {\boldsymbol{u}}^{in}} ^{2}+\alpha ^{2}\lnorm{ A \boldsymbol{u}^{in}} ^{2}+\vnorm{ \boldsymbol{B} ^{in}} ^{2}+{K}_{1}:={k}_{2}. 
\label{eq:alphaMHD:H2 and integral_H3 estimates,uin_in_D(A),Bin_in_V} \end{multline}
\item Otherwise, if $\boldsymbol{u}^{in}\notin D\left( A\right) $, $\boldsymbol{B}^{in}\notin V$, we integrate
\begin{equation*}
\vnorm{ \boldsymbol{u}_{m}\left( t\right) } ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}_{m}\left( t\right) } ^{2}+\vnorm{ \boldsymbol{B}_{m}\left( t\right) } ^{2} \leq \vnorm{ \boldsymbol{u}_{m}\left( s\right) } ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}_{m}\left( s\right) } ^{2}+\vnorm{ \boldsymbol{B}_{m}\left( s\right) } ^{2}+{K}_{1}
\end{equation*}
with respect to $s$ over $ (0,t)$ and use \eqref{eq:alphaMHD:integral_H2_estimate} to obtain
\begin{equation*}
t\left( \vnorm{ \boldsymbol{u}_{m}\left( t\right) } ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}_{m}\left( t\right) } ^{2}+\vnorm{ \boldsymbol{B}_{m}\left( t\right) } ^{2}\right) \leq \frac{1}{2\mu }k_{1}+{K}_{1}t,
\end{equation*}
hence for $t>0$
\begin{equation}
\vnorm{ \boldsymbol{u}_{m}\left( t\right) } ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}_{m}\left( t\right) } ^{2}+\vnorm{ \boldsymbol{B}_{m}\left( t\right) } ^{2}\leq {K}_{1}+ \frac{1}{2t}\mu ^{-1}k_{1}:={k}_{2}\left( t\right) , \label{eq:alphaMHD:H2_estimate_of_u,H1_estimate_of_B}
\end{equation}
and thus
\begin{equation}
\int_{s}^{t}\left( \nu \left( \lnorm{ A\boldsymbol{u}_{m}\left( \tau \right) } ^{2}+\alpha ^{2}\lnorm{ A^{3/2}\boldsymbol{u}_{m}\left( \tau \right) } ^{2}\right) +\eta \lnorm{ A\boldsymbol{B}_{m}\left( \tau \right) } ^{2}\right) d\tau \leq 2 {K}_{1}+\frac{1}{2s}k_{1}\mu ^{-1}={K}_{1}+{k}_{2}\left( s\right) . \label{eq:alphaMHD:integral_H3_estimate}
\end{equation}
\end{enumerate}
\subsection{\protect $H^{3}$-Estimate of $\boldsymbol{u}_{m}$}
We establish a uniform upper bound for the $H^{3}$-norm of $\boldsymbol{u}_{m}$ by estimating the vorticity \mbox{$\boldsymbol{q}_{m}=\nabla \times \boldsymbol{v}_{m}$}. The Galerkin approximation \eqref{eq:alphaMHD:Galerkin:velocity} is equivalent to
\begin{equation*}
\frac{d\boldsymbol{v}_{m}}{dt}+\nu A\boldsymbol{v}_{m}-P_{m}\left( \boldsymbol{u}_{m}\times \boldsymbol{q}_{m}\right) =P_{m}B\left( \boldsymbol{B}_{m},\boldsymbol{B}_{m}\right) . \label{eq:alphaMHD:Galerkin:vorticity}
\end{equation*}
Taking the curl of the above equation we obtain
\begin{equation}\label{eq:alphaMHD:q:curl}
\frac{d\boldsymbol{q}_{m}}{dt}+\nu A\boldsymbol{q}_{m}-\nabla \times P_{m}\left( \boldsymbol{u}_{m}\times \boldsymbol{q}_{m}\right) =\nabla \times P_{m}B\left( \boldsymbol{B}_{m},\boldsymbol{B}_{m}\right) .
\end{equation}
We use the fact that, under periodic boundary conditions,
\begin{equation}\label{eq:gen_3D_vector_id_per_bnd_cond}
\int_\Omega (\nabla \times \phi)\cdot \psi dx = \int_\Omega \phi \cdot (\nabla \times \psi) dx
\end{equation}
and that, for divergence-free vectors,
\begin{equation}\label{eq:gen_3D_vector_div_free}
\nabla \times(\phi \times \psi) =-(\phi\cdot\nabla)\psi+(\psi\cdot\nabla)\phi.
\end{equation}
Taking the inner product of \eqref{eq:alphaMHD:q:curl} with $\boldsymbol{q}_{m}$, using that $\nabla \cdot \boldsymbol{q}_{m}=0$ and the identities \eqref{eq:gen_3D_vector_id_per_bnd_cond}, \eqref{eq:gen_3D_vector_div_free} and \eqref{eq:B_id2}, we reach
\begin{equation*}
\frac{1}{2}\frac{d}{dt}\lnorm{ \boldsymbol{q}_{m}} ^{2}+\nu \vnorm{ \boldsymbol{q}_{m}} ^{2}=\left( B\left( \boldsymbol{q}_{m},\boldsymbol{u}_{m}\right) ,\boldsymbol{q}_{m}\right) +\left( B\left( \boldsymbol{B}_{m},\boldsymbol{B}_{m}\right) ,\nabla \times \boldsymbol{q}_{m}\right) .
\end{equation*} We bound the right hand side using \eqref{eq:BandBtilde_estimate_V_V_V}, Young's inequality and \eqref{eq:alphaMHD:H1_estimate} \begin{align*} \abs{ \left( B\left( \boldsymbol{q}_{m},\boldsymbol{u}_{m}\right) , \boldsymbol{q}_{m}\right) } & \leq c\lnorm{ \boldsymbol{q} _{m}} ^{1/2}\vnorm{ \boldsymbol{u}_{m}} \vnorm{ \boldsymbol{q}_{m}} ^{3/2} \\ & \leq c\nu ^{-3}\alpha ^{-4}k_{1}^{2}\lnorm{ \boldsymbol{q} _{m}} ^{2}+\frac{\nu }{4}\vnorm{ \boldsymbol{q}_{m}} ^{2}\text{ } \end{align*} and by \eqref{eq:BandBtilde_estimate_DA_V_H} \begin{align}\label{eq:alphaMHD:vorticity_BtildaEstimate} \abs{ \left( B\left( \boldsymbol{B}_{m},\boldsymbol{B}_{m}\right) ,\nabla \times \boldsymbol{q}_{m}\right) } & \leq c\vnorm{ \boldsymbol{B}_{m}} ^{3/2}\lnorm{ A\boldsymbol{B} _{m}} ^{1/2}\vnorm{ \boldsymbol{q}_{m}} \\ & \leq c\nu ^{-1}\lnorm{ A\boldsymbol{B}_{m}} ^{2}+\vnorm{ \boldsymbol{B}_{m}} ^{6}+\frac{\nu }{4}\vnorm{ \boldsymbol{q} _{m}} ^{2}. \notag \end{align} Note that since $\nabla \cdot \boldsymbol{v}_{m}=0$ and due to the periodic boundary conditions we have \begin{equation*} \lnorm{ \boldsymbol{q}_{m}} =\lnorm{ \nabla \times \boldsymbol{v}_{m}} =\lnorm{ \nabla \boldsymbol{v} _{m}} =\vnorm{ \boldsymbol{v}_{m}} , \end{equation*} hence \begin{equation} \lnorm{ \boldsymbol{q}_{m}} ^{2}\leq \vnorm{ \boldsymbol{ \boldsymbol{u}}_{m}+\alpha ^{2}A\boldsymbol{\boldsymbol{u}}_{m}} ^{2}\leq \left( \lambda _{1}^{-1}+\alpha ^{2}\right) ^{2}\lnorm{ A^{3/2} \boldsymbol{\boldsymbol{u}}_{m}} ^{2}\text{.} \label{eq:alphaMHD:vorticityL2norm} \end{equation} Hence we obtain \begin{equation} \frac{1}{2}\frac{d}{dt}\lnorm{ \boldsymbol{q}_{m}} ^{2}+\frac{ \nu }{2}\vnorm{ \boldsymbol{q}_{m}} ^{2}\leq c\nu ^{-3}\alpha ^{-4}k_{1}^{2}\left( \lambda _{1}^{-1}+\alpha ^{2}\right) ^{2}\lnorm{ A^{3/2}\boldsymbol{\boldsymbol{u}}_{m}} ^{2}+c\nu ^{-1}\lnorm{ A \boldsymbol{B}_{m}} ^{2}+\vnorm{ \boldsymbol{B}_{m}} ^{6}. \label{eq:alphaMHD:H3 u+B inequality} \end{equation} In the following we denote by $c_{i}$ some constants depending on $\nu,\eta ,\alpha ,k_{1},\lambda _{1}$. Integrating over $\left( s,t\right) $ and using \eqref{eq:alphaMHD:integral_H3_estimate} and \eqref{eq:alphaMHD:H2_estimate_of_u,H1_estimate_of_B} we have \begin{equation} \lnorm{ \boldsymbol{q}_{m}\left( t\right) } ^{2}\leq \lnorm{ \boldsymbol{q}_{m}\left( s\right) } ^{2}+c_{0}\left( 2{K}_{1} +\frac{1}{2s}k_{1}\mu ^{-1}\right) +2\int_{s}^{t}\left( {K_{1}+\frac{1 }{2\tau }k_{1}\mu ^{-1}}\right) ^{3}d\tau . \label{eq:alphaMHD:vorticity_intermediate} \end{equation} We integrate this expression with respect to $s$ over $\left( \frac{t}{2} ,t\right) $, $t>0$ and use \eqref{eq:alphaMHD:integral_H3_estimate}, \eqref{eq:alphaMHD:vorticityL2norm} to obtain \begin{equation} {\lnorm{ \boldsymbol{q}_{m}\left( t\right) } ^{2}\leq \frac{1}{2 }K_{1}^{3}t+c_{1}+\frac{c_{2}}{t}+\frac{c_{3}}{t^{2}}} \label{eq:alphaMHD:H3_estimate_unbounded} \end{equation} For $t>\frac{1}{\nu \lambda _{1}}$ we integrate \eqref{eq:alphaMHD:vorticity_intermediate} with respect to $s$ over the interval $\left( t-\frac{1}{\nu \lambda _{1}},t\right) $. Note that, by applying also \eqref{eq:alphaMHD:integral_H3_estimate} and \eqref{eq:alphaMHD:vorticityL2norm}, we have \begin{equation} \lnorm{ \boldsymbol{q}_{m}\left( t\right) } ^{2}\leq c_{4}+c_{5} {\left( t-\frac{1}{\nu \lambda _{1}}\right) }^{-1}+c_{6}{\ln }\left( {1- \frac{1}{\nu \lambda _{1}t}}\right) ^{-1}. 
\label{eq:alphaMHD:H3_estimate_bounded} \end{equation} From \eqref{eq:alphaMHD:H3_estimate_unbounded} and \eqref{eq:alphaMHD:H3_estimate_bounded} we have, for $t>0$, \begin{equation} \label{eq:alphaMHD:H3_estimate} \lnorm{ \boldsymbol{q}_{m}\left( t\right) } ^{2}\leq k_{3}\left( t\right), \end{equation} where $k_{3}\left( t\right) $ has the following properties \begin{enumerate} \item $k_{3}\left( t\right) $ is finite for all $t>0$; \item $k_{3}\left( t\right) $ is independent of $m$; \item If either $\boldsymbol{\boldsymbol{u}}^{in}\notin D\left( A^{3/2}\right) $ or $\boldsymbol{B}^{in}\notin V$, then $k_{3}\left( t\right) $ depends on $\nu ,\eta ,\alpha ,\lnorm{ \boldsymbol{\boldsymbol{ u}}^{in}} ,\vnorm{ \boldsymbol{\boldsymbol{u}}^{in}} ,\lnorm{ \boldsymbol{B}^{in}} $ and \mbox{$\lim_{t\rightarrow 0^{+}}k_{3}\left( t\right) =\infty $}; \item $\lim \sup_{t\rightarrow \infty }k_{3}\left( t\right) =R^{2}<\infty $, $R^{2}$ depends on $\nu ,\eta ,\alpha $, but not on $\boldsymbol{\boldsymbol{ u}}^{in}$ and $\boldsymbol{B}^{in}$. \end{enumerate} Returning to \eqref{eq:alphaMHD:H3 u+B inequality} and integrating over $ \left(t,t+\tau\right)$, for $t>0$, $\tau\geq 0$ and using \eqref{eq:alphaMHD:H3_estimate} we obtain \begin{equation} \label{eq:alphaMHD:integral_H4_estimate} \nu \int_t^{t+\tau} \vnorm{ \boldsymbol{q}_m} ^2 \leq k_4(t,\tau), \end{equation} where $k_4(t,\tau)$ as a function of $t$ satisfies properties (i)-(iii) as $ k_3(t)$ above. \begin{remark} If $\boldsymbol{B}^{in}\in V$ and $\boldsymbol{u}^{in} \in D\left( A\right) $, then by \eqref{eq:alphaMHD:H2 and integral_H3 estimates,uin_in_D(A),Bin_in_V}, Young's and Poincar\'{e} inequalities we can bound \eqref{eq:alphaMHD:vorticity_BtildaEstimate} by \begin{equation*} \abs{ \left( B\left( \boldsymbol{B}_{m},\boldsymbol{B}_{m}\right) ,\nabla \times \boldsymbol{q}_{m}\right) } \leq c\nu ^{-1}\lambda _{1}^{-1/2}k_{2}\lnorm{ A\boldsymbol{B}_{m}} ^{2}+ \frac{\nu }{4}\vnorm{ \boldsymbol{q}_{m}} ^{2}. \end{equation*} Hence we have \begin{equation*} \frac{1}{2}\frac{d}{dt}\lnorm{ \boldsymbol{q}_{m}} ^{2}+\frac{ \nu }{2}\vnorm{ \boldsymbol{q}_{m}} ^{2}\leq c\nu ^{-3}\alpha ^{-4}k_{1}^{2}\left( \lambda _{1}^{-1}+\alpha ^{2}\right) ^{2}\lnorm{ A^{3/2}\boldsymbol{\boldsymbol{u}}_{m}} ^{2}+c\nu ^{-1}\lambda _{1}^{-1/2}k_{2}\lnorm{ A\boldsymbol{B}_{m}} ^{2} \end{equation*} and by integrating over $\left( 0,t\right) $ and using \eqref{eq:alphaMHD:H2 and integral_H3 estimates,uin_in_D(A),Bin_in_V} we obtain \begin{equation*} \lnorm{ \boldsymbol{q}_{m}\left( t\right) } ^{2}\leq \lnorm{ \boldsymbol{q}_{m}\left( 0\right) } ^{2}+c\nu ^{-4}\alpha ^{-6}k_{1}^{2}\left( \lambda _{1}^{-1}+\alpha ^{2}\right) ^{2}\left( {K}_{1}+{k}_{2}\right) +c\nu ^{-1}\lambda _{1}^{-1/2}k_{2}\eta ^{-1}\left( {K}_{1}+{k}_{2}\right). \end{equation*} If, additionally, $\boldsymbol{u}^{in}\in D\left( A^{3/2}\right) $, then using \eqref{eq:alphaMHD:vorticityL2norm}, we obtain \begin{equation} \lnorm{ \boldsymbol{q}_{m}\left( t\right) } ^{2}\leq \left( \lambda _{1}^{-1}+\alpha ^{2}\right) ^{2}\lnorm{ A^{3/2}\boldsymbol{u} ^{in}} ^{2}+c\nu ^{-4}\alpha ^{-6}k_{1}^{2}\left( \lambda _{1}^{-1}+\alpha ^{2}\right) ^{2}\left( {K}_{1}+{k}_{2}\right) +c\nu ^{-1}\lambda _{1}^{-1/2}k_{2}\eta ^{-1}\left( {K}_{1}+{k}_{2}\right) . \label{eq:alphaMHD:H3_estimate_uin_in_D(A3/2)} \end{equation} \end{remark} \subsection{Existence of weak solutions} Let us summarize our estimates. 
For any $T>0$ we have \begin{enumerate} \item From \eqref{eq:alphaMHD:H1_estimate} \begin{equation}\label{eq:alphaMHD:um_Linf(0,T;V)_bound} \norm{ \boldsymbol{u}_{m}} _{L^{\infty }\left( \left[ 0,T \right] ;H\right) }^{2}\leq {k_{1}}, \, \norm{ \boldsymbol{u}_{m}} _{L^{\infty }\left( \left[ 0,T \right] ;V\right) }^{2}\leq \frac{{k_{1}}}{{\alpha ^{2}}} \,\,\text{or}\,\, \norm{ \boldsymbol{v}_{m}} _{L^{\infty }\left( \left[ 0,T \right] ;V^{\prime }\right) }^{2}\leq \frac{{k_{1}}}{{\alpha ^{2}}}\left( \lambda _{1}^{-1}+\alpha ^{2}\right) ^{2}, \end{equation} \begin{equation}\label{eq:alphaMHD:Bm_Linf(0,T;H)_bound} \norm{ \boldsymbol{B}_{m}} _{L^{\infty }\left( \left[ 0,T \right] ;H\right) }^{2}\leq {k_{1}.} \end{equation} \item From \eqref{eq:alphaMHD:integral_H2_estimate} we have \begin{equation} \norm{ \boldsymbol{u}_{m}} _{L^{2}\left( \left[ 0,T \right] ;V\right) }^{2}\leq {\frac{{k_{1}}}{2{\nu }}}, \label{eq:alphaMHD:um_L2(0,T;V)_bound} \end{equation} \begin{equation} \norm{ \boldsymbol{u}_{m}} _{L^{2}\left( \left[ 0,T \right] ;D\left( A\right) \right) }^{2}\leq {\frac{{k_{1}}}{2{\nu }\alpha ^{2}}} \label{eq:alphaMHD:um_L2(0,T;D(A))_bound} \end{equation} or \begin{equation} \norm{ \boldsymbol{v}_{m}} _{L^{2}\left( \left[ 0,T \right] ;H\right) }^{2}\leq {\frac{{k_{1}}}{2{\nu }\alpha ^{2}}}\left( \lambda _{1}^{-1}+\alpha ^{2}\right) ^{2}, \label{eq:alphaMHD:vm_L2(0,T;H)_bound} \end{equation} and \begin{equation} \norm{ \boldsymbol{B}_{m}} _{L^{2}\left( \left[ 0,T \right] ;V\right) }^{2}\leq {\frac{{k_{1}}}{2{\eta }}} \label{eq:alphaMHD:Bm_L2(0,T;V)_bound}. \end{equation} \item From \eqref{eq:alphaMHD:H2_estimate_of_u,H1_estimate_of_B} we have for any $\tau \in \left( 0,T\right] $ \begin{equation*} \norm{ \boldsymbol{u}_{m}} _{L^{\infty }\left( \left[ \tau ,T\right] ;D\left( A\right) \right) }^{2}\leq \frac{{k}_{2}\left( \tau \right) }{\alpha ^{2}} \,\,\text{or}\,\, \norm{ \boldsymbol{v}_{m}} _{L^{\infty }\left( \left[ \tau ,T\right] ;H\right) }^{2}\leq \frac{{{k}_{2}\left( \tau \right) } }{{\alpha ^{2}}}\left( \lambda _{1}^{-1}+\alpha ^{2}\right) ^{2} \end{equation*} and \begin{equation*} \norm{ \boldsymbol{B}_{m}} _{L^{\infty }\left( \left[ \tau ,T\right] ;V\right) }^{2}\leq {{k}_{2}\left( \tau \right)} , \end{equation*} where ${k}_{2}\left( \tau\right) \rightarrow \infty $ as $\tau \rightarrow 0^{+}$. \end{enumerate} Now we establish uniform estimates, in $m$, for $\frac{d\boldsymbol{u}_{m}}{ dt}$, $\frac{d\boldsymbol{v}_{m}}{dt}$. Let us recall \eqref{eq:alphaMHD:Galerkin:velocity}. We have, by \eqref{eq:alphaMHD:vm_L2(0,T;H)_bound}, \begin{equation*} \norm{ A\boldsymbol{v}_{m}} _{L^{2}\left( \left[ 0,T \right] ;D\left( A\right) ^{\prime }\right) }^{2}\leq {\frac{{k_{1}}}{2{\nu } \alpha ^{2}}}\left( \lambda _{1}^{-1}+\alpha ^{2}\right) ^{2}. 
\end{equation*} Also, by \eqref{eq:Btilde_estimate_V_H_D(A)_short}, \begin{equation*} \norm{ P_{m}\tilde{B}\left( \boldsymbol{u}_{m},\boldsymbol{ \boldsymbol{v}}_{m}\right) } _{D\left( A\right) ^{\prime }}\leq c\left( \lambda _{1}\right) ^{-1/4}\vnorm{ \boldsymbol{u}_{m}} \lnorm{ \boldsymbol{\boldsymbol{v}}_{m}} , \end{equation*} hence, applying \eqref{eq:alphaMHD:um_Linf(0,T;V)_bound} and \eqref{eq:alphaMHD:vm_L2(0,T;H)_bound}, \begin{equation*} \norm{ P_{m}\tilde{B}\left( \boldsymbol{u}_{m},\boldsymbol{ \boldsymbol{v}}_{m}\right) } _{L^{2}\left( \left[ 0,T\right] ;D\left( A\right) ^{\prime }\right) }^{2}\leq c{\frac{{k_{1}^{2}}\left( \lambda _{1}^{-1}+\alpha ^{2}\right) ^{2}}{{\alpha ^{4}}{\nu }\lambda _{1}^{1/2}}.} \end{equation*} Additionally, by \eqref{eq:BandBtilde_estimate_H_V_D(A)}, we have \begin{equation*} \norm{ P_{m}B\left( \boldsymbol{B}_{m},\boldsymbol{B}_{m}\right) } _{D\left( A\right) ^{\prime }}\leq c\left( \lambda _{1}\right) ^{-1/4}\lnorm{ \boldsymbol{B}_{m}} \vnorm{ \boldsymbol{B}_{m}} , \end{equation*} therefore, using \eqref{eq:alphaMHD:Bm_Linf(0,T;H)_bound} and \eqref{eq:alphaMHD:Bm_L2(0,T;V)_bound}, we obtain \begin{equation*} \norm{ P_{m}B\left( \boldsymbol{B}_{m},\boldsymbol{B}_{m}\right) } _{L^{2}\left( \left[ 0,T\right] ;D\left( A\right) ^{\prime }\right) }^{2}\leq c\frac{{k_{1}^{2}}}{{\eta }\lambda _{1}^{1/2}}. \end{equation*} Consequently, by \eqref{eq:alphaMHD:Galerkin:velocity} and the above \begin{equation} \norm{ \frac{d\boldsymbol{\boldsymbol{v}}_{m}}{dt}} _{L^{2}\left( \left[ 0,T\right] ;D\left( A\right) ^{\prime }\right) }^{2}\leq c{\frac{{k_{1}^{2}}\left( \lambda _{1}^{-1}+\alpha ^{2}\right) ^{2} }{\alpha ^{4}{\nu }\lambda _{1}^{1/2}}}+ {\frac{{k_{1}}\left( \lambda _{1}^{-1}+\alpha ^{2}\right) ^{2}}{2\alpha ^{2}}}+c\frac{{ k_{1}^{2}}}{{\eta }\lambda _{1}^{1/2}}:=K \label{eq:alphaMHD:dvm_dt_L2(0,T;D(A)')_bound} \end{equation} and, in particular, \begin{equation} \norm{ \frac{d\boldsymbol{\boldsymbol{u}}_{m}}{dt}} _{L^{2}\left( \left[ 0,T\right] ;H\right) }^{2}\leq \frac{K}{\alpha ^{4}}. \label{eq:alphaMHD:dum_dt_L2(0,T;H)_bound} \end{equation} Now we establish uniform estimates, in $m$, for $\frac{d\boldsymbol{B}_{m}}{ dt}$. Let us recall \eqref{eq:alphaMHD:Galerkin:magField}. We have, by \eqref{eq:alphaMHD:Bm_L2(0,T;V)_bound}, \begin{equation*} \norm{ A\boldsymbol{B}_{m}} _{L^{2}\left( \left[ 0,T \right] ;V^{\prime }\right) }^{2}\leq \frac{{k_{1}}}{2{\eta}}. \end{equation*} Also, by \eqref{eq:BandBtilde_estimate_V_V_V}, \begin{equation*} \norm{ P_{m}B\left( \boldsymbol{u}_{m},\boldsymbol{B}_{m}\right) } _{V^{\prime }}\leq c\left( \lambda _{1}\right) ^{-1/4}\vnorm{ \boldsymbol{u}_{m}} \vnorm{ \boldsymbol{B} _{m}} , \end{equation*} Hence, by \eqref{eq:alphaMHD:um_Linf(0,T;V)_bound} and \eqref{eq:alphaMHD:Bm_L2(0,T;V)_bound}, \begin{align*} \norm{ P_{m}B\left( \boldsymbol{u}_{m},\boldsymbol{B}_{m}\right) } _{L^{2}\left( \left[ 0,T\right] ;V^{\prime }\right) }^{2}&\leq c\frac{{k_{1}^{2}}}{2{\alpha ^{2}\eta }\lambda _{1}^{1/2}}. \end{align*} Similarly \begin{equation*} \norm{ P_{m}B\left( \boldsymbol{B}_{m},\boldsymbol{u}_{m}\right) } _{V^{\prime }}\leq c\left( \lambda _{1}\right) ^{-1/4}\vnorm{ \boldsymbol{B}_{m}} \vnorm{ \boldsymbol{u} _{m}} \end{equation*} and \begin{equation*} \norm{ P_{m}B\left( \boldsymbol{B}_{m},\boldsymbol{u}_{m}\right) } _{L^{2}\left( \left[ 0,T\right] ;V^{\prime }\right) }^{2}\leq c\frac{{k_{1}^{2}}}{2{\alpha ^{2}\eta}\lambda _{1}^{1/2}}. 
\end{equation*} Hence, from the above and \eqref{eq:alphaMHD:Galerkin:magField}, we have \begin{equation} \norm{ \frac{d\boldsymbol{B}_{m}}{dt}} _{L^{2}\left( \left[ 0,T\right] ;V^{\prime }\right) }^{2}\leq c\frac{{k_{1}^{2}}}{\alpha ^{2}{\eta }\lambda _{1}^{1/2}}+ \frac{{k_{1}}}{2}:=\tilde{K}. \label{eq:alphaMHD:dBm_dt_L2(0,T;V')_bound} \end{equation} From \eqref{eq:alphaMHD:um_L2(0,T;D(A))_bound} and \eqref{eq:alphaMHD:dum_dt_L2(0,T;H)_bound}, using Aubin's Compactness Lemma (see, for example, \cite[Lemma 8.4]{b_CF88},\cite{b_L69} or \cite{b_T84}), we may assume that there exists a subsequence $\boldsymbol{u}_{m^{\prime }}$ of $\boldsymbol{u}_{m}$ and $\boldsymbol{u}\in L^{2}\left( \left[ 0,T\right] ;D\left( A\right) \right) \cap C\left( \left[ 0,T\right] ;H\right) $ such that \begin{subequations} \begin{align} \boldsymbol{u}_{m^{\prime }}& \rightarrow \boldsymbol{u\qquad }\text{weakly in }L^{2}\left( \left[ 0,T\right] ;D\left( A\right) \right), \label{eq:alphaMHD:um_weakConvL2_D(A)} \\ \boldsymbol{u}_{m^{\prime }}& \rightarrow \boldsymbol{u}\qquad \text{ strongly in }L^{2}\left( \left[ 0,T\right] ;V\right) \text{ and } \label{eq:alphaMHD:um_strongConvL2_V} \\ \boldsymbol{u}_{m^{\prime }}& \rightarrow \boldsymbol{u}\qquad \text{ strongly in }C\left( \left[ 0,T\right] ;H\right) , \label{eq:alphaMHD:um_strongConvC_H} \end{align} \end{subequations} as $m^{\prime }\rightarrow \infty $. Moreover,\mbox{$\left({d}/{dt}\right) \boldsymbol{u}_{m^{\prime }}\rightarrow \left({d}/{dt}\right)\boldsymbol{u}$} weakly in $L^{2}\left( \left[ 0,T\right] ;H\right) $. Or equivalently, by \eqref{eq:alphaMHD:vm_L2(0,T;H)_bound} and \eqref{eq:alphaMHD:dvm_dt_L2(0,T;D(A)')_bound}, there exists a subsequence $ \boldsymbol{v}_{m^{\prime }}$ of $\boldsymbol{v}_{m}$ such that \begin{subequations} \begin{align} \boldsymbol{v}_{m^{\prime }}& \rightarrow \boldsymbol{v\qquad }\text{weakly in }L^{2}\left( \left[ 0,T\right] ;H\right) , \label{eq:alphaMHD:vm_weakConvL2_H} \\ \boldsymbol{v}_{m^{\prime }}& \rightarrow \boldsymbol{v\qquad }\text{ strongly in }L^{2}\left( \left[ 0,T\right] ;V^{\prime }\right) , \label{eq:alphaMHD:vm_strongConvL2_V'} \\ \boldsymbol{v}_{m^{\prime }}& \rightarrow \boldsymbol{v\qquad }\text{ strongly in }C\left( \left[ 0,T\right] ;D\left( A\right) ^{\prime }\right) , \label{eq:alphaMHD:vm_strongConvC_D(A)'} \end{align} \end{subequations} \mbox{$\left({d}/{dt}\right)\boldsymbol{v}_{m^{\prime }}\rightarrow \left({d}/{dt}\right) \boldsymbol{v}$} weakly in $L^{2}\left( \left[ 0,T\right] ;D(A)^{\prime }\right) $, as $m^{\prime }\rightarrow \infty $, where $\boldsymbol{v=u}+\alpha ^{2}A\boldsymbol{u}$ is in $L^{2}\left( \left[ 0,T\right] ;H\right) \cap C\left( \left[ 0,T\right] ;D\left( A\right) ^{\prime }\right) $. 
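For the reader's convenience, we recall (informally) the version of the compactness lemma that is being invoked; see \cite[Lemma 8.4]{b_CF88}, \cite{b_L69} or \cite{b_T84} for precise statements. If $X\subset Y\subset Z$ are reflexive Banach spaces, with $X$ compactly embedded in $Y$ and $Y$ continuously embedded in $Z$, and if a sequence $\left\{ \boldsymbol{w}_{m}\right\} $ is bounded in $L^{2}\left( \left[ 0,T\right] ;X\right) $ while $\left\{ \frac{d\boldsymbol{w}_{m}}{dt}\right\} $ is bounded in $L^{2}\left( \left[ 0,T\right] ;Z\right) $, then $\left\{ \boldsymbol{w}_{m}\right\} $ is relatively compact in $L^{2}\left( \left[ 0,T\right] ;Y\right) $. Here it is applied with $X=D\left( A\right) $, $Y=V$, $Z=H$, which gives \eqref{eq:alphaMHD:um_strongConvL2_V}; one standard route to the convergence in $C\left( \left[ 0,T\right] ;H\right) $ of \eqref{eq:alphaMHD:um_strongConvC_H} combines the uniform bound \eqref{eq:alphaMHD:um_Linf(0,T;V)_bound}, the equicontinuity in $H$ provided by \eqref{eq:alphaMHD:dum_dt_L2(0,T;H)_bound}, and the Arzel\`{a}--Ascoli theorem.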
Also, by \eqref{eq:alphaMHD:Bm_L2(0,T;V)_bound} and \eqref{eq:alphaMHD:dBm_dt_L2(0,T;V')_bound}, there exists a subsequence $ \boldsymbol{B}_{m^{\prime }}$ of $\boldsymbol{B}_{m}$ and $\boldsymbol{B}\in L^{2}\left( \left[ 0,T\right] ;V\right) \cap C\left( \left[ 0,T\right] ;V^{\prime }\right) $ such that \begin{subequations} \begin{align} \boldsymbol{B}_{m^{\prime }}& \rightarrow \boldsymbol{B\qquad }\text{weakly in }L^{2}\left( \left[ 0,T\right] ;V\right) , \label{eq:alphaMHD:Bm_weakConvL2_V} \\ \boldsymbol{B}_{m^{\prime }}& \rightarrow \boldsymbol{B\qquad }\text{ strongly in }L^{2}\left( \left[ 0,T\right] ;H\right) , \label{eq:alphaMHD:Bm_strongConvL2_H} \\ \boldsymbol{B}_{m^{\prime }}& \rightarrow \boldsymbol{B\qquad }\text{ strongly in }C\left( \left[ 0,T\right] ;V^{\prime }\right) \label{eq:alphaMHD:Bm_strongConvC_V'} \end{align} \end{subequations} and \mbox{$\left({d}/{dt}\right)\boldsymbol{B}_{m^{\prime }}\rightarrow \left({d}/{dt}\right)\boldsymbol{B}$} weakly in $L^{2}\left( \left[ 0,T\right] ;V^{\prime }\right) $, as $m^{\prime }\rightarrow \infty $. Since $\boldsymbol{v}_{m^{\prime }}\rightarrow \boldsymbol{v}$ weakly in $ L^{2}\left( \left[ 0,T\right] ;H\right) $ and strongly in $L^{2}\left( \left[ 0,T\right] ;V^{\prime }\right) $ and $\boldsymbol{B}_{m^{\prime }}\rightarrow \boldsymbol{B}$ weakly in $L^{2}\left( \left[ 0,T\right] ;V\right) $ and strongly in $L^{2}\left( \left[ 0,T\right] ;H\right) $, then there exists a set $E\subset \left[ 0,T\right] $ of Lebesgue measure zero and a subsequence of $\boldsymbol{v}_{m^{\prime }}$, $\boldsymbol{B} _{m^{\prime }}$, which we relabel $\boldsymbol{v}_{m}$, $\boldsymbol{B}_{m}$ respectively, such that $\boldsymbol{v}_{m}\left( s\right) \rightarrow \boldsymbol{v}\left( s\right) $ weakly in $H$ and strongly in $V^{\prime }$ for every $s\in \left[ 0,T\right] \backslash E$, and $\boldsymbol{B} _{m}\left( s\right) \rightarrow \boldsymbol{B}\left( s\right) $ weakly in $V$ and strongly in $H$ for every $s\in \left[ 0,T\right] \backslash E$. 
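Let us briefly indicate one way the set $E$ and the asserted pointwise convergences can be obtained. Since $\boldsymbol{v}_{m^{\prime }}\rightarrow \boldsymbol{v}$ strongly in $L^{2}\left( \left[ 0,T\right] ;V^{\prime }\right) $ and $\boldsymbol{B}_{m^{\prime }}\rightarrow \boldsymbol{B}$ strongly in $L^{2}\left( \left[ 0,T\right] ;H\right) $, a further subsequence satisfies $\boldsymbol{v}_{m}\left( s\right) \rightarrow \boldsymbol{v}\left( s\right) $ in $V^{\prime }$ and $\boldsymbol{B}_{m}\left( s\right) \rightarrow \boldsymbol{B}\left( s\right) $ in $H$ for almost every $s\in \left[ 0,T\right] $. For such $s>0$, taking $\tau =s$ in the $L^{\infty }\left( \left[ \tau ,T\right] \right) $ estimates established above yields the uniform-in-$m$ bounds
\begin{equation*}
\lnorm{ \boldsymbol{v}_{m}\left( s\right) } \leq \frac{{k}_{2}\left( s\right) ^{1/2}}{\alpha }\left( \lambda _{1}^{-1}+\alpha ^{2}\right) \quad \text{and}\quad \vnorm{ \boldsymbol{B}_{m}\left( s\right) } \leq {k}_{2}\left( s\right) ^{1/2},
\end{equation*}
so every weak limit point of $\boldsymbol{v}_{m}\left( s\right) $ in $H$ (respectively, of $\boldsymbol{B}_{m}\left( s\right) $ in $V$) must coincide with the strong limit just identified; hence $\boldsymbol{v}_{m}\left( s\right) \rightarrow \boldsymbol{v}\left( s\right) $ weakly in $H$ and $\boldsymbol{B}_{m}\left( s\right) \rightarrow \boldsymbol{B}\left( s\right) $ weakly in $V$ for almost every $s$.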
Let $\boldsymbol{w}\in D\left( A\right) $, $\boldsymbol{\xi }\in V$, then by taking the inner product of \eqref{eq:alphaMHD:Galerkin:velocity} with $ \boldsymbol{w}$, and of \eqref{eq:alphaMHD:Galerkin:magField} with $ \boldsymbol{\xi }$ and integrating over the interval $\left[ t_{0},t\right] ,\,t,t_{0}\in \left[ 0,T\right] $, we have \begin{subequations} \label{grp:alphaMHD:weakFormulation_m} \begin{align} \left( \boldsymbol{\boldsymbol{v}}_{_{m}}\left( t\right) ,\boldsymbol{w} \right) -\left( \boldsymbol{\boldsymbol{v}}_{_{m}}\left( t_{0}\right) , \boldsymbol{w}\right) & +\int_{t_{0}}^{t}\left( \tilde{B}\left( \boldsymbol{u }_{_{m}}\left( s\right) ,\boldsymbol{\boldsymbol{v}}_{_{m}}\left( s\right) \right) ,P_{m}\boldsymbol{w}\right) ds \label{eq:alphaMHD:weakFormulation_m_vel} \\ & +\nu \int_{t_{0}}^{t}\left( \boldsymbol{v}_{_{m}}\left( s\right) ,A \boldsymbol{w}\right) ds=\int_{t_{0}}^{t}\left({B}\left( \boldsymbol{B }_{_{m}}\left( s\right) ,\boldsymbol{B}_{_{m}}\left( s\right) \right) ,P_{m} \boldsymbol{w}\right) ds, \notag \\ \left( \boldsymbol{B}_{_{m}}\left( t\right) ,\boldsymbol{\xi }\right) -\left( \boldsymbol{B}_{_{m}}\left( t_{0}\right) ,\boldsymbol{\xi }\right) & +\int_{t_{0}}^{t}\left( B\left( \boldsymbol{u}_{_{m}}\left( s\right) , \boldsymbol{B}_{_{m}}\left( s\right) \right) ,P_{m}\boldsymbol{\xi }\right) ds \label{eq:alphaMHD:weakFormulation_m_magfld} \\ & -\int_{t_{0}}^{t}\left( B\left( \boldsymbol{B}_{_{m}}\left( s\right) , \boldsymbol{u}_{_{m}}\left( s\right) \right) ,P_{m}\boldsymbol{\xi }\right) ds+\eta \int_{t_{0}}^{t}\left( \left( \boldsymbol{B}_{_{m}}\left( s\right) , \boldsymbol{\xi }\right) \right) ds=0. \notag \end{align} \end{subequations} First we consider \eqref{eq:alphaMHD:weakFormulation_m_vel}. Since $ \boldsymbol{v}_{m}\left( s\right) \rightarrow \boldsymbol{v}\left( s\right) $ weakly in $H,$ then for $t,t_{0}\in \left[ 0,T\right] \backslash E$ \begin{equation*} \left( \boldsymbol{\boldsymbol{v}}_{_{m}}\left( t\right) ,\boldsymbol{w} \right) -\left( \boldsymbol{\boldsymbol{v}}_{_{m}}\left( t_{0}\right) , \boldsymbol{w}\right) \rightarrow \left( \boldsymbol{\boldsymbol{v}}\left( t\right) ,\boldsymbol{w}\right) -\left( \boldsymbol{\boldsymbol{v}}\left( t_{0}\right) ,\boldsymbol{w}\right) ,\text{ as }m\rightarrow \infty \end{equation*} and since $\boldsymbol{w}\in D\left( A\right) $ we also have \begin{equation*} \lim_{m\rightarrow \infty }\int_{t_{0}}^{t}\left( \boldsymbol{v}_{m}\left( s\right) ,A\boldsymbol{w}\right) ds=\int_{t_{0}}^{t}\left( \boldsymbol{v} \left( s\right) ,A\boldsymbol{w}\right) ds. \end{equation*} Now \begin{equation} \lim_{m\rightarrow \infty }\lnorm{ P_{m}A\boldsymbol{w}-A\boldsymbol{w} } =\lim_{m\rightarrow \infty }\vnorm{ P_{m}\boldsymbol{w}- \boldsymbol{w}} =\lim_{m\rightarrow \infty }\lnorm{ P_{m} \boldsymbol{w}-\boldsymbol{w}} =0. 
\label{eq:alphaMHD:limPmw-w} \end{equation} For the nonlinear terms we have \begin{align*} & \abs{ \int_{t_{0}}^{t}\left( \tilde{B}\left( \boldsymbol{u} _{_{m}}\left( s\right) ,\boldsymbol{\boldsymbol{v}}_{_{m}}\left( s\right) \right) ,P_{m}\boldsymbol{w}\right) -\left\langle \tilde{B}\left( \boldsymbol{u}\left( s\right) ,\boldsymbol{\boldsymbol{v}}\left( s\right) \right) ,\boldsymbol{w}\right\rangle _{D\left( A\right) ^{\prime }}ds} \\ & \quad \leq \abs{ \int_{t_{0}}^{t}\left\langle \tilde{B}\left( \boldsymbol{u}_{_{m}}\left( s\right) ,\boldsymbol{\boldsymbol{v}} _{_{m}}\left( s\right) \right) ,P_{m}\boldsymbol{w}-\boldsymbol{w} \right\rangle _{D\left( A\right) ^{\prime }}ds} \\ & \quad +\abs{ \int_{t_{0}}^{t}\left\langle \tilde{B}\left( \boldsymbol{u}_{_{m}}\left( s\right) -\boldsymbol{u}\left( s\right) , \boldsymbol{\boldsymbol{v}}_{_{m}}\left( s\right) \right) ,\boldsymbol{w} \right\rangle _{D\left( A\right) ^{\prime }}ds} \\ & \quad +\abs{ \int_{t_{0}}^{t}\left\langle \tilde{B}\left( \boldsymbol{u}\left( s\right) ,\boldsymbol{\boldsymbol{v}}_{_{m}}\left( s\right) -\boldsymbol{\boldsymbol{v}}\left( s\right) \right) ,\boldsymbol{w} \right\rangle _{D\left( A\right) ^{\prime }}ds} \\ & \quad =:I_{m}^{(1)}+I_{m}^{(2)}+I_{m}^{(3)} \end{align*} By \eqref{eq:Btilde_estimate_V_H_D(A)_short} \begin{equation*} I_{m}^{(1)}\leq c\left( \lambda _{1}\right) ^{-1/4}\int_{t_{0}}^{t}\vnorm{ \boldsymbol{u}_{_{m}}\left( s\right) } \lnorm{ \boldsymbol{\boldsymbol{v}}_{_{m}}\left( s\right) } \lnorm{ P_{m}A\boldsymbol{w}-A\boldsymbol{w}} ds, \end{equation*} using Cauchy-Schwarz inequality we obtain \begin{equation*} I_{m}^{(1)} \leq c\left( \lambda _{1}\right) ^{-1/4}\lnorm{ P_{m}A \boldsymbol{w}-A\boldsymbol{w}} \norm{ \boldsymbol{u} _{m}} _{L^{2}\left( \left[ 0,T\right] ;V\right) }\norm{ \boldsymbol{v}_{m}} _{L^{2}\left( \left[ 0,T\right] ;H\right) }, \end{equation*} hence by \eqref{eq:alphaMHD:um_L2(0,T;V)_bound}, \eqref{eq:alphaMHD:vm_L2(0,T;H)_bound} and \eqref{eq:alphaMHD:limPmw-w} $ \lim_{m\rightarrow \infty }I_{m}^{(1)}=0$. Again, by \eqref{eq:Btilde_estimate_V_H_D(A)_short}, \begin{equation*} I_{m}^{(2)}\leq c\left( \lambda _{1}\right) ^{-1/4}\int_{t_{0}}^{t}\vnorm{ \boldsymbol{u}_{m}\left( s\right) - \boldsymbol{u}\left( s\right) } \lnorm{ \boldsymbol{v}_{m}\left( s\right) } \lnorm{ A\boldsymbol{w}} ds, \end{equation*} and by Cauchy-Schwarz and \eqref{eq:alphaMHD:vm_L2(0,T;H)_bound}, \begin{equation*} I_{m}^{(2)}\leq c\left( \lambda _{1}\right) ^{-1/4}\lnorm{ A\boldsymbol{w} } {\frac{{k_{1}}}{2{\nu }\alpha ^{2}}}\left( \lambda _{1}^{-1}+\alpha ^{2}\right) ^{2}\left( \int_{0}^{T}\vnorm{ \boldsymbol{u}_{m}\left( s\right) -\boldsymbol{u}\left( s\right) } ^{2}ds\right) ^{1/2}, \end{equation*} hence $\lim_{m\rightarrow \infty }I_{m}^{(2)}=0$, since $ \boldsymbol{u}_{m}\rightarrow \boldsymbol{u}$ in $L^{2}\left( \left[ 0,T \right] ;V\right) $. Finally, we show that $\lim_{m\rightarrow \infty }I_{m}^{(3)}=0$. 
We define a linear functional for $\boldsymbol{h }\in L^{2}\left( \left[ 0,T\right] ;H\right) $ by \begin{equation*} \phi\left( \boldsymbol{\boldsymbol{h}}\right) ={\ \int_{t_{0}}^{t}\left\langle \tilde{B}\left( \boldsymbol{u}\left( s\right) , \boldsymbol{\boldsymbol{h}}\right) ,\boldsymbol{w}\right\rangle _{D\left( A\right) ^{\prime }}ds} , \end{equation*} by \eqref{eq:Btilde_estimate_V_H_D(A)_short} and Cauchy-Schwarz \begin{equation*} \abs{ \phi \left( \boldsymbol{\boldsymbol{h}}\right) } \leq c\left( \lambda _{1}\right) ^{-1/4}\lnorm{ A\boldsymbol{w} } \vnorm{ \boldsymbol{u}\left( s\right) } _{L^{2}\left( \left[ 0,T\right] ;V\right) }\norm{ \boldsymbol{ \boldsymbol{h}}\left( s\right) } _{L^{2}\left( \left[ 0,T\right] ;H\right) } \end{equation*} hence, due to \eqref{eq:alphaMHD:um_L2(0,T;V)_bound}, $\phi $ is a bounded linear functional, and thus, since $\boldsymbol{v}_{m}\rightarrow \boldsymbol{v}$ weakly in $L^{2}\left( \left[ 0,T\right] ;H\right) $, \begin{equation*} \lim_{m\rightarrow \infty }{\phi \left( \boldsymbol{v}_{m}\left( s\right) - \boldsymbol{v}\left( s\right) \right) }=0. \end{equation*} and hence $\lim_{m\rightarrow \infty }I_{m}^{(3)}=0$. It remains to pass to the limit in the right hand side element of \eqref{eq:alphaMHD:weakFormulation_m_vel}. \begin{align*} & \abs{ \int_{t_{0}}^{t}\left( B\left( \boldsymbol{B}_{_{m}}\left( s\right) ,\boldsymbol{B}_{_{m}}\left( s\right) \right) ,P_{m}\boldsymbol{w} \right) -\left\langle B\left( \boldsymbol{B}\left( s\right) ,\boldsymbol{B} \left( s\right) \right) ,\boldsymbol{w}\right\rangle _{V^{\prime }}ds} \\ & \quad \leq \abs{ \int_{t_{0}}^{t}\left\langle {B}\left( \boldsymbol{B}_{_{m}}\left( s\right) ,\boldsymbol{B}_{_{m}}\left( s\right) \right) ,P_{m}\boldsymbol{w}-\boldsymbol{w}\right\rangle _{V^{\prime }}ds} \\ & \quad +\abs{ \int_{t_{0}}^{t}\left\langle {B}\left( \boldsymbol{B}_{_{m}}\left( s\right) -\boldsymbol{B}\left( s\right) , \boldsymbol{B}_{_{m}}\left( s\right) \right) ,\boldsymbol{w}\right\rangle _{V^{\prime }}ds} \\ & \quad +\abs{ \int_{t_{0}}^{t}\left\langle {B}\left( \boldsymbol{B}\left( s\right) ,\boldsymbol{B}_{_{m}}\left( s\right) - \boldsymbol{B}\left( s\right) \right) ,\boldsymbol{w}\right\rangle _{V^{\prime }}ds} \\ & \quad =:J_{m}^{(1)}+J_{m}^{(2)}+J_{m}^{(3)}. \end{align*} Now, by \eqref{eq:BandBtilde_estimate_V_V_V} and Poincar\'{e} inequality \begin{equation*} J_{m}^{(1)}\leq c\left( \lambda _{1}\right) ^{-1/4}\vnorm{ P_{m} \boldsymbol{w}-\boldsymbol{w}} \norm{ \boldsymbol{B} _{_{m}}} _{L^{2}\left( \left[ 0,T\right] ;V\right) }^{2}, \end{equation*} hence, by \eqref{eq:alphaMHD:limPmw-w}, $\lim_{m\rightarrow \infty }J_{m}^{(1)}=0$. By \eqref{eq:BandBtilde_estimate_V_V_V} and Poincar\'{e} inequality \begin{equation*} J_{m}^{(2)}\leq c\left( \lambda _{1}\right) ^{-1/4}\int_{t_{0}}^{t}\vnorm{ \boldsymbol{B}_{_{m}}\left( s\right) - \boldsymbol{B}\left( s\right) } \vnorm{ \boldsymbol{B} _{_{m}}\left( s\right) } \vnorm{ \boldsymbol{w}} ds \end{equation*} and, applying Cauchy-Schwarz and \eqref{eq:alphaMHD:Bm_L2(0,T;V)_bound}, \begin{equation*} J_{m}^{(2)}\leq c\left( \lambda _{1}\right) ^{-1/4}\vnorm{ \boldsymbol{w} } \frac{k_{1}}{2\eta }\left( \int_{0}^{T}\vnorm{ \boldsymbol{B} _{_{m}}\left( s\right) -\boldsymbol{B}\left( s\right) } ^{2}ds\right) ^{1/2},\quad \end{equation*} hence, since $\boldsymbol{B}_{m}\rightarrow \boldsymbol{B}$ weakly in $ L^{2}\left( \left[ 0,T\right] ;V\right) $, we have $\lim_{m\rightarrow \infty }J_{m}^{(2)}=0$ (similarly to the argument given for $I_{m}^{(3)}$). 
Similarly we can show that $\lim_{m\rightarrow \infty }J_{m}^{(3)}=0$. It remains to pass to the limit in \eqref{eq:alphaMHD:weakFormulation_m_magfld}. Note that \begin{equation} \lim_{m\rightarrow \infty }\lnorm{ P_{m}\boldsymbol{\xi }-\boldsymbol{\xi }} =0. \label{eq:alphaMHD:limPmx-x} \end{equation} We recall that $\boldsymbol{B}_{m}\left( s\right) \rightarrow \boldsymbol{B} \left( s\right) $ weakly in $V$ and strongly in $H$ for every $s\in \left[ 0,T\right] \backslash E$, hence the convergence for the linear terms is easy. For the nonlinear terms we have \begin{align*} & \abs{ \int_{t_{0}}^{t}\left( B\left( \boldsymbol{u}_{_{m}}\left( s\right) ,\boldsymbol{B}_{_{m}}\left( s\right) \right) ,P_{m}\boldsymbol{\xi }\right) -\left( B\left( \boldsymbol{u}\left( s\right) ,\boldsymbol{B}\left( s\right) \right) ,\boldsymbol{\xi }\right) ds} \\ & \quad \leq \abs{ \int_{t_{0}}^{t}\left( B\left( \boldsymbol{u} _{_{m}}\left( s\right) ,\boldsymbol{B}_{_{m}}\left( s\right) \right) ,P_{m} \boldsymbol{\xi }-\boldsymbol{\xi }\right) ds} \\ & \quad +\abs{ \int_{t_{0}}^{t}\left( B\left( \boldsymbol{u} _{_{m}}\left( s\right) -\boldsymbol{u}\left( s\right) ,\boldsymbol{B} _{_{m}}\left( s\right) \right) ,\boldsymbol{\xi }\right) ds} \\ & \quad +\abs{ \int_{t_{0}}^{t}\left( B\left( \boldsymbol{u}\left( s\right) ,\boldsymbol{B}_{_{m}}\left( s\right) -\boldsymbol{B}\left( s\right) \right) ,\boldsymbol{\xi }\right) ds} \\ & \quad =:S_{m}^{(1)}+S_{m}^{(2)}+S_{m}^{(3)}. \end{align*} Now, by \eqref{eq:BandBtilde_estimate_DA_V_H} and Cauchy-Schwarz inequality \begin{equation*} S_{m}^{(1)}\leq c\left( \lambda _{1}\right) ^{-1/4}\lnorm{ P_{m} \boldsymbol{\xi }-\boldsymbol{\xi }} \norm{ \boldsymbol{u} _{_{m}}} _{L^{2}\left( \left[ 0,T\right] ;D\left( A\right) \right) }\norm{ \boldsymbol{B}_{_{m}}} _{L^{2}\left( \left[ 0,T\right] ;V\right) }, \end{equation*} hence, by \eqref{eq:alphaMHD:um_L2(0,T;D(A))_bound}, \eqref{eq:alphaMHD:Bm_L2(0,T;V)_bound} and \eqref{eq:alphaMHD:limPmx-x}, $ \lim_{m\rightarrow \infty }S_{m}^{(1)}=0$. Again, by \eqref{eq:BandBtilde_estimate_DA_V_H} and Cauchy-Schwarz inequality \begin{equation*} S_{m}^{(2)}\leq c\left( \lambda _{1}\right) ^{-1/4}\lnorm{ \boldsymbol{ \xi }} \norm{ \boldsymbol{B}_{_{m}}} _{L^{2}\left( \left[ 0,T\right] ;V\right) }\norm{ \boldsymbol{u} _{_{m}}\left( s\right) -\boldsymbol{u}\left( s\right) } _{L^{2}\left( \left[ 0,T\right] ;D\left( A\right) \right) }^{2} \end{equation*} hence, since $\boldsymbol{u}_{m}\rightarrow \boldsymbol{u}$ weakly in $ L^{2}\left( \left[ 0,T\right] ;D\left( A\right) \right) $, we have $ \lim_{m\rightarrow \infty }S_{m}^{(2)}=0$ (similarly to the case for $ I_{m}^{(3)}$). By similar arguments, using that $\boldsymbol{B} _{m}\rightarrow \boldsymbol{B}$ weakly in $L^{2}\left( \left[ 0,T\right] ;V\right) $, we obtain also that $\lim_{m\rightarrow \infty }S_{m}^{(3)}=0.$ For the term $\int_{t_{0}}^{t}\left( B\left( \boldsymbol{B}_{_{m}}\left( s\right) ,\boldsymbol{u}_{_{m}}\left( s\right) \right) ,P_{m}\boldsymbol{\xi} \right) $ we can perform the same estimates using the \eqref{eq:B_estimate_V_DA_H} to bound operator $B$. 
Hence, we can pass to the limit in \eqref{grp:alphaMHD:weakFormulation_m} and we obtain that for every $t,t_{0}\in \left[ 0,T\right] \backslash E$ \begin{subequations} \label{grp:alphaMHD:weakFormulation} \begin{align} \left( \boldsymbol{\boldsymbol{v}}\left( t\right) ,\boldsymbol{w}\right) -\left( \boldsymbol{\boldsymbol{v}}\left( t_{0}\right) ,\boldsymbol{w} \right) +\int_{t_{0}}^{t}\left\langle \tilde{B}\left( \boldsymbol{u}\left( s\right) ,\boldsymbol{\boldsymbol{v}}\left( s\right) \right) ,\boldsymbol{w} \right\rangle _{D\left( A\right) ^{\prime }}ds& +\nu \int_{t_{0}}^{t}\left( \boldsymbol{v}\left( s\right) ,A\boldsymbol{w}\right) ds \label{eq:alphaMHD:weakFormulation_vel} \\ & =\int_{t_{0}}^{t}\left\langle B\left( \boldsymbol{B}\left( s\right) , \boldsymbol{B}\left( s\right) \right) ,\boldsymbol{w}\right\rangle _{V^{\prime }}ds, \notag \\ \left( \boldsymbol{B}\left( t\right) ,\boldsymbol{\xi }\right) -\left( \boldsymbol{B}\left( t_{0}\right) ,\boldsymbol{\xi }\right) +\int_{t_{0}}^{t}\left( B\left( \boldsymbol{u}\left( s\right) ,\boldsymbol{B} \left( s\right) \right) ,\boldsymbol{\xi }\right) ds& -\int_{t_{0}}^{t}\left( B\left( \boldsymbol{B}\left( s\right) ,\boldsymbol{u} \left( s\right) \right) ,\boldsymbol{\xi }\right) ds \label{eq:alphaMHD:weakFormulation_magfld} \\ & +\eta \int_{t_{0}}^{t}\left( \left( \boldsymbol{B}\left( s\right) , \boldsymbol{\xi }\right) \right) ds=0. \notag \end{align} \end{subequations} for every $\boldsymbol{w}\in D\left( A\right) ,\boldsymbol{\xi }\in V$. Now we show that $\boldsymbol{\boldsymbol{v}}\in C\left( \left[ 0,T\right] ;V^{\prime }\right) $ (or equivalently $\boldsymbol{u}\in C\left( \left[ 0,T \right] ;V\right) $) and $\boldsymbol{B}\in C\left( \left[ 0,T\right] ;H\right) $. Notice that since $\norm{ \boldsymbol{v}_{m}} _{L^{\infty }\left( \left[ 0,T\right] ;V^{\prime }\right) }^{2}\leq \frac{{ k_{1}}}{{\alpha ^{2}}}\left( \lambda _{1}^{-1}+\alpha ^{2}\right) ^{2}$ and $ \boldsymbol{v}_{m}\rightarrow \boldsymbol{v}$ strongly in $L^{2}\left( \left[ 0,T\right] ;V^{\prime }\right) $ then $\norm{ \boldsymbol{v} } _{L^{\infty }\left( \left[ 0,T\right] ;V^{\prime }\right) }^{2}\leq \frac{{k_{1}}}{{\alpha ^{2}}}\left( \lambda _{1}^{-1}+\alpha ^{2}\right) ^{2}$. Hence \eqref{eq:alphaMHD:weakFormulation_vel} implies that $\boldsymbol{\boldsymbol{v}}\left( t\right) \in $ $C_{w}\left( \left[ 0,T\right] ;V^{\prime }\right) $ because $D\left( A\right) $ is dense in $V$ . Since, also, for a fixed $t_{0}$, $\norm{ \boldsymbol{\boldsymbol{v} }\left( t\right) } _{V^{\prime }}\rightarrow \norm{ \boldsymbol{\boldsymbol{v}}\left( t_{0}\right) } _{V^{\prime }}$ , as $t\rightarrow t_{0}$, then we have $\boldsymbol{\boldsymbol{v}}\in $ $C\left( \left[ 0,T\right] ;V^{\prime }\right) $, or equivalently $ \boldsymbol{u}\in C\left( \left[ 0,T\right] ;V\right) $. Similarly, since $\norm{ \boldsymbol{B}_{m}} _{L^{\infty }\left( \left[ 0,T\right] ;H\right) }^{2}\leq {k_{1}}$ and $\boldsymbol{B} _{m}\rightarrow \boldsymbol{B}$ strongly in $L^{2}\left( \left[ 0,T\right] ;H\right) $ and $D\left( A\right) $ is dense in $H$ and because of \eqref{eq:alphaMHD:weakFormulation_magfld} we have $\boldsymbol{B}\in C\left( \left[ 0,T\right] ;H\right) $. \subsection{Uniqueness and continuous dependence of weak solutions on the initial data} Next, we show the continuous dependence of weak solutions on the initial data and, in particular, the uniqueness of weak solutions. 
Let $\boldsymbol{u},\,\boldsymbol{B}$ and $\boldsymbol{\bar{u}},\, \boldsymbol{\bar{B}}$ be any two weak solutions of \eqref{grp:alphaMHD:Projected} on the interval $\left[ 0,T\right] $ with initial values \mbox{$\boldsymbol{u}\left( 0\right) =\boldsymbol{u}^{in}$}, \mbox{$\boldsymbol{B}\left( 0\right) =\boldsymbol{B}^{in}$}, \mbox{$\boldsymbol{\bar{u}} \left( 0\right) =\boldsymbol{\bar{u}}^{in}$}, \mbox{$\boldsymbol{\bar{B}}\left( 0\right) =\boldsymbol{\bar{B}}^{in}$}. We denote \mbox{$\boldsymbol{v=u}+\alpha ^{2}A\boldsymbol{u}$}, \mbox{$\boldsymbol{\bar{v}=\bar{u}}+\alpha ^{2}A\boldsymbol{ \bar{u}}$}, \mbox{$\delta \boldsymbol{u=u}-\boldsymbol{\bar{u}}$}, \mbox{$\delta \boldsymbol{v=v}-\boldsymbol{\bar{v}}$} and \mbox{$\delta \boldsymbol{B=B}- \boldsymbol{\bar{B}}$}. Then \eqref{grp:alphaMHD:Projected} implies \begin{align} & \frac{d}{dt}\delta \boldsymbol{v}+\nu A\delta \boldsymbol{v}+\tilde{B} \left( \delta \boldsymbol{u},\boldsymbol{\boldsymbol{v}}\right) +\tilde{B} \left( \boldsymbol{\bar{u}},\delta \boldsymbol{v}\right) =B\left( \delta \boldsymbol{B},\boldsymbol{B}\right) +B\left( \boldsymbol{\bar{B}},\delta \boldsymbol{B}\right) , \label{eq:alphaMHD:delta_v} \\ & \frac{d}{dt}\delta \boldsymbol{B}+\eta A\delta \boldsymbol{B}=-B\left( \delta \boldsymbol{u},\boldsymbol{B}\right) -B\left( \boldsymbol{\bar{u}} ,\delta \boldsymbol{B}\right) +B\left( \delta \boldsymbol{B},\boldsymbol{u} \right) +B\left( \boldsymbol{\bar{B}},\delta \boldsymbol{u}\right) , \label{eq:alphaMHD:delta_B} \\ & \delta \boldsymbol{u}\left( 0\right) =\delta \boldsymbol{u}^{in}= \boldsymbol{u}^{in}-\boldsymbol{\bar{u}}^{in}, \\ & \delta \boldsymbol{B}\left( 0\right) =\delta \boldsymbol{B}^{in}= \boldsymbol{B}^{in}-\boldsymbol{\bar{B}}^{in}. \end{align} Since ${d\boldsymbol{v}}/{dt}\in L^{2}\left( \left[ 0,T \right] ;D\left( A\right) ^{\prime }\right) ,\ \delta \boldsymbol{u }\in L^{2}\left( \left[ 0,T\right] ;D\left( A\right) \right) $ and $d \boldsymbol{B}/{dt}\in L^{2}\left( \left[ 0,T\right] ;V^{\prime }\right) $, $\boldsymbol{B},\ \boldsymbol{\bar{B}},\ \delta \boldsymbol{B} \in L^{2}\left( \left[ 0,T\right] ;V\right) $ and due to the identities \eqref{eq:B_id2} and \eqref{eq:Btilda_id2}, we have for almost every $t\in \left[ 0,T\right] $ \begin{align*} & \left\langle \frac{d}{dt}\delta \boldsymbol{v},\delta \boldsymbol{u} \right\rangle _{D\left( A\right) ^{\prime }}+\nu \left( \vnorm{ \delta \boldsymbol{u}} ^{2}+\alpha ^{2}\lnorm{ A\delta \boldsymbol{u} } ^{2}\right) +\left\langle \tilde{B}\left( \boldsymbol{\bar{u}} ,\delta \boldsymbol{v}\right) ,\delta \boldsymbol{u}\right\rangle _{D\left( A\right) ^{\prime }} \\ & \qquad \qquad \qquad \qquad =\left\langle B\left( \delta \boldsymbol{B}, \boldsymbol{B}\right) ,\delta \boldsymbol{u}\right\rangle _{D\left( A\right) ^{\prime }}+\left\langle B\left( \boldsymbol{\bar{B}},\delta \boldsymbol{B} \right) ,\delta \boldsymbol{u}\right\rangle _{V^{\prime }}, \\ & \left\langle \frac{d}{dt}\delta \boldsymbol{B},\delta \boldsymbol{B} \right\rangle _{V^{\prime }}+\eta \vnorm{ \delta \boldsymbol{B} } ^{2}=-\left\langle B\left( \delta \boldsymbol{u},\boldsymbol{B} \right),\delta \boldsymbol{B}\right\rangle _{V^{\prime }}+\left\langle B\left( \delta \boldsymbol{B},\boldsymbol{u}\right) ,\delta \boldsymbol{B}\right\rangle _{V^{\prime }}+\left\langle B\left( \boldsymbol{\bar{B}},\delta \boldsymbol{u}\right), \delta \boldsymbol{B}\right\rangle _{V^{\prime }}. \end{align*} Notice that by theorem of interpolation by Lions and Magenes, see, e.g., \cite[ Chap. 
III, Lemma 1.2]{b_T84}, \begin{equation*} \left\langle \frac{d}{dt}\delta \boldsymbol{v},\delta \boldsymbol{u} \right\rangle _{D\left( A\right) ^{\prime }}=\frac{d}{dt}\left( \lnorm{ \delta \boldsymbol{u}} ^{2}+\alpha ^{2}\vnorm{ \delta \boldsymbol{u}} ^{2}\right) \end{equation*} and \begin{equation*} \left\langle \frac{d}{dt}\delta \boldsymbol{B},\delta \boldsymbol{B} \right\rangle _{V^{\prime }}=\frac{d}{dt}\lnorm{ \delta \boldsymbol{B} } ^{2}, \end{equation*} thus we have \begin{subequations} \begin{align} & \frac{d}{dt}\left( \lnorm{ \delta \boldsymbol{u}} ^{2}+\alpha ^{2}\vnorm{ \delta \boldsymbol{u}} ^{2}\right) +\nu \left( \vnorm{ \delta \boldsymbol{u}} ^{2}+\alpha ^{2}\lnorm{ A\delta \boldsymbol{u}} ^{2}\right) +\left\langle \tilde{B}\left( \boldsymbol{\bar{u}},\delta \boldsymbol{v}\right) ,\delta \boldsymbol{u} \right\rangle _{D\left( A\right) ^{\prime }} \label{eq:alphaMHD:delta_inner_product_delta_u} \\ & \qquad \qquad \qquad \qquad =\left\langle B\left( \delta \boldsymbol{B}, \boldsymbol{B}\right) ,\delta \boldsymbol{u}\right\rangle _{D\left( A\right) ^{\prime }}+\left\langle B\left( \boldsymbol{\bar{B}},\delta \boldsymbol{B} \right) ,\delta \boldsymbol{u}\right\rangle _{V^{\prime }}, \notag \\ & \frac{d}{dt}\lnorm{ \delta \boldsymbol{B}} ^{2}+\eta \vnorm{ \delta \boldsymbol{B}} ^{2}=-\left\langle B\left( \delta \boldsymbol{u},\boldsymbol{B}\right) ,\delta \boldsymbol{ B}\right\rangle _{V^{\prime }}+\left\langle B\left( \delta \boldsymbol{B}, \boldsymbol{u}\right) ,\delta \boldsymbol{B}\right\rangle _{V^{\prime }}+\left\langle B\left( \boldsymbol{\bar{B}},\delta \boldsymbol{u }\right),\delta \boldsymbol{B}\right\rangle _{V^{\prime }}. \label{eq:alphaMHD:delta_inner_product_delta_B} \end{align} \end{subequations} By summation of \eqref{eq:alphaMHD:delta_inner_product_delta_u} and \eqref{eq:alphaMHD:delta_inner_product_delta_B} we obtain \begin{multline*} \frac{d}{dt}\left( \lnorm{ \delta \boldsymbol{u}} ^{2}+\alpha ^{2}\vnorm{ \delta \boldsymbol{u}} ^{2}+\lnorm{ \delta \boldsymbol{B}} ^{2}\right) +\nu \left( \vnorm{ \delta \boldsymbol{u}} ^{2}+\alpha ^{2}\lnorm{ A\delta \boldsymbol{u} } ^{2}\right) +\eta \vnorm{ \delta \boldsymbol{B}} ^{2} \\ =-\left\langle \tilde{B}\left( \boldsymbol{\bar{u}},\delta \boldsymbol{v} \right) ,\delta \boldsymbol{u}\right\rangle _{D\left( A\right) ^{\prime }}+\left\langle \tilde{B}\left( \delta \boldsymbol{B},\boldsymbol{B}\right) ,\delta \boldsymbol{u}\right\rangle _{D\left( A\right) ^{\prime }}+\left\langle B\left( \delta \boldsymbol{B},\boldsymbol{u}\right) ,\delta \boldsymbol{B}\right\rangle _{V^{\prime }}. 
\end{multline*} By \eqref{eq:Btilde_estimate_D(A)_H_V} we get \begin{equation*} \abs{ \left\langle \tilde{B}\left( \boldsymbol{\bar{u}},\delta \boldsymbol{v}\right) ,\delta \boldsymbol{u}\right\rangle _{V^{\prime }}} \leq c\left( \vnorm{ \boldsymbol{\bar{u}}} ^{1/2}\lnorm{ A\boldsymbol{\bar{u}}} ^{1/2}\lnorm{ \delta \boldsymbol{v}} \vnorm{ \delta \boldsymbol{u}} +\lnorm{ A\boldsymbol{\bar{u}}} \lnorm{ \delta \boldsymbol{v} } \lnorm{ \delta \boldsymbol{u}} ^{1/2}\vnorm{ \delta \boldsymbol{u}} ^{1/2}\right) , \end{equation*} and by applying Young's inequality \begin{equation*} \abs{ \left\langle \tilde{B}\left( \boldsymbol{\bar{u}},\delta \boldsymbol{v}\right) ,\delta \boldsymbol{u}\right\rangle _{V^{\prime }}} \leq \frac{c}{\nu \lambda _{1}^{1/2}}\lnorm{ A \boldsymbol{\bar{u}}} ^{2}\left( \lnorm{ \delta \boldsymbol{u} } ^{2}+\alpha ^{2}\vnorm{ \delta \boldsymbol{u}} ^{2}\right) +\frac{\nu }{2}\vnorm{ \delta \boldsymbol{u}} ^{2}+ \frac{\nu }{4}\alpha ^{2}\lnorm{ A\delta \boldsymbol{u}} ^{2}. \end{equation*} By \eqref{eq:BandBtilde_estimate_H_V_D(A)} and Young's inequality we have \begin{equation*} \abs{ \left\langle \tilde{B}\left( \delta \boldsymbol{B},\boldsymbol{ B}\right) ,\delta \boldsymbol{u}\right\rangle _{D\left( A\right) ^{\prime }}} \leq c\lnorm{ \delta \boldsymbol{B}} ^{2}\vnorm{ \boldsymbol{B}} ^{2}+\frac{1}{\nu \alpha ^{2}} \vnorm{ \delta \boldsymbol{u}} ^{2}+\frac{\nu }{4}\alpha ^{2}\lnorm{ A\delta \boldsymbol{u}} ^{2}. \end{equation*} Also, by \eqref{eq:B_estimate_V_DA_H} and Young's inequality we obtain \begin{equation*} \abs{ \left( B\left( \delta \boldsymbol{B},\boldsymbol{u}\right) ,\delta \boldsymbol{B}\right) } \leq \frac{c}{\eta \lambda _{1}^{1/2}}\lnorm{ A\boldsymbol{u}} ^{2}\lnorm{ \delta \boldsymbol{B}} ^{2}+\frac{\eta }{2}\vnorm{ \delta \boldsymbol{B}} ^{2}. \end{equation*} Summing up we have \begin{align*} & \frac{d}{dt}\left( \lnorm{ \delta \boldsymbol{u}} ^{2}+\alpha ^{2}\vnorm{ \delta \boldsymbol{u}} ^{2}+\lnorm{ \delta \boldsymbol{B}} ^{2}\right) +\frac{\nu }{2}\left( \vnorm{ \delta \boldsymbol{u}} ^{2}+\alpha ^{2}\lnorm{ A\delta \boldsymbol{u}} ^{2}\right)+\frac{\eta }{2}\vnorm{ \delta \boldsymbol{B}} ^{2} \\ & \qquad \qquad \qquad \leq \left( \frac{c}{\nu \lambda _{1}^{1/2}} \lnorm{ A\boldsymbol{\bar{u}}} ^{2} +\frac{1}{\nu \alpha ^{4}}+c\vnorm{ \boldsymbol{B} } ^{2}+\frac{c}{\eta \lambda _{1}^{1/2}}\lnorm{ A\boldsymbol{u} } ^{2}\right) \left( \lnorm{ \delta \boldsymbol{u}} ^{2}+\alpha ^{2}\vnorm{ \delta \boldsymbol{u}} ^{2}+\lnorm{ \delta \boldsymbol{B}} ^{2}\right) .
\end{align*} We denote \begin{equation*} z\left( s\right) = \frac{c}{\nu \lambda _{1}^{1/2}} \lnorm{ A\boldsymbol{\bar{u}}} ^{2} +\frac{1}{\nu \alpha ^{4}}+c\vnorm{ \boldsymbol{B} } ^{2}+\frac{c}{\eta \lambda _{1}^{1/2}}\lnorm{ A\boldsymbol{u} } ^{2} \end{equation*} and use Gronwall's inequality to obtain \begin{equation} \lnorm{ \delta \boldsymbol{u}\left( t\right) } ^{2}+\alpha ^{2}\vnorm{ \delta \boldsymbol{u}\left( t\right) } ^{2}+\lnorm{ \delta \boldsymbol{B}\left( t\right) } ^{2}\leq \left( \lnorm{ \delta \boldsymbol{u}\left( 0\right) } ^{2}+\alpha ^{2}\vnorm{ \delta \boldsymbol{u}\left( 0\right) } ^{2}+\lnorm{ \delta \boldsymbol{B}\left( 0\right) } ^{2}\right) \exp \left( \int_{0}^{t}z\left( s\right) ds\right) , \label{eq:alphaMHD:cont_depend} \end{equation} since $\boldsymbol{u},\boldsymbol{\bar{u}}\in L^{2}\left( \left[ 0,T\right] ;D\left( A\right) \right) $ and $\boldsymbol{B}\in L^{2}\left( \left[ 0,T \right] ;V\right) $ the integral $\left( \int_{0}^{t}z\left( s\right) ds\right) $ is finite. Hence \eqref{eq:alphaMHD:cont_depend} implies the continuous dependence of the weak solutions of \eqref{grp:alphaMHD:Projected} on the initial data in any bounded interval of time $\left[ 0,T\right] $. In particular, the solutions are unique. \subsection{Strong solutions} \begin{theorem} Let $T>0$, $\boldsymbol{u}^{in}\in V,\,\boldsymbol{B}^{in}\in H$. Then there exists a unique solution $\boldsymbol{u},\boldsymbol{B}$ of \eqref{grp:alphaMHD:Projected} on $\left[ 0,T\right] $ satisfying \begin{equation} \boldsymbol{u}\in L_{loc}^{\infty }\left( \left( 0,T\right] ;D(A^{{3/2} })\right) \cap L_{loc}^{2}\left( \left( 0,T\right] ;D(A^{2})\right) \cap C\left( \left[ 0,T\right] ;V\right) \cap L^{2}\left( \left[ 0,T \right] ;D(A)\right) \label{eq:alphaMHD:strong_sol_u} \end{equation} and \begin{equation} \boldsymbol{B}\in L_{loc}^{\infty }\left( \left( 0,T\right] ;V\right) \cap L_{loc}^{2}\left( \left( 0,T\right] ;D(A)\right) \cap C\left( \left[ 0,T\right] ;H\right) \cap L^{2}\left( \left[ 0,T\right] ;V\right) . \label{eq:alphaMHD:strong_sol_B} \end{equation} If $\boldsymbol{B}^{in}\in V$ and $\boldsymbol{u}^{in}\in D(A)$ then the solution is the strong solution \begin{align*} \boldsymbol{u}&\in C\left( \left[ 0,T\right] ;D(A)\right) \cap L^{2}( \left[ 0,T\right] ;D(A^{3/2})) , \\ \boldsymbol{B}&\in C\left( \left[ 0,T\right] ;V\right) \cap L^{2}\left( \left[ 0,T\right] ;D(A)\right). \end{align*} If, additionally, $\boldsymbol{u}^{in}\in D(A^{3/2})$ then \begin{equation*} \boldsymbol{u}\in C( \left[ 0,T\right] ;D(A^{3/2})) \cap L^{2}\left( \left[ 0,T\right] ;D(A^{2})\right) . \end{equation*} \end{theorem} \begin{remark} Following the techniques presented in \cite{a_FT89} (see also \cite{a_FT98}) we can show that for any $t>0$ the solution is analytic in time with values in a Gevrey class of regularity of spatial analytic functions. As a result, we have an exponentially fast convergence in the wave number $m$, as $m\to\infty$, in a certain sense, of the Galerkin approximation to the unique strong solution of \eqref{grp:alphaMHD:Projected}, see, for instance, \cite{a_DT93,a_JMT95}. This Gevrey regularity result also implies the exponential decay of large wavenumber modes in the dissipation range of turbulent flows \cite{a_DT95}. \end{remark} \begin{proof} We use the Galerkin estimates derived in the previous subsections and similar ideas and compactness theorems in the corresponding spaces to converge to the strong solution. 
For \eqref{eq:alphaMHD:strong_sol_u} and \eqref{eq:alphaMHD:strong_sol_B} we need the estimates \eqref{eq:alphaMHD:H3_estimate}, \eqref{eq:alphaMHD:integral_H4_estimate}, \eqref{eq:alphaMHD:H1_estimate}, \eqref{eq:alphaMHD:integral_H2_estimate} and \eqref{eq:alphaMHD:H2_estimate_of_u,H1_estimate_of_B}, \eqref{eq:alphaMHD:integral_H3_estimate}, \eqref{eq:alphaMHD:H1_estimate}, \eqref{eq:alphaMHD:integral_H2_estimate}. For $\boldsymbol{B}^{in}\in V, \ \boldsymbol{u}^{in}\in D(A)$ we use the estimate \eqref{eq:alphaMHD:H2 and integral_H3 estimates,uin_in_D(A),Bin_in_V} and if $ \boldsymbol{u}^{in}\in D(A^{3/2})$ we use \eqref{eq:alphaMHD:H3_estimate_uin_in_D(A3/2)}. Also, since the strong solutions are weak, by uniqueness of weak solutions the strong solutions are unique. \end{proof} \section{Convergence to the solutions of MHD equations as $\alpha \rightarrow 0^{+}$} We emphasize again that our point of view is that the alpha model is to be considered as a regularizing numerical scheme. The next theorem shows that using the \textit{a priori} estimates established previously, one can extract subsequences of the weak solutions of system \eqref{grp:alphaMHD:Projected}, which converge, as $\alpha \rightarrow 0^{+}$, (in the appropriate sense defined in the theorem) to a Leray-Hopf weak solution of the three-dimensional MHD equations on any time interval $\left[ 0,T \right] $. For the definition and existence of weak solutions of the 3D MHD equations, see, for instance, \cite{a_DL72} and \cite{a_ST83}. The notion of a Leray-Hopf weak solution of MHD that satisfies the energy inequality \eqref{eq:MHD:energyIneq} is inspired from a Leray-Hopf solution of NSE and formulated in the theorem. Also, if the initial data is smooth we prove that a subsequence of the strong solutions of the MHD-$\alpha$ equations converges to the unique strong solution of the 3D MHD on an interval $\left[0,T_*(u^{in}, B^{in})\right]$, which is the interval of existence of the strong solution. \begin{theorem} Let $T>0$, $\boldsymbol{u}^{in}\in V,\,\boldsymbol{B}^{in}\in H$ and denote by $\boldsymbol{u}_{\alpha },\,\boldsymbol{B}_{\alpha }$ and \mbox{$ \boldsymbol{v}_{\alpha }=\boldsymbol{u}_{\alpha }+\alpha ^{2}A\boldsymbol{u} _{\alpha }$} the weak solution of \eqref{grp:alphaMHD:Projected} on $ \left[ 0,T\right] $. Then there are subsequences $\boldsymbol{u}_{\alpha _{j}},\,\boldsymbol{v}_{\alpha _{j}},\,\boldsymbol{B}_{\alpha _{j}}$ and a pair of functions $\boldsymbol{v},\boldsymbol{B}\in L^{\infty }\left( \left[ 0,T\right] ;H\right) \cap L^{2}\left( \left[ 0,T\right] ;V\right) $ such that, as \mbox{$\alpha _{j}\rightarrow 0^{+}$}, \begin{enumerate} \item $\boldsymbol{u}_{\alpha _{j}}\rightarrow \boldsymbol{v}$ and $ \boldsymbol{B}_{\alpha _{j}}\rightarrow \boldsymbol{B}$ weakly in $ L^{2}\left( \left[ 0,T\right] ;V\right) $ and strongly in $L^{2}\left( \left[ 0,T\right] ;H\right) $, \item $\boldsymbol{v}_{\alpha _{j}}\rightarrow \boldsymbol{v}$ weakly in $ L^{2}\left( \left[ 0,T\right] ;H\right) $ and strongly in $L^{2}\left( \left[ 0,T\right] ;V^{\prime }\right) $ and \item $\boldsymbol{u}_{\alpha _{j}}\left( t\right) \rightarrow \boldsymbol{v}\left( t\right) $ and $\boldsymbol{B}_{\alpha _{j}}\left( t\right) \rightarrow \boldsymbol{B}\left( t\right) $ weakly in $H$ and uniformly on $\left[ 0,T\right] $. 
\end{enumerate} Furthermore, the pair $\boldsymbol{v},\boldsymbol{B}$ is a Leray-Hopf weak solution of the MHD equations \begin{align*} & \frac{d\boldsymbol{v}}{dt}+\tilde{B}\left( \boldsymbol{v},\boldsymbol{v} \right) +\nu A\boldsymbol{v}=B\left( \boldsymbol{B},\boldsymbol{B}\right) , \\ & \frac{d\boldsymbol{B}}{dt}+B\left( \boldsymbol{v},\boldsymbol{B}\right) -B\left( \boldsymbol{B},\boldsymbol{v}\right) +\eta A\boldsymbol{B}=0 \end{align*} with initial data $\boldsymbol{v} \left( 0\right) =\boldsymbol{u}^{in},\,\boldsymbol{B}\left( 0\right) = \boldsymbol{B}^{in}$, which satisfies the energy inequality \begin{equation}\label{eq:MHD:energyIneq} \lnorm{ \boldsymbol{v}\left( t\right) } ^{2}+\lnorm{ \boldsymbol{B} \left( t\right) } ^{2} +2 \int_{t_0}^{t}\left( \nu\vnorm{ \boldsymbol{v}(s)} ^{2}+\eta\vnorm{ \boldsymbol{B} (s)} ^{2}\right)ds \leq \lnorm{ \boldsymbol{v}\left( t_0\right) } ^{2}+\lnorm{ \boldsymbol{B}\left( t_0\right) } ^{2} \end{equation} for almost every $t_0$, $0\leq t_0 \leq T$ and all $t\in\left[t_0,T\right]$. \end{theorem} \begin{proof} From estimates \eqref{eq:alphaMHD:H1_estimate} and \eqref{eq:alphaMHD:integral_H2_estimate}, by passing to the limit (using the proof of Theorem \ref{thm:alphaMHD:weakSol}), we have that the solution of \eqref{grp:alphaMHD:Projected} satisfies \begin{equation*} {\lnorm{ \boldsymbol{u}_{\alpha }\left( t\right) } ^{2}+\alpha ^{2}\vnorm{ \boldsymbol{u}_{\alpha }\left( t\right) } ^{2}+\lnorm{ \boldsymbol{B}_{\alpha }\left( t\right) } ^{2}\leq k_{1}} \end{equation*} and \begin{equation*} 2 \int_{0}^{T}\left( \nu(\vnorm{ \boldsymbol{u}_{\alpha }(t)} ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}_{\alpha }(t)} ^{2})+\eta\vnorm{ \boldsymbol{B}_{\alpha } (t)} ^{2}\right)dt \leq k_{1}, \end{equation*} notice that since $\alpha \rightarrow 0^{+}$ we can assume that $0<\alpha \leq L$; consequently, we can bound the right hand side by $\tilde{k}_{1}:=\lnorm{ \boldsymbol{u}^{in}} ^{2}+L^{2}\vnorm{ \boldsymbol{u}^{in}} ^{2}+\lnorm{ \boldsymbol{B}^{in}} ^{2}$, which is independent of $\alpha $, therefore we can extract subsequences $\boldsymbol{u}_{\alpha _{j}},\,\boldsymbol{v}_{\alpha _{j}},\,\boldsymbol{B}_{\alpha _{j}}$, such that \begin{align*} {\boldsymbol{u}_{\alpha _{j}}}\rightarrow \boldsymbol{u\quad }& \text{weakly in }L^{2}\left( \left[ 0,T\right] ;V\right) , \\ {\boldsymbol{v}_{\alpha _{j}}}\rightarrow \boldsymbol{v\quad }& \text{weakly in }L^{2}\left( \left[ 0,T\right] ;H\right) \text{ and} \\ {\boldsymbol{B}_{\alpha _{j}}}\rightarrow \boldsymbol{B\quad }& \text{weakly in }L^{2}\left( \left[ 0,T\right] ;V\right) , \end{align*} as $\alpha _{j}\rightarrow 0^{+}$. Now we establish uniform estimates, independent of $\alpha $, for ${d\boldsymbol{B} _{\alpha }}/{dt}$ and ${d\boldsymbol{u}_{\alpha }}/{dt}$. 
From \eqref{eq:alphaMHD:Projected:magField} we have \begin{equation*} \norm{ A^{-1}\frac{d\boldsymbol{B}_{\alpha }}{dt}} \leq \norm{ A^{-1}B\left( \boldsymbol{u}_{\alpha },\boldsymbol{B}_{\alpha }\right) } +\norm{ A^{-1}B\left( \boldsymbol{B}_{\alpha },\boldsymbol{u}_{\alpha }\right) } +\eta \norm{ \boldsymbol{B}_{\alpha }} , \end{equation*} notice that by \eqref{eq:BandBtilde_estimate_H_V_D(A)} \begin{equation*} \lnorm{ A^{-1}B\left( \boldsymbol{u}_{\alpha },\boldsymbol{B}_{\alpha }\right) } \leq c\lambda _{1}^{-1/4}\lnorm{ \boldsymbol{u} _{\alpha }} \vnorm{ \boldsymbol{\boldsymbol{B}}_{\alpha }} , \end{equation*} hence \begin{align*} \norm{ B\left( \boldsymbol{u}_{\alpha },\boldsymbol{B}_{\alpha }\right) } _{L^{2}\left( \left[ 0,T\right] ;D\left( A\right) ^{\prime }\right) }^{2}& \leq c\lambda _{1}^{-1/2}{\int_{0}^{T}}\lnorm{ \boldsymbol{u}_{\alpha }\left( t\right) } ^{2}\vnorm{ \boldsymbol{\boldsymbol{B}}_{\alpha }\left( t\right) } ^{2}dt \\ & \leq c\lambda _{1}^{-1/2}\tilde{k}_{1}^{2} \eta^{-1} \end{align*} and similarly \begin{equation*} \norm{ B\left( \boldsymbol{B}_{\alpha },\boldsymbol{u}_{\alpha }\right) } _{L^{2}\left( \left[ 0,T\right] ;D\left( A\right) ^{\prime }\right) }^{2}\leq c\lambda _{1}^{-1/2}\tilde{k}_{1}^{2}\eta ^{-1}. \end{equation*} Hence \begin{equation*} \norm{ \frac{d\boldsymbol{B}_{\alpha }}{dt}} _{L^{2}\left( \left[ 0,T\right] ;D\left( A\right) ^{\prime }\right) }\leq K, \end{equation*} where $K$ is independent of $\alpha $. From \eqref{eq:alphaMHD:Projected:velocity} we have \begin{equation*} \norm{ A^{-1}\frac{d\boldsymbol{u}_{\alpha }}{dt}} \leq \norm{ A^{-1}\left( I+\alpha ^{2}A\right) ^{-1}\tilde{B}\left( \boldsymbol{u}_{\alpha },\boldsymbol{v}_{\alpha }\right) } +\nu \norm{ \boldsymbol{u}_{\alpha }} +\norm{ A^{-1}\left( I+\alpha ^{2}A\right) ^{-1}B\left( \boldsymbol{B}_{\alpha },\boldsymbol{B}_{\alpha }\right) } , \end{equation*} and using \eqref{eq:Btilde_estimate_V_H_D(A)_short} \begin{align*} \lnorm{ A^{-1}\left( I+\alpha ^{2}A\right) ^{-1}\tilde{B}\left( \boldsymbol{u}_{\alpha },\boldsymbol{v}_{\alpha }\right) } & \leq \lnorm{ A^{-1}\tilde{B}\left( \boldsymbol{u}_{\alpha },\boldsymbol{v} _{\alpha }\right) } \\ & \leq c\lambda _{1}^{-1/4}\vnorm{ \boldsymbol{u}_{\alpha }} \lnorm{ \boldsymbol{v}_{\alpha }} \\ & \leq c\lambda _{1}^{-1/4}\vnorm{ \boldsymbol{u}_{\alpha }} \left( \lnorm{ \boldsymbol{u}_{\alpha }} +\alpha ^{2}\lnorm{ A\boldsymbol{u}_{\alpha }} \right) , \end{align*} thus \begin{equation*} \lnorm{ A^{-1}\left( I+\alpha ^{2}A\right) ^{-1}\tilde{B}\left( \boldsymbol{u}_{\alpha },\boldsymbol{v}_{\alpha }\right) } ^{2}\leq 2c\lambda _{1}^{-1/2}{\tilde{k}_{1}}\left( \vnorm{ \boldsymbol{u} _{\alpha }} ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}_{\alpha }} ^{2}\right) , \end{equation*} and \begin{equation*} {\int_{0}^{T}}\lnorm{ A^{-1}\left( I+\alpha ^{2}A\right) ^{-1}\tilde{B} \left( \boldsymbol{u}_{\alpha }\left( t\right) ,\boldsymbol{v}_{\alpha }\left( t\right) \right) } ^{2}dt\leq c\lambda _{1}^{-1/2}{ \tilde{k}_{1}^{2}}\nu ^{-1}. \end{equation*} Also by \eqref{eq:BandBtilde_estimate_H_V_D(A)} \begin{equation*} \norm{ B\left( \boldsymbol{B}_{\alpha },\boldsymbol{B}_{\alpha }\right) } _{L^{2}\left( \left[ 0,T\right] ;D\left( A\right) ^{\prime }\right) }^{2}\leq c\lambda _{1}^{-1/2}{\tilde{k}_{1}^{2}\eta }^{-1}. \end{equation*} As a result we have \begin{equation*} \norm{ \frac{d\boldsymbol{u}_{\alpha }}{dt}} _{L^{2}\left( \left[ 0,T\right] ;D\left( A\right) ^{\prime }\right) }\leq K.
\end{equation*} Using Aubin's Compactness Lemma (see, for example, \cite[Lemma 8.4]{b_CF88}) we can extract subsequences of $\boldsymbol{u}_{\alpha _{j}}$ and $ \boldsymbol{B}_{\alpha _{j}}$, which we relabel by $\boldsymbol{u}_{\alpha _{j}}$ and $\boldsymbol{B}_{\alpha _{j}}$ respectively, such that ${ \boldsymbol{u}_{\alpha _{j}}}\rightarrow \boldsymbol{u}$ and ${\boldsymbol{B} _{\alpha _{j}}}\rightarrow \boldsymbol{B}$ strongly in $L^{2}\left( \left[ 0,T\right] ;H\right) $ and strongly in $C\left( \left[ 0,T\right] ;D\left( A\right) ^{\prime }\right) $, as $\alpha _{j}\rightarrow 0^{+}$. Observing that \begin{equation*} \norm{ \boldsymbol{v}_{\alpha _{j}}-\boldsymbol{u}_{\alpha _{j}}} _{L^{2}\left( \left[ 0,T\right] ;V^{\prime }\right) }=\alpha _{j}^{2}\left( {\int_{0}^{T}}\vnorm{ \boldsymbol{u}_{\alpha _{j}}\left( t\right) } ^{2}dt\right) ^{1/2}\leq \alpha _{j}^{2}\left( 2\nu \right) ^{-1/2}{\tilde{k}_{1}^{1/2}}, \end{equation*} we obtain that $\boldsymbol{v}_{\alpha _{j}}\rightarrow \boldsymbol{u}$ in $ L^{2}\left( \left[ 0,T\right] ;V^{\prime }\right) $, as $\alpha _{j}\rightarrow 0^{+}$; and hence also that $\boldsymbol{u}\left( t\right) = \boldsymbol{v}\left( t\right) $ almost everywhere on $\left[ 0,T\right] $. Now, following the lines of the proof of Theorem \ref{thm:alphaMHD:weakSol}, we can extract further subsequences (which we relabel by $\boldsymbol{u} _{\alpha _{j}},\boldsymbol{v}_{\alpha _{j}}$ and $\boldsymbol{B}_{\alpha _{j}}$) and show that as $\alpha _{j}\rightarrow 0^{+}$, \begin{equation*} \tilde{B}\left( \boldsymbol{u}_{\alpha _{j}},\boldsymbol{v}_{\alpha _{j}}\right) \rightarrow \tilde{B}\left( \boldsymbol{v},\boldsymbol{v} \right) =B\left( \boldsymbol{v},\boldsymbol{v}\right) \end{equation*} weakly in $L^{1}\left( \left[ 0,T\right] ;D\left( A\right) ^{\prime }\right) $, and \begin{align*} B\left( \boldsymbol{B}_{\alpha _{j}},\boldsymbol{B}_{\alpha _{j}}\right) \rightarrow B\left( \boldsymbol{B},\boldsymbol{B}\right) ,\, B\left( \boldsymbol{u}_{\alpha _{j}},\boldsymbol{B}_{\alpha _{j}}\right) \rightarrow B\left( \boldsymbol{v},\boldsymbol{B}\right) ,\, B\left( \boldsymbol{B}_{\alpha _{j}},\boldsymbol{u}_{\alpha _{j}}\right) \rightarrow B\left( \boldsymbol{B},\boldsymbol{v}\right) \end{align*} weakly in $L^{1}\left( \left[ 0,T\right] ;V^{\prime }\right) $.
Hence, we can pass to the limit (in the interpretation given by \eqref{grp:alphaMHD:weakSol_integralFormulation}) in \begin{align*} & \left\langle \frac{d}{dt}\boldsymbol{\boldsymbol{v}}_{\alpha _{j}}, \boldsymbol{w}\right\rangle _{D\left( A\right) ^{\prime }}+\left\langle \tilde{B}\left( \boldsymbol{u}_{\alpha _{j}},\boldsymbol{\boldsymbol{v}} _{\alpha _{j}}\right) ,\boldsymbol{w}\right\rangle _{D\left( A\right)^{\prime }}+\nu \left( \boldsymbol{v}_{\alpha _{j}},A\boldsymbol{w}\right) =\left\langle B\left( \boldsymbol{B}_{\alpha _{j}},\boldsymbol{B}_{\alpha _{j}}\right) , \boldsymbol{w}\right\rangle _{V^{\prime }}, \\ & \left\langle \frac{d}{dt}\boldsymbol{B}_{\alpha _{j}},\boldsymbol{\xi } \right\rangle _{V^{\prime }}+\left( B\left( \boldsymbol{u}_{\alpha _{j}}, \boldsymbol{B}_{\alpha _{j}}\right) ,\boldsymbol{\xi }\right) -\left( B\left( \boldsymbol{B}_{\alpha _{j}},\boldsymbol{u}_{\alpha _{j}}\right) , \boldsymbol{\xi }\right) +\eta \left( \left( \boldsymbol{B}_{\alpha _{j}}, \boldsymbol{\xi }\right) \right) =0, \end{align*} $\boldsymbol{w}\in D\left( A\right) ,\ \boldsymbol{\xi }\in V$ and we obtain that \begin{align*} & \left\langle \frac{d}{dt}\boldsymbol{\boldsymbol{v}},\boldsymbol{w} \right\rangle _{D\left( A\right) ^{\prime} }+\left\langle B\left( \boldsymbol{v} ,\boldsymbol{\boldsymbol{v}}\right) ,\boldsymbol{w}\right\rangle _{D(A) ^{\prime} }+\nu \left( \left( \boldsymbol{v},\boldsymbol{w}\right) \right) =\left\langle B\left( \boldsymbol{B},\boldsymbol{B}\right) , \boldsymbol{w}\right\rangle _{V ^{\prime} }, \\ & \left\langle \frac{d}{dt}\boldsymbol{B},\boldsymbol{\xi }\right\rangle _{V^{\prime }}+\left( B\left( \boldsymbol{v},\boldsymbol{B}\right) , \boldsymbol{\xi }\right) -\left( B\left( \boldsymbol{B},\boldsymbol{v} \right) ,\boldsymbol{\xi }\right) +\eta \left( \left( \boldsymbol{B}, \boldsymbol{\xi }\right) \right) =0, \end{align*} for every $\boldsymbol{w}\in D\left( A\right) ,\ \boldsymbol{\xi }\in V$ and for almost every $t\in \left[ 0,T\right] $. Now, since $\boldsymbol{v}\in L^{2}\left( \left[ 0,T\right] ;V\right) $, one can show that $B\left( \boldsymbol{v},\boldsymbol{\boldsymbol{v}}\right) \in L^{1}\left( \left[ 0,T\right] ;V^{\prime }\right) $ and then also that \mbox{$ \left({d}/{dt}\right)\boldsymbol{\boldsymbol{v}}\in L^{1}\left( \left[ 0,T\right] ;V^{\prime }\right) $}, and since $w\in D\left( A\right) $, which is dense in $V$, we obtain the weak formulation of the MHD equations \begin{align*} & \left\langle \frac{d}{dt}{\boldsymbol{v}},\boldsymbol{w} \right\rangle _{V^{\prime }}+\left\langle B\left( \boldsymbol{v},\boldsymbol{ \boldsymbol{v}}\right) ,\boldsymbol{w}\right\rangle _{V^{\prime }}+\nu \left( \left( \boldsymbol{v},\boldsymbol{w}\right) \right) =\left\langle B\left( \boldsymbol{B},\boldsymbol{B}\right) ,\boldsymbol{w}\right\rangle _{V^{\prime }}, \\ & \left\langle \frac{d}{dt}\boldsymbol{B},\boldsymbol{\xi }\right\rangle _{V^{\prime }}+\left( B\left( \boldsymbol{v},\boldsymbol{B}\right) , \boldsymbol{\xi }\right) -\left( B\left( \boldsymbol{B},\boldsymbol{v} \right) ,\boldsymbol{\xi }\right) +\eta \left( \left( \boldsymbol{B}, \boldsymbol{\xi }\right) \right) =0, \end{align*} for every $\boldsymbol{w},\ \boldsymbol{\xi }\in V$ and for almost every $ t\in \left[ 0,T\right] $. 
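Concerning the claim that $B\left( \boldsymbol{v},\boldsymbol{v}\right) \in L^{1}\left( \left[ 0,T\right] ;V^{\prime }\right) $, one standard way to see it uses the familiar three-dimensional estimate $\norm{ B\left( \boldsymbol{v},\boldsymbol{v}\right) } _{V^{\prime }}\leq c\lnorm{ \boldsymbol{v}} ^{1/4}\vnorm{ \boldsymbol{v}} ^{7/4}$ (with $c=c\left( \lambda _{1}\right) $) together with H\"{o}lder's inequality in time:
\begin{equation*}
\int_{0}^{T}\norm{ B\left( \boldsymbol{v}\left( t\right) ,\boldsymbol{v}\left( t\right) \right) } _{V^{\prime }}dt\leq c\norm{ \boldsymbol{v}} _{L^{\infty }\left( \left[ 0,T\right] ;H\right) }^{1/4}T^{1/8}\left( \int_{0}^{T}\vnorm{ \boldsymbol{v}\left( t\right) } ^{2}dt\right) ^{7/8}<\infty ,
\end{equation*}
since $\boldsymbol{v}\in L^{\infty }\left( \left[ 0,T\right] ;H\right) \cap L^{2}\left( \left[ 0,T\right] ;V\right) $; the same estimate applies to $B\left( \boldsymbol{B},\boldsymbol{B}\right) $, and the bound $\frac{d}{dt}\boldsymbol{v}\in L^{1}\left( \left[ 0,T\right] ;V^{\prime }\right) $ then follows from the equation.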
We notice, that every weak solution of \eqref{grp:alphaMHD:Projected} satisfies the energy equality \eqref{eq:alphaMHD:weakSol_energyEquality} and hence the energy inequality \eqref{eq:MHD:energyIneq} follows by passing to the $\liminf$ as $\alpha\to 0^{+}$, using the fact that if $x_\alpha\to x$ weakly in a Hilbert space $X$, then $\|x\| \leq \liminf\|x_\alpha\|$. \end{proof} \begin{theorem} Let $T>0$, $\boldsymbol{u}^{in}\in D(A),\,\boldsymbol{B}^{in}\in V$ and denote by $\boldsymbol{u}_{\alpha },\,\boldsymbol{B}_{\alpha }$ and \mbox{$ \boldsymbol{v}_{\alpha }=\boldsymbol{u}_{\alpha }+\alpha ^{2}A\boldsymbol{u} _{\alpha }$} the strong solution of \eqref{grp:alphaMHD:Projected} on $ \left[ 0,T\right] $. Then there exist $T_*=T_*(\Omega, \nu, \eta, u^{in}, B^{in})$, $0<T_*\leq T$, subsequences $\boldsymbol{u}_{\alpha _{j}},\,\boldsymbol{v}_{\alpha _{j}},\,\boldsymbol{B}_{\alpha _{j}}$ and a pair of functions $\boldsymbol{v},\boldsymbol{B}\in L^{\infty }\left( \left[ 0,T_*\right] ;V\right) \cap L^{2}\left( \left[ 0,T_*\right] ;D(A)\right) $ such that, as \mbox{$\alpha _{j}\rightarrow 0^{+}$}, \begin{enumerate} \item $\boldsymbol{u}_{\alpha _{j}}\rightarrow \boldsymbol{v}$ and $ \boldsymbol{B}_{\alpha _{j}}\rightarrow \boldsymbol{B}$ weakly in $ L^{2}\left( \left[ 0,T_*\right] ;D(A)\right) $ and strongly in $L^{2}\left( \left[ 0,T_*\right] ;V\right) $, \item $\boldsymbol{v}_{\alpha _{j}}\rightarrow \boldsymbol{v}$ weakly in $ L^{2}\left( \left[ 0,T_*\right] ;V\right) $ and strongly in $L^{2}\left( \left[ 0,T_*\right] ;H\right) $ and \item $\boldsymbol{u}_{\alpha _{j}}\left( t\right) \rightarrow \boldsymbol{v}\left( t\right) $ and $\boldsymbol{B}_{\alpha _{j}}\left( t\right) \rightarrow \boldsymbol{B} \left( t\right)$ weakly in $V$ and uniformly on $\left[ 0,T_*\right] $. \end{enumerate} Furthermore, the pair $\boldsymbol{v},\boldsymbol{B}$ is the unique strong solution of the 3D MHD equations on $\left[0,T_*\right]$ with initial data $\boldsymbol{v} \left( 0\right) =\boldsymbol{u}^{in},\,\boldsymbol{B}\left( 0\right) = \boldsymbol{B}^{in}$. The strong solution of the 3D MHD equations satisfies the energy equality \begin{equation*} \lnorm{ \boldsymbol{v}\left( t\right) } ^{2}+\lnorm{ \boldsymbol{B} \left( t\right) } ^{2} +2 \int_{t_0}^{t}\left( \nu\vnorm{ \boldsymbol{v}(s)} ^{2}+\eta\vnorm{ \boldsymbol{B} (s)} ^{2}\right)ds = \lnorm{ \boldsymbol{v}\left( t_0\right) } ^{2}+\lnorm{ \boldsymbol{B}\left( t_0\right) } ^{2},\qquad 0\leq t_0\leq t \leq T_*. \end{equation*} \end{theorem} \begin{proof} To prove the theorem we need to show that there exists $T_*$ such that we have a uniform (independent of $\alpha $) bound on \begin{equation} {\vnorm{ \boldsymbol{u}_{\alpha }\left( t\right) } ^{2}+\alpha ^{2}}\lnorm{ {A\boldsymbol{u}_{\alpha }\left( t\right) }} { ^{2}+\vnorm{ \boldsymbol{B}_{\alpha }(t)} ^{2}} \label{eq:cnvgStrSol:bnd1} \end{equation} and \begin{equation} \int_{0}^{T_*}\left( \nu (\lnorm{ A\boldsymbol{u}_{\alpha }(t)} ^{2}+\alpha ^{2}\lnorm{ A^{3/2}\boldsymbol{u}_{\alpha }(t)} ^{2})+\eta \lnorm{ A\boldsymbol{B}_{\alpha }(t)} ^{2}\right) dt \label{eq:cnvgStrSol:bnd2} \end{equation} in $\left[ 0,T_*\right] $. Then we can continue similarly to the proof of the previous theorem, appropriately smoothing the data and replacing $T$ by $ T_*$. Next we derive the formal estimates on \eqref{eq:cnvgStrSol:bnd1} and \eqref{eq:cnvgStrSol:bnd2} that can be proved rigorously using the Galerkin approximation scheme and then passing to the limit using the proof of Theorem \ref{thm:alphaMHD:weakSol}. 
Let us recall \eqref{eq:alphaMHD:sum:inner_product_A}. By \eqref{eq:BandBtilde_estimate_DA_V_H} and several applications of Young's inequality we bound \begin{align*} \abs{ \left( \tilde{B}\left( \boldsymbol{u}_{\alpha},\boldsymbol{v}_{\alpha}\right) ,A \boldsymbol{u}_{\alpha}\right) } & \leq c\vnorm{ \boldsymbol{u}_{\alpha} } ^{1/2}\lnorm{ A\boldsymbol{u}_{\alpha}} ^{1/2}(\vnorm{ \boldsymbol{u}_{\alpha}} +\alpha ^{2}\lnorm{ A^{3/2}\boldsymbol{u}_{\alpha} } )\lnorm{ A\boldsymbol{u}_{\alpha}} \\ & \leq c\nu ^{-3}\vnorm{ \boldsymbol{u}_{\alpha}} ^{6}+\nu ^{-3}\alpha ^{6}\lnorm{ A\boldsymbol{u}_{\alpha}} ^{6}+\frac{\nu }{4}\lnorm{ A \boldsymbol{u}_{\alpha}} ^{2}+\frac{\nu }{2}\alpha ^{2}\lnorm{ A^{3/2} \boldsymbol{u}_{\alpha}} ^{2}. \end{align*} By \eqref{eq:BandBtilde_estimate_DA_V_H} \begin{align*} \abs{ \left( B\left( \boldsymbol{B}_{\alpha},\boldsymbol{B}_{\alpha}\right) ,A \boldsymbol{u}_{\alpha}\right) } & \leq c\vnorm{ \boldsymbol{B}_{\alpha} } ^{1/2}\lnorm{ A\boldsymbol{B}_{\alpha}} ^{1/2}\vnorm{ \boldsymbol{B}_{\alpha}} \lnorm{ A\boldsymbol{u}_{\alpha}} \\ & \leq c\nu ^{-2}\eta ^{-1}\vnorm{ \boldsymbol{B}_{\alpha}} ^{6}+\frac{ \eta }{4}\lnorm{ A\boldsymbol{B}_{\alpha}} ^{2}+\frac{\nu }{4}\lnorm{ A\boldsymbol{u}_{\alpha}} ^{2}. \end{align*} By \eqref{eq:BandBtilde_estimate_DA_V_H} we also have \begin{align*} \abs{ \left( B\left( \boldsymbol{B}_{\alpha},\boldsymbol{u}_{\alpha}\right) ,A \boldsymbol{B}_{\alpha}\right) } & \leq c\vnorm{ \boldsymbol{B}_{\alpha} } ^{1/2}\vnorm{ \boldsymbol{u}_{\alpha}} \lnorm{ A \boldsymbol{B}_{\alpha}} ^{3/2} \\ & \leq c\eta ^{-3}\vnorm{ \boldsymbol{B}_{\alpha}} ^{6}+\eta ^{-3}\vnorm{ \boldsymbol{u}_{\alpha}} ^{6}+\frac{\eta }{8}\lnorm{ A \boldsymbol{B}_{\alpha}} ^{2} \end{align*} and by \eqref{eq:B_estimate_V_DA_H} \begin{equation*} \abs{ \left( B\left( \boldsymbol{u}_{\alpha},\boldsymbol{B}_{\alpha}\right) ,A \boldsymbol{B}_{\alpha}\right) } \leq c\vnorm{ \boldsymbol{B}_{\alpha} } ^{1/2}\vnorm{ \boldsymbol{u}_{\alpha}} \lnorm{ A \boldsymbol{B}_{\alpha}} ^{3/2}. \end{equation*} Hence by \eqref{eq:alphaMHD:sum:inner_product_A} and the above estimates we have \begin{equation} \frac{d}{dt}\left( \vnorm{ \boldsymbol{u}_{\alpha}} ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}_{\alpha}} ^{2}+\vnorm{ \boldsymbol{B}_{\alpha} } ^{2}\right) +\nu \left( |{A\boldsymbol{u}_{\alpha}|}^{2}+\alpha ^{2}|{ A^{3/2}\boldsymbol{u}_{\alpha}|}^{2}\right) +\eta |{A\boldsymbol{B}_{\alpha}|}^{2}\leq c\mu ^{-3}\left( \vnorm{ \boldsymbol{u}_{\alpha}} ^{6}+\alpha ^{6}\lnorm{ A \boldsymbol{u}_{\alpha}} ^{6}+\vnorm{ \boldsymbol{B}_{\alpha}} ^{6}\right) \label{eq:alphaMHD:ConvStrSols:H2_inequality} \end{equation} Denote \begin{equation*} \boldsymbol{y}=\vnorm{ \boldsymbol{u}_{\alpha}} ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}_{\alpha}} ^{2}+\vnorm{ \boldsymbol{B}_{\alpha} } ^{2}. \end{equation*} Now, if $y(0)=0$, that is $\boldsymbol{u}^{in}=\boldsymbol{B}^{in}=0$, then the solution is steady $\boldsymbol{u}_{\alpha}(t)\equiv 0$, $\boldsymbol{v}_{\alpha}(t)\equiv 0$, $\boldsymbol{B}_{\alpha}(t)\equiv 0$ and $\boldsymbol{v}(t)\equiv 0$, $\boldsymbol{B}(t)\equiv 0$ exists for all $t\geq 0$. Otherwise, from \eqref{eq:alphaMHD:ConvStrSols:H2_inequality} we have \begin{equation*} \frac{d}{dt}\boldsymbol{y}\leq c\mu ^{-3}\boldsymbol{y}^{3} \end{equation*} and thus \begin{equation*} \boldsymbol{y}\left( t\right) \leq 2\boldsymbol{y}\left( 0\right) \end{equation*} for $0\leq t\leq \frac{3}{8}c^{3}\mu ^{3}\boldsymbol{y}\left( 0\right) ^{-2}$. 
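For completeness, we sketch the elementary comparison argument behind this bound, with $c$ denoting the generic constant appearing in \eqref{eq:alphaMHD:ConvStrSols:H2_inequality}. As long as $\boldsymbol{y}\left( t\right) >0$ we have
\begin{equation*}
\frac{d}{dt}\boldsymbol{y}^{-2}=-2\boldsymbol{y}^{-3}\frac{d}{dt}\boldsymbol{y}\geq -2c\mu ^{-3},
\end{equation*}
hence $\boldsymbol{y}\left( t\right) ^{-2}\geq \boldsymbol{y}\left( 0\right) ^{-2}-2c\mu ^{-3}t\geq \frac{1}{4}\boldsymbol{y}\left( 0\right) ^{-2}$, and therefore $\boldsymbol{y}\left( t\right) \leq 2\boldsymbol{y}\left( 0\right) $, for all $t$ satisfying $2c\mu ^{-3}t\leq \frac{3}{4}\boldsymbol{y}\left( 0\right) ^{-2}$; in particular, the length of the interval on which the bound holds depends only on $\mu $, on $\boldsymbol{y}\left( 0\right) $ and on the generic constant.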
We conclude that \begin{equation*} {\vnorm{ \boldsymbol{u}_{\alpha }\left( t\right) } ^{2}+\alpha ^{2}}\lnorm{ {A\boldsymbol{u}_{\alpha }\left( t\right) }} { ^{2}+\vnorm{ \boldsymbol{B}_{\alpha }(t)} ^{2}}\leq 2\left( \vnorm{ \boldsymbol{u}^{in}} ^{2}+\alpha ^{2}\lnorm{ A \boldsymbol{u}^{in}} ^{2}+\vnorm{ \boldsymbol{B} ^{in}} ^{2}\right) \end{equation*} for $0\leq t\leq T_*:=\min \left( T,\frac{3}{8}c^{3}\mu ^{3}\boldsymbol{y} \left( 0\right) ^{-2}\right) $. Also, by integrating \eqref{eq:alphaMHD:ConvStrSols:H2_inequality} over $(0,T_*)$, we obtain \begin{multline*} \int_{0}^{T_*}\left( \nu (\lnorm{ A\boldsymbol{u}_{\alpha }(t)} ^{2}+\alpha ^{2}\lnorm{ A^{3/2}\boldsymbol{u}_{\alpha }(t)} ^{2})+\eta \lnorm{ A\boldsymbol{B}_{\alpha }(t)} ^{2}\right) dt \\ \leq \vnorm{ \boldsymbol{u}^{in}} ^{2}+\alpha ^{2}\lnorm{ A \boldsymbol{u}^{in}} ^{2}+\vnorm{ \boldsymbol{B} ^{in}} ^{2}+c\mu ^{-3}T_*\left( \vnorm{ \boldsymbol{u} ^{in}} ^{2}+\alpha ^{2}\lnorm{ A\boldsymbol{u}^{in}} ^{2}+\vnorm{ \boldsymbol{B}^{in}} ^{2}\right) ^{3}. \end{multline*} Assuming that $0<\alpha \leq L$, the bounds are independent of $\alpha $. \end{proof} \section{Discussion} We proved the well-posedness of the three-dimensional MHD-$\alpha$ model \eqref{grp:alphaMHD_intro} subject to periodic boundary conditions. This model modifies the nonlinearity of the MHD equations \eqref{grp:MHD} without enhancing dissipation. We showed that the model has a unique global weak solution, which is a strong solution when the initial data are smooth. Moreover, there is a subsequence of weak solutions of the MHD-$\alpha $ equations that converges, as \mbox{$\alpha \rightarrow 0^{+}$} (in the appropriate sense), to a Leray-Hopf weak solution of the MHD equations \eqref{grp:MHD}, satisfying the energy inequality \eqref{eq:MHD:energyIneq}, on any time interval $[0,T]$. Furthermore, if the initial data is smooth, a subsequence of solutions converges, on a short interval of time, to the unique strong solution of the MHD equations on that interval. These properties are essential for the $\alpha$ models to be regarded as regularizing numerical schemes. In a follow-up paper, we intend to derive error estimates, in terms of $m$ and $\alpha$, for the distance between the solution of the Galerkin approximation of the MHD-$\alpha$ model and the exact strong solution of the MHD equations, for smooth initial data. There are many different $\alpha$ models. For example, global well-posedness can also be shown for the 3D Modified-Leray-$\alpha $-MHD model \eqref{grp:ML_alpha_MHD_intro}. However, at the moment we are unable to find a conserved quantity in the ideal version of \eqref{grp:ML_alpha_MHD_intro} that can be identified with the cross helicity, in contrast to the MHD-$\alpha$ model \eqref{grp:alphaMHD_intro}, whose ideal invariants can be identified with the three invariants of the original 3D MHD equations. \end{document}
arXiv
Two chords, $AB$ and $CD,$ meet inside a circle at $P.$ If $AP = CP = 7,$ then what is $\frac{BP}{DP}$? By the Power of a Point formula, we know that $AP \cdot BP = CP \cdot DP.$ Since $AP = CP,$ we have that $BP = DP$ as well, so $\frac{BP}{DP} = \boxed{1}.$
Math Dataset